Changes v14->v15:
-----------------
This is back after a bit of a hiatus. In the last attempt to do this in
the beginning of 2023, I think we reached consensus on a few things --
the use case, the vDSO implementation and semantics, its integration
with libc, the test code and documentation, and so forth. It was
basically "ready to go". Almost. But there was a lingering issue that
bogged this down, which is that it demanded some new mm semantics that
weren't very popular.
In particular, the series from last year made use of the x86 instruction
decoder to just skip over faulting instructions. I still think this is
nifty, but it's not actually essential for the semantics needed, and I
can understand why this was by far the largest objection. So all of that
is dropped, which simplifies quite a bit.
In another avenue of the mm discussion, Andy had mentioned using
_install_special_mapping() instead of the VM_DROPPABLE work, and I spent
a long while looking into this, attempting several times to code up a
working implementation that used it. But the semantics really just
weren't possible without adding hooks to lots of other core code, and
duplicating a lot of code that really ought not to be duplicated. So I've kept the
VM_DROPPABLE patch here, but because the x86 instruction decoding stuff
has been removed, that patch is actually a lot smaller and simpler and I
don't think should be too controversial. In terms of actual C code, it
only adds around ten lines, and is compact enough that you can just grep
for VM_DROPPABLE to see the whole thing.
The original cover letter is reproduced below. I'm eager to finally get
this patchset moving, and sorry for the delay in getting this next
revision out.
Assuming this goes well, the plan would be to take this through my
random.git tree for 6.11. And if the mm part looks fine, I'll get this
cooking in linux-next ASAP.
Thanks ahead of time for taking a look at it.
Changes v15->v16:
- DavidH pointed out a missing swap edge case in 1/5.
- Mostly just a resend because I forgot --cc-cover, and sent it during
the merge window.
--------------
Two statements:
1) Userspace wants faster cryptographically secure random numbers of
arbitrary size, big or small.
2) Userspace is currently unable to safely roll its own RNG with the
same security profile as getrandom().
Statement (1) has been debated for years, with arguments ranging from
"we need faster cryptographically secure card shuffling!" to "the only
things that actually need good randomness are keys, which are few and
far between" to "actually, TLS CBC nonces are frequent" and so on. I
don't intend to wade into that debate substantially, except to note that
recently glibc added arc4random(), whose goal is to return a
cryptographically secure uint32_t, and there are real user reports of it
being too slow. So here we are.
Statement (2) is more interesting. The kernel is the nexus of all
entropic inputs that influence the RNG. It is in the best position, and
probably the only position, to decide anything at all about the current
state of the RNG and of its entropy. One of the things it uniquely knows
about is when reseeding is necessary.
For example, when a virtual machine is forked, restored, or duplicated,
it's imperative that the RNG doesn't generate the same outputs. For this
reason, there's a small protocol between hypervisors and the kernel that
indicates this has happened, alongside some ID, which the RNG uses to
immediately reseed, so as not to return the same numbers. Were userspace
to expand a getrandom() seed taken at time T1 for use over the next
hour, and the virtual machine forked at some time T2 within that hour,
userspace would continue to provide the same numbers to two (or more)
different virtual machines, resulting in potential cryptographic
catastrophe. Something similar
happens on resuming from hibernation (or even suspend), with various
compromise scenarios there in mind.
There's a more general reason why userspace rolling its own RNG from a
getrandom() seed is fraught. There's a lot of attention paid to this
particular Linuxism we have of the RNG being initialized and thus
non-blocking or uninitialized and thus blocking until it is initialized.
These are our Two Big States that many hold to be the holy
differentiating factor between safe and not safe, between
cryptographically secure and garbage. The fact is, however, that the
distinction between these two states is a hand-wavy wishy-washy inexact
approximation. Outside of a few exceptional cases (e.g. a HW RNG is
available), we actually don't really ever know with any rigor at all
when the RNG is safe and ready (nor when it's compromised). We do the
best we can to "estimate" it, but entropy estimation is fundamentally
impossible in the general case. So really, we're just doing guesswork,
and hoping it's good and conservative enough. Let's then assume that
there's always some potential error involved in this differentiator.
In fact, under the surface, the RNG is engineered around a different
principle, and that is trying to *use* new entropic inputs regularly and
at the right specific moments in time. For example, close to boot time,
the RNG reseeds itself more often than later. At certain events, like VM
fork, the RNG reseeds itself immediately. The various heuristics for
when the RNG will use new entropy, and how often, are really a core
aspect of what the RNG has some potential to do decently enough (and
something that will probably continue to improve in the future from
random.c's present set of algorithms). So, in your mind, put away the
mental attachment to the Two Big States, which represent an
approximation with a potential margin of error. Instead keep in mind
that the RNG's primary operating heuristic is how often and exactly
when it's going to reseed.
So, if userspace takes a seed from getrandom() at point T1, and uses it
for the next hour (or N megabytes or some other meaningless metric),
during that time, potential errors in the Two Big States approximation
are amplified. During that time potential reseeds are being lost,
forgotten, not reflected in the output stream. That's not good.
The simplest statement you could make is that userspace RNGs that expand
a getrandom() seed at some point T1 are nearly always *worse*, in some
way, than just calling getrandom() every time a random number is
desired.
For those reasons, after some discussion on libc-alpha, glibc's
arc4random() now just calls getrandom() on each invocation. That's
trivially safe, and gives us latitude to later make the safe thing
faster, at our leisure, without becoming unsafe. Card shuffling isn't
particularly fast, however.
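For illustration, that trivially safe construction amounts to something
like the following sketch (not glibc's actual code, which also handles
cancellation and fallback details):

#include <stdint.h>
#include <stdlib.h>
#include <sys/random.h>

/* Sketch of the "just call getrandom() on each invocation" approach.
   Safe, but it pays a full syscall round trip for every uint32_t. */
uint32_t arc4random_sketch(void)
{
        uint32_t ret;

        if (getrandom(&ret, sizeof(ret), 0) != sizeof(ret))
                abort(); /* e.g. EINTR while blocking for RNG init */
        return ret;
}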
How do we rectify this? By putting a safe implementation of getrandom()
in the vDSO, which has access to whatever information a
particular iteration of random.c is using to make its decisions. I use
that careful language of "particular iteration of random.c", because the
set of things that a vDSO getrandom() implementation might need for making
decisions as good as the kernel's will likely change over time. This
isn't just a matter of exporting certain *data* to userspace. We're not
going to commit to a "data API" where the various heuristics used are
exposed, locking in how the kernel works for decades to come, and then
leave it to various userspaces to roll something on top and shoot
themselves in the foot and have all sorts of complexity disasters.
Rather, vDSO getrandom() is supposed to be the *same exact algorithm*
that runs in the kernel, except it's been hoisted into userspace as
much as possible. And so vDSO getrandom() and kernel getrandom() will
always mirror each other hermetically.
API-wise, the vDSO gains this function:
ssize_t vgetrandom(void *buffer, size_t len, unsigned int flags, void *opaque_state);
The return value and the first 3 arguments are the same as ordinary
getrandom(), while the last argument is a pointer to some state
allocated with vgetrandom_alloc(), explained below. Were all four
arguments passed to the getrandom syscall, nothing different would
happen, and the functions would have the exact same behavior.
Then, we introduce a new syscall:
void *vgetrandom_alloc(unsigned int *num, unsigned int *size_per_each,
                       unsigned long addr, unsigned int flags);
This takes a hinted number of opaque states in `num`, and returns a
pointer to an array of opaque states, the number actually allocated back
in `num`, and the size in bytes of each one in `size_per_each`, enabling
a libc to slice up the returned array into one state per thread. (The
`flags` and `addr` arguments, as well as the `*size_per_each` input
value, are reserved for the future and are forced to be zero for now.)
Libc is expected to allocate a chunk of these on first use, and then
dole them out to threads as they're created, allocating more when
needed. The returned address of the first state may be passed to
munmap(2) with a length of `num * size_per_each`, in order to deallocate
the memory.
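Putting the two calls together, intended usage looks roughly like the
sketch below. It assumes this series is applied (so that
__NR_vgetrandom_alloc exists in the headers) and that vgetrandom() has
already been resolved from the vDSO, which libc would do internally:

#include <stdint.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Assumed to have been resolved from the vDSO at startup, e.g. by libc
   or by a small vDSO parser as in the selftests. */
static ssize_t (*vgetrandom)(void *buf, size_t len, unsigned int flags,
                             void *opaque_state);

int main(void)
{
        unsigned int num = 16, size_per_each = 0; /* hint: 16 states */
        uint8_t buf[32];
        void *states, *state0;

        if (!vgetrandom) /* must be resolved from the vDSO first */
                return 1;

        /* addr and flags (and the input *size_per_each) must be 0 for
           now. */
        states = (void *)syscall(__NR_vgetrandom_alloc, &num,
                                 &size_per_each, 0UL, 0U);
        if ((long)states == -1)
                return 1;

        /* libc would dole out state i to thread i; we just use state 0. */
        state0 = (uint8_t *)states + 0 * size_per_each;
        if (vgetrandom(buf, sizeof(buf), 0, state0) != sizeof(buf))
                return 1;

        /* To deallocate: munmap(states, num * size_per_each). */
        return 0;
}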
We very intentionally do *not* leave state allocation up to the caller
of vgetrandom, but provide vgetrandom_alloc for that allocation. There
are too many weird things that can go wrong, and it's important that
the vDSO does not provide too generic of a mechanism. It's not going to
store its state in just any old memory address. It'll do it only in ones
it allocates.
Right now this means it uses a new mm flag called VM_DROPPABLE, along
with VM_WIPEONFORK. In the future maybe there will be other interesting
page flags or anti-heartbleed measures, or other platform-specific
kernel-specific things that can be set from the syscall. Again, it's
important that the kernel has a say in how this works rather than
agreeing to operate on any old address; memory isn't neutral.
The interesting meat of the implementation is in lib/vdso/getrandom.c,
as generic C code, and it aims to mainly follow random.c's buffered fast
key erasure logic. Before the RNG is initialized, it falls back to the
syscall. Right now it uses a simple generation counter to make its decisions
on reseeding (though this could be made more extensive over time).
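To make that concrete, here's a hedged sketch of just that decision
logic. The names and struct layout are made up for illustration and are
not the actual lib/vdso/getrandom.c code; kernel_generation stands in
for a counter the kernel publishes in the shared vDSO data page and
bumps on every reseed (new entropy, VM fork, resume, and so on):

#include <stdint.h>
#include <string.h>
#include <sys/random.h>
#include <sys/types.h>

struct state_sketch {
        uint64_t generation; /* generation our buffered key came from */
        uint8_t key[32];     /* buffered key material */
};

/* Stub standing in for the buffered fast key erasure output step; it
   exists only so the sketch compiles, and is NOT a source of
   randomness. */
static void expand_key(struct state_sketch *s, uint8_t *buf, size_t len)
{
        for (size_t i = 0; i < len; ++i)
                buf[i] = s->key[i % sizeof(s->key)];
}

static ssize_t vgetrandom_sketch(struct state_sketch *s,
                                 const volatile uint64_t *kernel_generation,
                                 uint8_t *buf, size_t len)
{
        uint64_t gen = *kernel_generation;

        if (s->generation != gen) {
                /* Our key predates the kernel's last reseed, so it may
                   be shared with a forked VM or otherwise stale: fetch
                   a fresh seed through the real syscall and catch up. */
                if (getrandom(s->key, sizeof(s->key), 0) != sizeof(s->key))
                        return -1;
                s->generation = gen;
        }
        expand_key(s, buf, len); /* pure userspace, no syscall */
        return len;
}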
The actual place that has the most work to do is in all of the other
files. Most of the vDSO shared page infrastructure is centered around
gettimeofday, and so the main structs are all in arrays for different
timestamp types, and attached to time namespaces, and so forth. I've
done the best I could to add onto this in an unintrusive way.
In my test results, performance is pretty stellar (around 15x faster
than the syscall for uint32_t generation), and it seems to be working.
There's an extended example in the second commit of this series, showing
how the syscall and the vDSO function are meant to be used together.
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: Thomas Gleixner <[email protected]>
Cc: Greg Kroah-Hartman <[email protected]>
Cc: Adhemerval Zanella Netto <[email protected]>
Cc: Carlos O'Donell <[email protected]>
Cc: Florian Weimer <[email protected]>
Cc: Arnd Bergmann <[email protected]>
Cc: Jann Horn <[email protected]>
Cc: Christian Brauner <[email protected]>
Cc: David Hildenbrand <[email protected]>
Jason A. Donenfeld (5):
mm: add VM_DROPPABLE for designating always lazily freeable mappings
random: add vgetrandom_alloc() syscall
arch: allocate vgetrandom_alloc() syscall number
random: introduce generic vDSO getrandom() implementation
x86: vdso: Wire up getrandom() vDSO implementation
MAINTAINERS | 2 +
arch/alpha/kernel/syscalls/syscall.tbl | 1 +
arch/arm/tools/syscall.tbl | 1 +
arch/arm64/include/asm/unistd.h | 2 +-
arch/arm64/include/asm/unistd32.h | 2 +
arch/m68k/kernel/syscalls/syscall.tbl | 1 +
arch/microblaze/kernel/syscalls/syscall.tbl | 1 +
arch/mips/kernel/syscalls/syscall_n32.tbl | 1 +
arch/mips/kernel/syscalls/syscall_n64.tbl | 1 +
arch/mips/kernel/syscalls/syscall_o32.tbl | 1 +
arch/parisc/kernel/syscalls/syscall.tbl | 1 +
arch/powerpc/kernel/syscalls/syscall.tbl | 1 +
arch/s390/kernel/syscalls/syscall.tbl | 1 +
arch/sh/kernel/syscalls/syscall.tbl | 1 +
arch/sparc/kernel/syscalls/syscall.tbl | 1 +
arch/x86/Kconfig | 1 +
arch/x86/entry/syscalls/syscall_32.tbl | 1 +
arch/x86/entry/syscalls/syscall_64.tbl | 1 +
arch/x86/entry/vdso/Makefile | 3 +-
arch/x86/entry/vdso/vdso.lds.S | 2 +
arch/x86/entry/vdso/vgetrandom-chacha.S | 178 +++++++++++
arch/x86/entry/vdso/vgetrandom.c | 17 ++
arch/x86/include/asm/vdso/getrandom.h | 55 ++++
arch/x86/include/asm/vdso/vsyscall.h | 2 +
arch/x86/include/asm/vvar.h | 16 +
arch/xtensa/kernel/syscalls/syscall.tbl | 1 +
drivers/char/random.c | 143 +++++++++
fs/proc/task_mmu.c | 3 +
include/linux/mm.h | 8 +
include/linux/syscalls.h | 3 +
include/trace/events/mmflags.h | 7 +
include/uapi/asm-generic/unistd.h | 5 +-
include/vdso/datapage.h | 12 +
include/vdso/getrandom.h | 44 +++
include/vdso/types.h | 35 +++
kernel/sys_ni.c | 3 +
lib/vdso/Kconfig | 6 +
lib/vdso/getrandom.c | 226 ++++++++++++++
mm/Kconfig | 3 +
mm/memory.c | 4 +
mm/mempolicy.c | 3 +
mm/mprotect.c | 2 +-
mm/rmap.c | 8 +-
tools/include/uapi/asm-generic/unistd.h | 5 +-
.../arch/mips/entry/syscalls/syscall_n64.tbl | 1 +
.../arch/powerpc/entry/syscalls/syscall.tbl | 1 +
.../perf/arch/s390/entry/syscalls/syscall.tbl | 1 +
.../arch/x86/entry/syscalls/syscall_64.tbl | 1 +
tools/testing/selftests/vDSO/.gitignore | 2 +
tools/testing/selftests/vDSO/Makefile | 11 +
.../testing/selftests/vDSO/vdso_test_chacha.c | 43 +++
.../selftests/vDSO/vdso_test_getrandom.c | 283 ++++++++++++++++++
52 files changed, 1150 insertions(+), 8 deletions(-)
create mode 100644 arch/x86/entry/vdso/vgetrandom-chacha.S
create mode 100644 arch/x86/entry/vdso/vgetrandom.c
create mode 100644 arch/x86/include/asm/vdso/getrandom.h
create mode 100644 include/vdso/getrandom.h
create mode 100644 include/vdso/types.h
create mode 100644 lib/vdso/getrandom.c
create mode 100644 tools/testing/selftests/vDSO/vdso_test_chacha.c
create mode 100644 tools/testing/selftests/vDSO/vdso_test_getrandom.c
--
2.44.0
The vDSO getrandom() implementation works with a buffer allocated with a
new system call that has certain requirements:
- It shouldn't be written to core dumps.
* Easy: VM_DONTDUMP.
- It should be zeroed on fork.
* Easy: VM_WIPEONFORK.
- It shouldn't be written to swap.
  * Uh-oh: mlock is rlimited (see the demonstration after this list).
  * Uh-oh: mlock isn't inherited by forks.
- It shouldn't reserve actual memory, but it also shouldn't crash when
  page faulting in memory if none is available.
* Uh-oh: MAP_NORESERVE respects vm.overcommit_memory=2.
* Uh-oh: VM_NORESERVE means segfaults.
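The mlock ceiling in particular is easy to hit. A quick demonstration,
assuming the usual finite default RLIMIT_MEMLOCK (often just 8 MiB):

#include <stdio.h>
#include <sys/mman.h>
#include <sys/resource.h>

int main(void)
{
        struct rlimit rl;
        size_t len;
        void *p;

        if (getrlimit(RLIMIT_MEMLOCK, &rl))
                return 1;
        printf("RLIMIT_MEMLOCK: %llu bytes\n",
               (unsigned long long)rl.rlim_cur);

        /* Ask to lock just over the limit. */
        len = (size_t)rl.rlim_cur + (1 << 20);
        p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED)
                return 1;
        if (mlock(p, len))
                perror("mlock"); /* ENOMEM for an unprivileged process */
        munmap(p, len);
        return 0;
}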
It turns out that the vDSO getrandom() function has three really nice
characteristics that we can exploit to solve this problem:
1) Due to being wiped during fork(), the vDSO code is already robust to
having the contents of the pages it reads zeroed out midway through
the function's execution.
2) In the absolute worst case of whatever contingency we're coding for,
   we have the option to fall back to the getrandom() syscall, and
   everything is fine.
3) The buffers the function uses are only ever useful for a maximum of
60 seconds -- a sort of cache, rather than a long term allocation.
These characteristics mean that we can introduce VM_DROPPABLE, which
has the following semantics:
a) It is never written out to swap.
b) Under memory pressure, mm can just drop the pages (so that they're
zero when read back again).
c) If there's not enough memory to service a page fault, it's not fatal.
d) It is inherited by fork.
e) It doesn't count against the mlock budget, since nothing is locked.
This is fairly simple to implement, with the one snag that we have to
use 64-bit VM_* flags, but this shouldn't be a problem, since the only
consumers will probably be 64-bit anyway.
This way, allocations used by vDSO getrandom() can use:
VM_DROPPABLE | VM_DONTDUMP | VM_WIPEONFORK | VM_NORESERVE
And there will be no problem with OOMing, crashing on overcommitment,
using memory when not in use, not wiping on fork(), coredumps, or
writing out to swap.
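In kernel terms, the allocation boils down to passing those VM_* flags
straight through do_mmap(), so the VMA is born droppable, undumpable,
wipe-on-fork, and unreserved. A rough sketch of the idea -- not the
literal patch; see 2/5 for the real sizing, alignment, and error
handling:

#include <linux/mm.h>
#include <linux/mman.h>
#include <linux/sched.h>

static unsigned long alloc_vgetrandom_pages(unsigned long size)
{
        unsigned long addr, populate = 0;

        mmap_write_lock(current->mm);
        addr = do_mmap(NULL, 0, size, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS,
                       VM_DROPPABLE | VM_DONTDUMP | VM_WIPEONFORK |
                       VM_NORESERVE, 0, &populate, NULL);
        mmap_write_unlock(current->mm);
        return addr; /* IS_ERR_VALUE(addr) on failure */
}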
Cc: [email protected]
Signed-off-by: Jason A. Donenfeld <[email protected]>
---
fs/proc/task_mmu.c | 3 +++
include/linux/mm.h | 8 ++++++++
include/trace/events/mmflags.h | 7 +++++++
mm/Kconfig | 3 +++
mm/memory.c | 4 ++++
mm/mempolicy.c | 3 +++
mm/mprotect.c | 2 +-
mm/rmap.c | 8 +++++---
8 files changed, 34 insertions(+), 4 deletions(-)
diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index e5a5f015ff03..b5a59e57bde1 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -706,6 +706,9 @@ static void show_smap_vma_flags(struct seq_file *m, struct vm_area_struct *vma)
#endif /* CONFIG_HAVE_ARCH_USERFAULTFD_MINOR */
#ifdef CONFIG_X86_USER_SHADOW_STACK
[ilog2(VM_SHADOW_STACK)] = "ss",
+#endif
+#ifdef CONFIG_NEED_VM_DROPPABLE
+ [ilog2(VM_DROPPABLE)] = "dp",
#endif
};
size_t i;
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 9849dfda44d4..5978cb4cc21c 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -321,12 +321,14 @@ extern unsigned int kobjsize(const void *objp);
#define VM_HIGH_ARCH_BIT_3 35 /* bit only usable on 64-bit architectures */
#define VM_HIGH_ARCH_BIT_4 36 /* bit only usable on 64-bit architectures */
#define VM_HIGH_ARCH_BIT_5 37 /* bit only usable on 64-bit architectures */
+#define VM_HIGH_ARCH_BIT_6 38 /* bit only usable on 64-bit architectures */
#define VM_HIGH_ARCH_0 BIT(VM_HIGH_ARCH_BIT_0)
#define VM_HIGH_ARCH_1 BIT(VM_HIGH_ARCH_BIT_1)
#define VM_HIGH_ARCH_2 BIT(VM_HIGH_ARCH_BIT_2)
#define VM_HIGH_ARCH_3 BIT(VM_HIGH_ARCH_BIT_3)
#define VM_HIGH_ARCH_4 BIT(VM_HIGH_ARCH_BIT_4)
#define VM_HIGH_ARCH_5 BIT(VM_HIGH_ARCH_BIT_5)
+#define VM_HIGH_ARCH_6 BIT(VM_HIGH_ARCH_BIT_6)
#endif /* CONFIG_ARCH_USES_HIGH_VMA_FLAGS */
#ifdef CONFIG_ARCH_HAS_PKEYS
@@ -357,6 +359,12 @@ extern unsigned int kobjsize(const void *objp);
# define VM_SHADOW_STACK VM_NONE
#endif
+#ifdef CONFIG_NEED_VM_DROPPABLE
+# define VM_DROPPABLE VM_HIGH_ARCH_6
+#else
+# define VM_DROPPABLE VM_NONE
+#endif
+
#if defined(CONFIG_X86)
# define VM_PAT VM_ARCH_1 /* PAT reserves whole VMA at once (x86) */
#elif defined(CONFIG_PPC)
diff --git a/include/trace/events/mmflags.h b/include/trace/events/mmflags.h
index e46d6e82765e..fab7848df50a 100644
--- a/include/trace/events/mmflags.h
+++ b/include/trace/events/mmflags.h
@@ -165,6 +165,12 @@ IF_HAVE_PG_ARCH_X(arch_3)
# define IF_HAVE_UFFD_MINOR(flag, name)
#endif
+#ifdef CONFIG_NEED_VM_DROPPABLE
+# define IF_HAVE_VM_DROPPABLE(flag, name) {flag, name},
+#else
+# define IF_HAVE_VM_DROPPABLE(flag, name)
+#endif
+
#define __def_vmaflag_names \
{VM_READ, "read" }, \
{VM_WRITE, "write" }, \
@@ -197,6 +203,7 @@ IF_HAVE_VM_SOFTDIRTY(VM_SOFTDIRTY, "softdirty" ) \
{VM_MIXEDMAP, "mixedmap" }, \
{VM_HUGEPAGE, "hugepage" }, \
{VM_NOHUGEPAGE, "nohugepage" }, \
+IF_HAVE_VM_DROPPABLE(VM_DROPPABLE, "droppable" ) \
{VM_MERGEABLE, "mergeable" } \
#define show_vma_flags(flags) \
diff --git a/mm/Kconfig b/mm/Kconfig
index b4cb45255a54..6cd65ea4b3ad 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -1056,6 +1056,9 @@ config ARCH_USES_HIGH_VMA_FLAGS
bool
config ARCH_HAS_PKEYS
bool
+config NEED_VM_DROPPABLE
+ select ARCH_USES_HIGH_VMA_FLAGS
+ bool
config ARCH_USES_PG_ARCH_X
bool
diff --git a/mm/memory.c b/mm/memory.c
index b5453b86ec4b..57b03fc73159 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -5689,6 +5689,10 @@ vm_fault_t handle_mm_fault(struct vm_area_struct *vma, unsigned long address,
lru_gen_exit_fault();
+ /* If the mapping is droppable, then errors due to OOM aren't fatal. */
+ if (vma->vm_flags & VM_DROPPABLE)
+ ret &= ~VM_FAULT_OOM;
+
if (flags & FAULT_FLAG_USER) {
mem_cgroup_exit_user_fault();
/*
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index aec756ae5637..a66289f1d931 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -2300,6 +2300,9 @@ struct folio *vma_alloc_folio_noprof(gfp_t gfp, int order, struct vm_area_struct
pgoff_t ilx;
struct page *page;
+ if (vma->vm_flags & VM_DROPPABLE)
+ gfp |= __GFP_NOWARN | __GFP_NORETRY;
+
pol = get_vma_policy(vma, addr, order, &ilx);
page = alloc_pages_mpol_noprof(gfp | __GFP_COMP, order,
pol, ilx, numa_node_id());
diff --git a/mm/mprotect.c b/mm/mprotect.c
index 94878c39ee32..88ff3ecc08a1 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -622,7 +622,7 @@ mprotect_fixup(struct vma_iterator *vmi, struct mmu_gather *tlb,
may_expand_vm(mm, oldflags, nrpages))
return -ENOMEM;
if (!(oldflags & (VM_ACCOUNT|VM_WRITE|VM_HUGETLB|
- VM_SHARED|VM_NORESERVE))) {
+ VM_SHARED|VM_NORESERVE|VM_DROPPABLE))) {
charged = nrpages;
if (security_vm_enough_memory_mm(mm, charged))
return -ENOMEM;
diff --git a/mm/rmap.c b/mm/rmap.c
index e8fc5ecb59b2..d873a3f06506 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1397,7 +1397,8 @@ void folio_add_new_anon_rmap(struct folio *folio, struct vm_area_struct *vma,
VM_WARN_ON_FOLIO(folio_test_hugetlb(folio), folio);
VM_BUG_ON_VMA(address < vma->vm_start ||
address + (nr << PAGE_SHIFT) > vma->vm_end, vma);
- __folio_set_swapbacked(folio);
+ if (!(vma->vm_flags & VM_DROPPABLE))
+ __folio_set_swapbacked(folio);
__folio_set_anon(folio, vma, address, true);
if (likely(!folio_test_large(folio))) {
@@ -1841,7 +1842,7 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
* plus the rmap(s) (dropped by discard:).
*/
if (ref_count == 1 + map_count &&
- !folio_test_dirty(folio)) {
+ (!folio_test_dirty(folio) || (vma->vm_flags & VM_DROPPABLE))) {
dec_mm_counter(mm, MM_ANONPAGES);
goto discard;
}
@@ -1851,7 +1852,8 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
* discarded. Remap the page to page table.
*/
set_pte_at(mm, address, pvmw.pte, pteval);
- folio_set_swapbacked(folio);
+ if (!(vma->vm_flags & VM_DROPPABLE))
+ folio_set_swapbacked(folio);
ret = false;
page_vma_mapped_walk_done(&pvmw);
break;
--
2.44.0
On Tue, May 28, 2024 at 5:24 AM Jason A. Donenfeld <[email protected]> wrote:
>
> The vDSO getrandom() implementation works with a buffer allocated with a
> new system call that has certain requirements:
>
> [...]
>
> These characteristics mean that we can introduce VM_DROPPABLE, which
> has the following semantics:
>
> a) It is never written out to swap.
> b) Under memory pressure, mm can just drop the pages (so that they're
> zero when read back again).
> c) If there's not enough memory to service a page fault, it's not fatal.
> d) It is inherited by fork.
> e) It doesn't count against the mlock budget, since nothing is locked.
>
> [...]
This seems like an obvious question, but I can't seem to find a
message asking this in the long history of this patchset: VM_DROPPABLE
seems very close to MADV_FREE lazyfree memory.
Could those functionalities be folded into one?
- Frank
On Tue, May 28, 2024 at 5:24 AM Jason A. Donenfeld <[email protected]> wrote:
>
> The vDSO getrandom() implementation works with a buffer allocated with a
> new system call that has certain requirements:
>
> - It shouldn't be written to core dumps.
> * Easy: VM_DONTDUMP.
I'll bite: why shouldn't it be written to core dumps?
The implementation is supposed to be forward-secret: an attacker who
gets the state can't recover prior outputs. And a core-dumped process
is dead: there won't be any future outputs to compromise.
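To spell that out, here's a toy userspace sketch of the fast key
erasure construction that random.c (and thus the vDSO code) follows.
It's an illustration, not the kernel's implementation: every read
ratchets the key forward and wipes the old one, so state captured in a
dump can't be run backwards to recover earlier outputs.

#include <stdint.h>
#include <string.h>
#include <sys/random.h>

#define ROTL(x, n) (((x) << (n)) | ((x) >> (32 - (n))))
#define QR(a, b, c, d) do {                  \
        a += b; d ^= a; d = ROTL(d, 16);     \
        c += d; b ^= c; b = ROTL(b, 12);     \
        a += b; d ^= a; d = ROTL(d, 8);      \
        c += d; b ^= c; b = ROTL(b, 7);      \
} while (0)

/* One ChaCha20 block with a zero nonce (little-endian host assumed). */
static void chacha20_block(uint32_t out[16], const uint32_t key[8],
                           uint32_t counter)
{
        static const uint32_t consts[4] = {
                0x61707865, 0x3320646e, 0x79622d32, 0x6b206574 };
        uint32_t x[16], in[16];
        int i;

        memcpy(in, consts, 16);
        memcpy(in + 4, key, 32);
        in[12] = counter;
        in[13] = in[14] = in[15] = 0;
        memcpy(x, in, sizeof(x));
        for (i = 0; i < 10; ++i) {
                QR(x[0], x[4], x[8],  x[12]); QR(x[1], x[5], x[9],  x[13]);
                QR(x[2], x[6], x[10], x[14]); QR(x[3], x[7], x[11], x[15]);
                QR(x[0], x[5], x[10], x[15]); QR(x[1], x[6], x[11], x[12]);
                QR(x[2], x[7], x[8],  x[13]); QR(x[3], x[4], x[9],  x[14]);
        }
        for (i = 0; i < 16; ++i)
                out[i] = x[i] + in[i];
}

/* Fast key erasure: both blocks come from the *old* key; block 0 then
   replaces the key, and everything derived from the old key is wiped.
   With the old key gone, the remaining state can't recompute this (or
   any earlier) output -- a dumped state only ever threatens outputs
   that will never be produced. */
static void fke_read64(uint32_t key[8], uint8_t out[64])
{
        uint32_t b0[16], b1[16];

        chacha20_block(b0, key, 0);
        chacha20_block(b1, key, 1);
        memcpy(key, b0, 32);  /* ratchet: block 0 is the next key */
        memcpy(out, b1, 64);  /* block 1 is the caller's output */
        /* Real code must use memzero_explicit()/explicit_bzero() so
           these wipes aren't optimized away. */
        memset(b0, 0, sizeof(b0));
        memset(b1, 0, sizeof(b1));
}

int main(void)
{
        uint32_t key[8];
        uint8_t out[64];

        if (getrandom(key, sizeof(key), 0) != sizeof(key))
                return 1;
        fke_read64(key, out); /* key has ratcheted; old key is gone */
        return 0;
}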