From: Jeff Xu <[email protected]>
This is V4 of the patchset. It has improved significantly since V1,
thanks to diverse inputs. A few discussions remain open; please see
the open discussion section in the V4 change history below.
-----------------------------------------------------------------
This patchset proposes a new mseal() syscall for the Linux kernel.
In a nutshell, mseal() protects the VMAs of a given virtual memory
range against modifications, such as changes to their permission bits.
Modern CPUs support memory permissions, such as the read/write (RW)
and no-execute (NX) bits. Linux has supported NX since the release of
kernel version 2.6.8 in August 2004 [1]. The memory permission feature
improves the security posture against memory corruption bugs: an
attacker cannot simply write code to arbitrary memory and redirect
execution to it, because the memory must be marked with the X bit or
an exception will occur.
Internally, the kernel maintains the memory permissions in a data
structure called VMA (vm_area_struct). mseal() additionally protects
the VMA itself against modifications of the selected seal type.
Memory sealing is useful to mitigate memory corruption issues where a
corrupted pointer is passed to a memory management system. For
example, such an attacker primitive can break control-flow integrity
guarantees since read-only memory that is supposed to be trusted can
become writable or .text pages can get remapped. Memory sealing can
automatically be applied by the runtime loader to seal .text and
.rodata pages and applications can additionally seal security critical
data at runtime. A similar feature already exists in the XNU kernel
with the VM_FLAGS_PERMANENT [3] flag and on OpenBSD with the
mimmutable syscall [4]. Also, Chrome wants to adopt this feature for
their CFI work [2] and this patchset has been designed to be
compatible with the Chrome use case.
Two system calls are involved in sealing a mapping: mmap() and
mseal().
The new mseal() is a syscall available on 64-bit CPUs, with the
following signature:
int mseal(void *addr, size_t len, unsigned long flags)
addr/len: memory range.
flags: reserved.
mseal() blocks the following operations for the given memory range:
1> Unmapping, moving to another location, and shrinking the size,
via munmap() and mremap(). These can leave an empty space in the
range, which can then be filled by a VMA with a new set of
attributes.
2> Moving or expanding a different VMA into the current location,
via mremap().
3> Modifying a VMA via mmap(MAP_FIXED).
4> Size expansion via mremap(). This does not appear to pose any
specific risk to sealed VMAs, but it is included anyway because its
use case is unclear. In any case, users can rely on merging to
expand a sealed VMA.
5> mprotect() and pkey_mprotect().
6> Certain destructive madvise() behaviors (e.g. MADV_DONTNEED) on
anonymous memory, when users don't have write permission to the
memory. Those behaviors can alter region contents by discarding
pages, effectively a memset(0) for anonymous memory.
In addition, mmap() gains two related changes:
The PROT_SEAL bit in the prot field of mmap(). When present, it
marks the mapping as sealed from creation.
The MAP_SEALABLE bit in the flags field of mmap(). When present, it
marks the mapping as sealable. A mapping created without MAP_SEALABLE
does not support sealing, i.e. mseal() on it will fail.
Applications that don't care about sealing will see their behavior
unchanged. Those that need sealing support opt in by passing
MAP_SEALABLE to mmap(), as the sketch below illustrates.
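For illustration only, here is a minimal userspace sketch (error
handling elided; the MAP_SEALABLE and __NR_mseal values are the ones
introduced by this patchset, and syscall(2) is used since there is
no libc wrapper yet):

#define _GNU_SOURCE
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

#ifndef MAP_SEALABLE
#define MAP_SEALABLE 0x8000000	/* from this patchset */
#endif
#ifndef __NR_mseal
#define __NR_mseal 457		/* from this patchset */
#endif

int main(void)
{
	size_t len = 4 * getpagesize();

	/* Opt in to sealing at map time with MAP_SEALABLE. */
	void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_ANONYMOUS | MAP_PRIVATE | MAP_SEALABLE, -1, 0);

	/* ... populate the memory, then drop it to read-only ... */
	mprotect(p, len, PROT_READ);

	/* Seal the range. */
	syscall(__NR_mseal, p, len, 0);

	/* Both of these now fail with EACCES. */
	mprotect(p, len, PROT_READ | PROT_WRITE);
	munmap(p, len);

	return 0;
}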
The idea that inspired this patch comes from Stephen Röttger’s work in
V8 CFI [5]. Chrome browser in ChromeOS will be the first user of this
API.
Indeed, the Chrome browser has very specific requirements for
sealing, which are distinct from those of most applications. For
example, in the case of libc, sealing is only applied to read-only
(RO) or read-execute (RX) memory segments (such as .text and .RELRO)
to prevent them from becoming writable, and the lifetime of those
mappings is tied to the lifetime of the process.
Chrome wants to seal two large address space reservations that are
managed by different allocators. The memory is mapped RW- and RWX
respectively but write access to it is restricted using pkeys (or in
the future ARM permission overlay extensions). The lifetime of those
mappings is not tied to the lifetime of the process; therefore, while
the memory is sealed, the allocators still need to free or discard
unused memory, for example with madvise(MADV_DONTNEED).
However, always allowing madvise(DONTNEED) on this range poses a
security risk. For example if a jump instruction crosses a page
boundary and the second page gets discarded, it will overwrite the
target bytes with zeros and change the control flow. Checking
write-permission before the discard operation allows us to control
when the operation is valid. In this case, the madvise will only
succeed if the executing thread has PKEY write permissions and PKRU
changes are protected in software by control-flow integrity.
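As a rough sketch of that flow (not Chrome's actual code; POOL_SIZE,
CHUNK_SIZE and chunk are hypothetical placeholders, and pkey_alloc(),
pkey_mprotect() and pkey_set() are the glibc wrappers described in
pkeys(7)):

	int pkey = pkey_alloc(0, PKEY_DISABLE_WRITE);
	void *pool = mmap(NULL, POOL_SIZE, PROT_READ | PROT_WRITE,
			  MAP_ANONYMOUS | MAP_PRIVATE | MAP_SEALABLE, -1, 0);
	pkey_mprotect(pool, POOL_SIZE, PROT_READ | PROT_WRITE, pkey);
	syscall(__NR_mseal, pool, POOL_SIZE, 0);

	/* Discarding an unused chunk later: the madvise() is refused
	 * unless the executing thread currently has pkey write access
	 * to the sealed range. */
	pkey_set(pkey, 0);			/* enable write access */
	madvise(chunk, CHUNK_SIZE, MADV_DONTNEED);
	pkey_set(pkey, PKEY_DISABLE_WRITE);	/* revoke write access */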
Although the initial version of this patch series is targeting the
Chrome browser as its first user, it became evident during upstream
discussions that we would also want to ensure that the patch set
eventually is a complete solution for memory sealing and compatible
with other use cases. The specific scenario currently in mind is
glibc's use case of loading and sealing ELF executables. To this end,
Stephen is working on a change to glibc to add sealing support to the
dynamic linker, which will seal all non-writable segments at startup.
Once this work is completed, all applications will be able to
automatically benefit from these new protections.
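To make the loader-side idea concrete, a purely hypothetical sketch
(this is not Stephen's actual glibc change, and it assumes the
segments were mapped sealable) could walk the program headers and
seal every non-writable PT_LOAD segment:

#define _GNU_SOURCE
#include <link.h>
#include <stdint.h>
#include <sys/syscall.h>
#include <unistd.h>

static int seal_ro_segments(struct dl_phdr_info *info, size_t sz, void *arg)
{
	uintptr_t psz = sysconf(_SC_PAGESIZE);
	int i;

	for (i = 0; i < info->dlpi_phnum; i++) {
		const ElfW(Phdr) *ph = &info->dlpi_phdr[i];
		uintptr_t start, end;

		/* Only loadable, non-writable segments (.text, RELRO). */
		if (ph->p_type != PT_LOAD || (ph->p_flags & PF_W))
			continue;

		start = (info->dlpi_addr + ph->p_vaddr) & ~(psz - 1);
		end = (info->dlpi_addr + ph->p_vaddr + ph->p_memsz
		       + psz - 1) & ~(psz - 1);
		syscall(__NR_mseal, start, end - start, 0);
	}
	return 0;
}

/* Called once at startup: dl_iterate_phdr(seal_ro_segments, NULL); */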
--------------------------------------------------------------------
Change history:
===============
V4:
(Suggested by Linus Torvalds)
- new signature: mseal(start,len,flags)
- 32-bit is not supported. vm_seal is removed; use vm_flags instead.
- single bit in vm_flags for the sealed state.
- CONFIG_MSEAL kernel config is removed.
- single PROT_SEAL bit in the prot field of mmap().
Other changes:
- update selftest (Suggested by Muhammad Usama Anjum)
- update documentation.
Open discussions:
=================
The discussions below were brought up in V3 and did not receive any
input. The one most important to this patch is MAP_SEALABLE in
mmap(), which is in the current version of the patch; it is listed
here for input/comments.
---------------------------------------------------------------------
During the development of V3, I had new questions and thoughts that
I wished to discuss.
1> shm/aio
From reading the code, it seems to me that aio/shm can mmap/munmap
maps on behalf of userspace, e.g. ksys_shmdt() in shm.c. The lifetime
of those mappings is not tied to the lifetime of the process. If such
memory is sealed from userspace, the unmap will fail. This isn't a
huge problem, since the memory will eventually be freed at exit or
exec. However, the solution feels incomplete, because VMA address
space is leaked for the remaining lifetime of the process.
2> Brk (heap/stack)
Currently, userspace applications can seal parts of the heap by
calling malloc() and mseal(). This raises the question of what the
expected behavior is when sealing the heap is attempted.
Let's assume the following calls from user space:
ptr = malloc(size);
mprotect(ptr, size, RO);
mseal(ptr, size, SEAL_PROT_PKEY);
free(ptr);
Technically, before mseal() is added, the user can change the
protection of the heap by calling mprotect(RO). As long as the user
changes the protection back to RW before free(), the memory can be
reused.
With mseal() in the picture, however, the heap is then partially
sealed: the user can still free it, but the memory remains RO, and
the result of a brk-shrink is nondeterministic, depending on whether
munmap() tries to free the sealed memory (brk uses munmap to shrink
the heap).
3> The above two cases led to the third topic:
There is one option to address the problems mentioned above.
Option 1: A “MAP_SEALABLE” flag in mmap().
If a map is created without this flag, the mseal() operation will
fail. Applications that are not concerned with sealing will see
their behavior unchanged. For those that are concerned, adding a
flag at mmap time to opt in is not difficult. For the short term,
this solves problems 1 and 2 above: the memory in shm/aio will not
have the MAP_SEALABLE flag at mmap() time, and the same is true for
the heap.
If we choose not to go with this path, all mappings will be sealable
by default. We could document the above-mentioned limitations so that
developers are more careful about what memory they choose to seal. I
think denial of service through mseal() by an attacker is probably
not a concern: if attackers have access to mseal() and unsealed
memory, they can also do other harmful things to that memory, such
as munmap() it.
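To make the opt-in behavior concrete (a sketch; MAP_SEALABLE and the
EACCES failure mode are as defined in this patchset):

	/* Without MAP_SEALABLE, sealing is refused. */
	void *q = mmap(NULL, len, PROT_READ,
		       MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
	syscall(__NR_mseal, q, len, 0);	/* fails with EACCES */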
4>
I think it might be possible to seal the stack or other special
mappings created at runtime (vdso, vsyscall, vvar). This means we
could enforce and seal W^X for certain types of applications. For
instance, the stack is typically used in read-write mode, but in
some cases it can become executable. To defend against the
unintended addition of the executable bit to the stack, we could
let the application seal it.
Sealing the heap (for adding X) requires special handling, since
the heap can shrink, and shrinking is implemented through munmap().
Indeed, it might be possible that all virtual memory accessible to user
space, regardless of its usage pattern, could be sealed. However, this
would require additional research and development work.
=====================================================================
V3:
- Abandon the per-syscall approach. (Suggested by Linus Torvalds)
- Organize sealing types around their functionality, such as
MM_SEAL_BASE, MM_SEAL_PROT_PKEY.
- Extend the scope of sealing from calls originated in userspace to
both kernel and userspace. (Suggested by Linus Torvalds)
- Add seal type support in mmap(). (Suggested by Pedro Falcato)
- Add a new sealing type: MM_SEAL_DISCARD_RO_ANON to prevent
destructive operations of madvise. (Suggested by Jann Horn and
Stephen Röttger)
- Make sealed VMAs mergeable. (Suggested by Jann Horn)
- Add MAP_SEALABLE to mmap()
- Add documentation - mseal.rst
https://lore.kernel.org/linux-mm/[email protected]/
v2:
Use _BITUL to define MM_SEAL_XX type.
Use unsigned long for seal type in sys_mseal() and other functions.
Remove internal VM_SEAL_XX type and convert_user_seal_type().
Remove MM_ACTION_XX type.
Remove caller_origin(ON_BEHALF_OF_XX) and replace with sealing bitmask.
Add more comments in code.
Add a detailed commit message.
https://lore.kernel.org/lkml/[email protected]/
v1:
https://lore.kernel.org/lkml/[email protected]/
----------------------------------------------------------------
[1] https://kernelnewbies.org/Linux_2_6_8
[2] https://v8.dev/blog/control-flow-integrity
[3] https://github.com/apple-oss-distributions/xnu/blob/1031c584a5e37aff177559b9f69dbd3c8c3fd30a/osfmk/mach/vm_statistics.h#L274
[4] https://man.openbsd.org/mimmutable.2
[5] https://docs.google.com/document/d/1O2jwK4dxI3nRcOJuPYkonhTkNQfbmwdvxQMyXgeaRHo/edit#heading=h.bvaojj9fu6hc
[6] https://lore.kernel.org/lkml/CAG48ez3ShUYey+ZAFsU2i1RpQn0a5eOs2hzQ426FkcgnfUGLvA@mail.gmail.com/
[7] https://lore.kernel.org/lkml/[email protected]/
Jeff Xu (4):
mseal: Wire up mseal syscall
mseal: add mseal syscall
selftest mm/mseal memory sealing
mseal: add documentation
Documentation/userspace-api/mseal.rst | 181 ++
arch/alpha/kernel/syscalls/syscall.tbl | 1 +
arch/arm/tools/syscall.tbl | 1 +
arch/arm64/include/asm/unistd.h | 2 +-
arch/arm64/include/asm/unistd32.h | 2 +
arch/m68k/kernel/syscalls/syscall.tbl | 1 +
arch/microblaze/kernel/syscalls/syscall.tbl | 1 +
arch/mips/kernel/syscalls/syscall_n32.tbl | 1 +
arch/mips/kernel/syscalls/syscall_n64.tbl | 1 +
arch/mips/kernel/syscalls/syscall_o32.tbl | 1 +
arch/parisc/kernel/syscalls/syscall.tbl | 1 +
arch/powerpc/kernel/syscalls/syscall.tbl | 1 +
arch/s390/kernel/syscalls/syscall.tbl | 1 +
arch/sh/kernel/syscalls/syscall.tbl | 1 +
arch/sparc/kernel/syscalls/syscall.tbl | 1 +
arch/x86/entry/syscalls/syscall_32.tbl | 1 +
arch/x86/entry/syscalls/syscall_64.tbl | 1 +
arch/xtensa/kernel/syscalls/syscall.tbl | 1 +
include/linux/mm.h | 60 +
include/linux/syscalls.h | 1 +
include/uapi/asm-generic/mman-common.h | 7 +
include/uapi/asm-generic/unistd.h | 5 +-
kernel/sys_ni.c | 1 +
mm/Makefile | 4 +
mm/madvise.c | 12 +
mm/mmap.c | 27 +
mm/mprotect.c | 10 +
mm/mremap.c | 31 +
mm/mseal.c | 330 ++++
tools/testing/selftests/mm/.gitignore | 1 +
tools/testing/selftests/mm/Makefile | 1 +
tools/testing/selftests/mm/mseal_test.c | 1971 +++++++++++++++++++
32 files changed, 2659 insertions(+), 2 deletions(-)
create mode 100644 Documentation/userspace-api/mseal.rst
create mode 100644 mm/mseal.c
create mode 100644 tools/testing/selftests/mm/mseal_test.c
--
2.43.0.195.gebba966016-goog
From: Jeff Xu <[email protected]>
Wire up mseal syscall for all architectures.
Signed-off-by: Jeff Xu <[email protected]>
---
arch/alpha/kernel/syscalls/syscall.tbl | 1 +
arch/arm/tools/syscall.tbl | 1 +
arch/arm64/include/asm/unistd.h | 2 +-
arch/arm64/include/asm/unistd32.h | 2 ++
arch/m68k/kernel/syscalls/syscall.tbl | 1 +
arch/microblaze/kernel/syscalls/syscall.tbl | 1 +
arch/mips/kernel/syscalls/syscall_n32.tbl | 1 +
arch/mips/kernel/syscalls/syscall_n64.tbl | 1 +
arch/mips/kernel/syscalls/syscall_o32.tbl | 1 +
arch/parisc/kernel/syscalls/syscall.tbl | 1 +
arch/powerpc/kernel/syscalls/syscall.tbl | 1 +
arch/s390/kernel/syscalls/syscall.tbl | 1 +
arch/sh/kernel/syscalls/syscall.tbl | 1 +
arch/sparc/kernel/syscalls/syscall.tbl | 1 +
arch/x86/entry/syscalls/syscall_32.tbl | 1 +
arch/x86/entry/syscalls/syscall_64.tbl | 1 +
arch/xtensa/kernel/syscalls/syscall.tbl | 1 +
include/uapi/asm-generic/unistd.h | 5 ++++-
18 files changed, 22 insertions(+), 2 deletions(-)
diff --git a/arch/alpha/kernel/syscalls/syscall.tbl b/arch/alpha/kernel/syscalls/syscall.tbl
index 18c842ca6c32..dd92e2639a03 100644
--- a/arch/alpha/kernel/syscalls/syscall.tbl
+++ b/arch/alpha/kernel/syscalls/syscall.tbl
@@ -496,3 +496,4 @@
564 common futex_wake sys_futex_wake
565 common futex_wait sys_futex_wait
566 common futex_requeue sys_futex_requeue
+567 common mseal sys_mseal
diff --git a/arch/arm/tools/syscall.tbl b/arch/arm/tools/syscall.tbl
index 584f9528c996..d96461ee1ebe 100644
--- a/arch/arm/tools/syscall.tbl
+++ b/arch/arm/tools/syscall.tbl
@@ -470,3 +470,4 @@
454 common futex_wake sys_futex_wake
455 common futex_wait sys_futex_wait
456 common futex_requeue sys_futex_requeue
+457 common mseal sys_mseal
diff --git a/arch/arm64/include/asm/unistd.h b/arch/arm64/include/asm/unistd.h
index 531effca5f1f..298313d2e0af 100644
--- a/arch/arm64/include/asm/unistd.h
+++ b/arch/arm64/include/asm/unistd.h
@@ -39,7 +39,7 @@
#define __ARM_NR_compat_set_tls (__ARM_NR_COMPAT_BASE + 5)
#define __ARM_NR_COMPAT_END (__ARM_NR_COMPAT_BASE + 0x800)
-#define __NR_compat_syscalls 457
+#define __NR_compat_syscalls 458
#endif
#define __ARCH_WANT_SYS_CLONE
diff --git a/arch/arm64/include/asm/unistd32.h b/arch/arm64/include/asm/unistd32.h
index 9f7c1bf99526..e3118e0327c7 100644
--- a/arch/arm64/include/asm/unistd32.h
+++ b/arch/arm64/include/asm/unistd32.h
@@ -919,6 +919,8 @@ __SYSCALL(__NR_futex_wake, sys_futex_wake)
__SYSCALL(__NR_futex_wait, sys_futex_wait)
#define __NR_futex_requeue 456
__SYSCALL(__NR_futex_requeue, sys_futex_requeue)
+#define __NR_mseal 457
+__SYSCALL(__NR_mseal, sys_mseal)
/*
* Please add new compat syscalls above this comment and update
diff --git a/arch/m68k/kernel/syscalls/syscall.tbl b/arch/m68k/kernel/syscalls/syscall.tbl
index 7a4b780e82cb..105e966db655 100644
--- a/arch/m68k/kernel/syscalls/syscall.tbl
+++ b/arch/m68k/kernel/syscalls/syscall.tbl
@@ -456,3 +456,4 @@
454 common futex_wake sys_futex_wake
455 common futex_wait sys_futex_wait
456 common futex_requeue sys_futex_requeue
+457 common mseal sys_mseal
diff --git a/arch/microblaze/kernel/syscalls/syscall.tbl b/arch/microblaze/kernel/syscalls/syscall.tbl
index 5b6a0b02b7de..18956345d348 100644
--- a/arch/microblaze/kernel/syscalls/syscall.tbl
+++ b/arch/microblaze/kernel/syscalls/syscall.tbl
@@ -462,3 +462,4 @@
454 common futex_wake sys_futex_wake
455 common futex_wait sys_futex_wait
456 common futex_requeue sys_futex_requeue
+457 common mseal sys_mseal
diff --git a/arch/mips/kernel/syscalls/syscall_n32.tbl b/arch/mips/kernel/syscalls/syscall_n32.tbl
index a842b41c8e06..06cac28209d2 100644
--- a/arch/mips/kernel/syscalls/syscall_n32.tbl
+++ b/arch/mips/kernel/syscalls/syscall_n32.tbl
@@ -395,3 +395,4 @@
454 n32 futex_wake sys_futex_wake
455 n32 futex_wait sys_futex_wait
456 n32 futex_requeue sys_futex_requeue
+457 n32 mseal sys_mseal
diff --git a/arch/mips/kernel/syscalls/syscall_n64.tbl b/arch/mips/kernel/syscalls/syscall_n64.tbl
index 116ff501bf92..bb8270588953 100644
--- a/arch/mips/kernel/syscalls/syscall_n64.tbl
+++ b/arch/mips/kernel/syscalls/syscall_n64.tbl
@@ -371,3 +371,4 @@
454 n64 futex_wake sys_futex_wake
455 n64 futex_wait sys_futex_wait
456 n64 futex_requeue sys_futex_requeue
+457 n64 mseal sys_mseal
diff --git a/arch/mips/kernel/syscalls/syscall_o32.tbl b/arch/mips/kernel/syscalls/syscall_o32.tbl
index 525cc54bc63b..93958f063c3f 100644
--- a/arch/mips/kernel/syscalls/syscall_o32.tbl
+++ b/arch/mips/kernel/syscalls/syscall_o32.tbl
@@ -444,3 +444,4 @@
454 o32 futex_wake sys_futex_wake
455 o32 futex_wait sys_futex_wait
456 o32 futex_requeue sys_futex_requeue
+457 o32 mseal sys_mseal
diff --git a/arch/parisc/kernel/syscalls/syscall.tbl b/arch/parisc/kernel/syscalls/syscall.tbl
index a47798fed54e..c6bc9277dcd7 100644
--- a/arch/parisc/kernel/syscalls/syscall.tbl
+++ b/arch/parisc/kernel/syscalls/syscall.tbl
@@ -455,3 +455,4 @@
454 common futex_wake sys_futex_wake
455 common futex_wait sys_futex_wait
456 common futex_requeue sys_futex_requeue
+457 common mseal sys_mseal
diff --git a/arch/powerpc/kernel/syscalls/syscall.tbl b/arch/powerpc/kernel/syscalls/syscall.tbl
index 7fab411378f2..2947c4caf22e 100644
--- a/arch/powerpc/kernel/syscalls/syscall.tbl
+++ b/arch/powerpc/kernel/syscalls/syscall.tbl
@@ -543,3 +543,4 @@
454 common futex_wake sys_futex_wake
455 common futex_wait sys_futex_wait
456 common futex_requeue sys_futex_requeue
+457 common mseal sys_mseal
diff --git a/arch/s390/kernel/syscalls/syscall.tbl b/arch/s390/kernel/syscalls/syscall.tbl
index 86fec9b080f6..2400a0e91883 100644
--- a/arch/s390/kernel/syscalls/syscall.tbl
+++ b/arch/s390/kernel/syscalls/syscall.tbl
@@ -459,3 +459,4 @@
454 common futex_wake sys_futex_wake sys_futex_wake
455 common futex_wait sys_futex_wait sys_futex_wait
456 common futex_requeue sys_futex_requeue sys_futex_requeue
+457 common mseal sys_mseal sys_mseal
diff --git a/arch/sh/kernel/syscalls/syscall.tbl b/arch/sh/kernel/syscalls/syscall.tbl
index 363fae0fe9bf..6768e43c5d23 100644
--- a/arch/sh/kernel/syscalls/syscall.tbl
+++ b/arch/sh/kernel/syscalls/syscall.tbl
@@ -459,3 +459,4 @@
454 common futex_wake sys_futex_wake
455 common futex_wait sys_futex_wait
456 common futex_requeue sys_futex_requeue
+457 common mseal sys_mseal
diff --git a/arch/sparc/kernel/syscalls/syscall.tbl b/arch/sparc/kernel/syscalls/syscall.tbl
index 7bcaa3d5ea44..4285465f7e4b 100644
--- a/arch/sparc/kernel/syscalls/syscall.tbl
+++ b/arch/sparc/kernel/syscalls/syscall.tbl
@@ -502,3 +502,4 @@
454 common futex_wake sys_futex_wake
455 common futex_wait sys_futex_wait
456 common futex_requeue sys_futex_requeue
+457 common mseal sys_mseal
diff --git a/arch/x86/entry/syscalls/syscall_32.tbl b/arch/x86/entry/syscalls/syscall_32.tbl
index c8fac5205803..e4e3b2097658 100644
--- a/arch/x86/entry/syscalls/syscall_32.tbl
+++ b/arch/x86/entry/syscalls/syscall_32.tbl
@@ -461,3 +461,4 @@
454 i386 futex_wake sys_futex_wake
455 i386 futex_wait sys_futex_wait
456 i386 futex_requeue sys_futex_requeue
+457 i386 mseal sys_mseal
diff --git a/arch/x86/entry/syscalls/syscall_64.tbl b/arch/x86/entry/syscalls/syscall_64.tbl
index 8cb8bf68721c..03cff8a24726 100644
--- a/arch/x86/entry/syscalls/syscall_64.tbl
+++ b/arch/x86/entry/syscalls/syscall_64.tbl
@@ -378,6 +378,7 @@
454 common futex_wake sys_futex_wake
455 common futex_wait sys_futex_wait
456 common futex_requeue sys_futex_requeue
+457 common mseal sys_mseal
#
# Due to a historical design error, certain syscalls are numbered differently
diff --git a/arch/xtensa/kernel/syscalls/syscall.tbl b/arch/xtensa/kernel/syscalls/syscall.tbl
index 06eefa9c1458..3c87cf0424c8 100644
--- a/arch/xtensa/kernel/syscalls/syscall.tbl
+++ b/arch/xtensa/kernel/syscalls/syscall.tbl
@@ -427,3 +427,4 @@
454 common futex_wake sys_futex_wake
455 common futex_wait sys_futex_wait
456 common futex_requeue sys_futex_requeue
+457 common mseal sys_mseal
diff --git a/include/uapi/asm-generic/unistd.h b/include/uapi/asm-generic/unistd.h
index 756b013fb832..9b2b6a4a80b6 100644
--- a/include/uapi/asm-generic/unistd.h
+++ b/include/uapi/asm-generic/unistd.h
@@ -829,8 +829,11 @@ __SYSCALL(__NR_futex_wait, sys_futex_wait)
#define __NR_futex_requeue 456
__SYSCALL(__NR_futex_requeue, sys_futex_requeue)
+#define __NR_mseal 457
+__SYSCALL(__NR_mseal, sys_mseal)
+
#undef __NR_syscalls
-#define __NR_syscalls 457
+#define __NR_syscalls 458
/*
* 32 bit systems traditionally used different
--
2.43.0.195.gebba966016-goog
From: Jeff Xu <[email protected]>
The new mseal() is a syscall available on 64-bit CPUs, with the
following signature:
int mseal(void *addr, size_t len, unsigned long flags)
addr/len: memory range.
flags: reserved.
mseal() blocks the following operations for the given memory range:
1> Unmapping, moving to another location, and shrinking the size,
via munmap() and mremap(). These can leave an empty space in the
range, which can then be filled by a VMA with a new set of
attributes.
2> Moving or expanding a different VMA into the current location,
via mremap().
3> Modifying a VMA via mmap(MAP_FIXED).
4> Size expansion via mremap(). This does not appear to pose any
specific risk to sealed VMAs, but it is included anyway because its
use case is unclear. In any case, users can rely on merging to
expand a sealed VMA.
5> mprotect() and pkey_mprotect().
6> Certain destructive madvise() behaviors (e.g. MADV_DONTNEED) on
anonymous memory, when users don't have write permission to the
memory. Those behaviors can alter region contents by discarding
pages, effectively a memset(0) for anonymous memory.
In addition, mmap() gains two related changes:
The PROT_SEAL bit in the prot field of mmap(). When present, it
marks the mapping as sealed from creation.
The MAP_SEALABLE bit in the flags field of mmap(). When present, it
marks the mapping as sealable. A mapping created without MAP_SEALABLE
does not support sealing, i.e. mseal() on it will fail.
Applications that don't care about sealing will see their behavior
unchanged. Those that need sealing support opt in by passing
MAP_SEALABLE to mmap().
Signed-off-by: Jeff Xu <[email protected]>
---
include/linux/mm.h | 60 +++++
include/linux/syscalls.h | 1 +
include/uapi/asm-generic/mman-common.h | 7 +
kernel/sys_ni.c | 1 +
mm/Makefile | 4 +
mm/madvise.c | 12 +
mm/mmap.c | 27 ++
mm/mprotect.c | 10 +
mm/mremap.c | 31 +++
mm/mseal.c | 330 +++++++++++++++++++++++++
10 files changed, 483 insertions(+)
create mode 100644 mm/mseal.c
diff --git a/include/linux/mm.h b/include/linux/mm.h
index da5219b48d52..50d6613ec352 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -30,6 +30,7 @@
#include <linux/kasan.h>
#include <linux/memremap.h>
#include <linux/slab.h>
+#include <uapi/linux/mman.h>
struct mempolicy;
struct anon_vma;
@@ -328,6 +329,14 @@ extern unsigned int kobjsize(const void *objp);
#define VM_HIGH_ARCH_5 BIT(VM_HIGH_ARCH_BIT_5)
#endif /* CONFIG_ARCH_USES_HIGH_VMA_FLAGS */
+#ifdef CONFIG_64BIT
+/* VM is sealable, in vm_flags */
+#define VM_SEALABLE _BITUL(63)
+
+/* VM is sealed, in vm_flags */
+#define VM_SEALED _BITUL(62)
+#endif
+
#ifdef CONFIG_ARCH_HAS_PKEYS
# define VM_PKEY_SHIFT VM_HIGH_ARCH_BIT_0
# define VM_PKEY_BIT0 VM_HIGH_ARCH_0 /* A protection key is a 4-bit value */
@@ -4143,4 +4152,55 @@ static inline bool pfn_is_unaccepted_memory(unsigned long pfn)
return range_contains_unaccepted_memory(paddr, paddr + PAGE_SIZE);
}
+#ifdef CONFIG_64BIT
+static inline int can_do_mseal(unsigned long flags)
+{
+ if (flags)
+ return -EINVAL;
+
+ return 0;
+}
+
+extern bool can_modify_mm(struct mm_struct *mm, unsigned long start,
+ unsigned long end);
+extern bool can_modify_mm_madv(struct mm_struct *mm, unsigned long start,
+ unsigned long end, int behavior);
+
+static inline unsigned long get_mmap_seals(unsigned long prot,
+ unsigned long flags)
+{
+ unsigned long vm_seals;
+
+ if (prot & PROT_SEAL)
+ vm_seals = VM_SEALED | VM_SEALABLE;
+ else
+ vm_seals = (flags & MAP_SEALABLE) ? VM_SEALABLE : 0;
+
+ return vm_seals;
+}
+#else
+static inline int can_do_mseal(unsigned long flags)
+{
+ return -EPERM;
+}
+
+static inline bool can_modify_mm(struct mm_struct *mm, unsigned long start,
+ unsigned long end)
+{
+ return true;
+}
+
+static inline bool can_modify_mm_madv(struct mm_struct *mm, unsigned long start,
+ unsigned long end, int behavior)
+{
+ return true;
+}
+
+static inline unsigned long get_mmap_seals(unsigned long prot,
+ unsigned long flags)
+{
+ return 0;
+}
+#endif
+
#endif /* _LINUX_MM_H */
diff --git a/include/linux/syscalls.h b/include/linux/syscalls.h
index fd9d12de7e92..60a2cf0f6bb5 100644
--- a/include/linux/syscalls.h
+++ b/include/linux/syscalls.h
@@ -811,6 +811,7 @@ asmlinkage long sys_process_mrelease(int pidfd, unsigned int flags);
asmlinkage long sys_remap_file_pages(unsigned long start, unsigned long size,
unsigned long prot, unsigned long pgoff,
unsigned long flags);
+asmlinkage long sys_mseal(unsigned long start, size_t len, unsigned long flags);
asmlinkage long sys_mbind(unsigned long start, unsigned long len,
unsigned long mode,
const unsigned long __user *nmask,
diff --git a/include/uapi/asm-generic/mman-common.h b/include/uapi/asm-generic/mman-common.h
index 6ce1f1ceb432..4a135eb3fe7e 100644
--- a/include/uapi/asm-generic/mman-common.h
+++ b/include/uapi/asm-generic/mman-common.h
@@ -17,6 +17,11 @@
#define PROT_GROWSDOWN 0x01000000 /* mprotect flag: extend change to start of growsdown vma */
#define PROT_GROWSUP 0x02000000 /* mprotect flag: extend change to end of growsup vma */
+/*
+ * The PROT_SEAL defines memory sealing in the prot argument of mmap().
+ */
+#define PROT_SEAL _BITUL(26) /* 0x04000000 */
+
/* 0x01 - 0x03 are defined in linux/mman.h */
#define MAP_TYPE 0x0f /* Mask for type of mapping */
#define MAP_FIXED 0x10 /* Interpret addr exactly */
@@ -33,6 +38,8 @@
#define MAP_UNINITIALIZED 0x4000000 /* For anonymous mmap, memory could be
* uninitialized */
+#define MAP_SEALABLE _BITUL(27) /* 0x8000000, map is sealable */
+
/*
* Flags for mlock
*/
diff --git a/kernel/sys_ni.c b/kernel/sys_ni.c
index 9a846439b36a..02280199069b 100644
--- a/kernel/sys_ni.c
+++ b/kernel/sys_ni.c
@@ -193,6 +193,7 @@ COND_SYSCALL(migrate_pages);
COND_SYSCALL(move_pages);
COND_SYSCALL(set_mempolicy_home_node);
COND_SYSCALL(cachestat);
+COND_SYSCALL(mseal);
COND_SYSCALL(perf_event_open);
COND_SYSCALL(accept4);
diff --git a/mm/Makefile b/mm/Makefile
index 33873c8aedb3..ba03c09a0519 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -46,6 +46,10 @@ ifdef CONFIG_CROSS_MEMORY_ATTACH
mmu-$(CONFIG_MMU) += process_vm_access.o
endif
+ifdef CONFIG_64BIT
+mmu-$(CONFIG_MMU) += mseal.o
+endif
+
obj-y := filemap.o mempool.o oom_kill.o fadvise.o \
maccess.o page-writeback.o folio-compat.o \
readahead.o swap.o truncate.o vmscan.o shrinker.o \
diff --git a/mm/madvise.c b/mm/madvise.c
index 6214a1ab5654..1d4cd41bad23 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -1393,6 +1393,7 @@ int madvise_set_anon_name(struct mm_struct *mm, unsigned long start,
* -EIO - an I/O error occurred while paging in data.
* -EBADF - map exists, but area maps something that isn't a file.
* -EAGAIN - a kernel resource was temporarily unavailable.
+ * -EACCES - memory is sealed.
*/
int do_madvise(struct mm_struct *mm, unsigned long start, size_t len_in, int behavior)
{
@@ -1436,10 +1437,21 @@ int do_madvise(struct mm_struct *mm, unsigned long start, size_t len_in, int beh
start = untagged_addr_remote(mm, start);
end = start + len;
+ /*
+ * Check if the address range is sealed for do_madvise().
+ * can_modify_mm_madv assumes we have acquired the lock on MM.
+ */
+ if (!can_modify_mm_madv(mm, start, end, behavior)) {
+ error = -EACCES;
+ goto out;
+ }
+
blk_start_plug(&plug);
error = madvise_walk_vmas(mm, start, end, behavior,
madvise_vma_behavior);
blk_finish_plug(&plug);
+
+out:
if (write)
mmap_write_unlock(mm);
else
diff --git a/mm/mmap.c b/mm/mmap.c
index 1971bfffcc03..54a38b11a305 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -1213,6 +1213,7 @@ unsigned long do_mmap(struct file *file, unsigned long addr,
{
struct mm_struct *mm = current->mm;
int pkey = 0;
+ unsigned long vm_seals;
*populate = 0;
@@ -1233,6 +1234,8 @@ unsigned long do_mmap(struct file *file, unsigned long addr,
if (flags & MAP_FIXED_NOREPLACE)
flags |= MAP_FIXED;
+ vm_seals = get_mmap_seals(prot, flags);
+
if (!(flags & MAP_FIXED))
addr = round_hint_to_min(addr);
@@ -1261,6 +1264,13 @@ unsigned long do_mmap(struct file *file, unsigned long addr,
return -EEXIST;
}
+ /*
+ * Check if the address range is sealed for do_mmap().
+ * can_modify_mm assumes we have acquired the lock on MM.
+ */
+ if (!can_modify_mm(mm, addr, addr + len))
+ return -EACCES;
+
if (prot == PROT_EXEC) {
pkey = execute_only_pkey(mm);
if (pkey < 0)
@@ -1376,6 +1386,7 @@ unsigned long do_mmap(struct file *file, unsigned long addr,
vm_flags |= VM_NORESERVE;
}
+ vm_flags |= vm_seals;
addr = mmap_region(file, addr, len, vm_flags, pgoff, uf);
if (!IS_ERR_VALUE(addr) &&
((vm_flags & VM_LOCKED) ||
@@ -2711,6 +2722,14 @@ int do_vmi_munmap(struct vma_iterator *vmi, struct mm_struct *mm,
if (end == start)
return -EINVAL;
+ /*
+ * Check if memory is sealed before arch_unmap.
+ * Prevent unmapping a sealed VMA.
+ * can_modify_mm assumes we have acquired the lock on MM.
+ */
+ if (!can_modify_mm(mm, start, end))
+ return -EACCES;
+
/* arch_unmap() might do unmaps itself. */
arch_unmap(mm, start, end);
@@ -3134,6 +3153,14 @@ int do_vma_munmap(struct vma_iterator *vmi, struct vm_area_struct *vma,
{
struct mm_struct *mm = vma->vm_mm;
+ /*
+ * Check if memory is sealed before arch_unmap.
+ * Prevent unmapping a sealed VMA.
+ * can_modify_mm assumes we have acquired the lock on MM.
+ */
+ if (!can_modify_mm(mm, start, end))
+ return -EACCES;
+
arch_unmap(mm, start, end);
return do_vmi_align_munmap(vmi, vma, mm, start, end, uf, unlock);
}
diff --git a/mm/mprotect.c b/mm/mprotect.c
index 81991102f785..eaa356ff3099 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -32,6 +32,7 @@
#include <linux/sched/sysctl.h>
#include <linux/userfaultfd_k.h>
#include <linux/memory-tiers.h>
+#include <uapi/linux/mman.h>
#include <asm/cacheflush.h>
#include <asm/mmu_context.h>
#include <asm/tlbflush.h>
@@ -743,6 +744,15 @@ static int do_mprotect_pkey(unsigned long start, size_t len,
}
}
+ /*
+ * Check if memory is sealed.
+ * can_modify_mm assumes we have acquired the lock on MM.
+ */
+ if (!can_modify_mm(current->mm, start, end)) {
+ error = -EACCES;
+ goto out;
+ }
+
prev = vma_prev(&vmi);
if (start > vma->vm_start)
prev = vma;
diff --git a/mm/mremap.c b/mm/mremap.c
index 38d98465f3d8..81db7d05dbe0 100644
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -902,7 +902,25 @@ static unsigned long mremap_to(unsigned long addr, unsigned long old_len,
if ((mm->map_count + 2) >= sysctl_max_map_count - 3)
return -ENOMEM;
+ /*
+ * In mremap_to().
+ * Move a VMA to another location, check if src addr is sealed.
+ *
+ * Place can_modify_mm here because mremap_to()
+ * does its own checking for address range, and we only
+ * check the sealing after passing those checks.
+ *
+ * can_modify_mm assumes we have acquired the lock on MM.
+ */
+ if (!can_modify_mm(mm, addr, addr + old_len))
+ return -EACCES;
+
if (flags & MREMAP_FIXED) {
+ /*
+ * In mremap_to().
+ * The VMA is moved to the dst address; munmap dst first.
+ * do_munmap will check if dst is sealed.
+ */
ret = do_munmap(mm, new_addr, new_len, uf_unmap_early);
if (ret)
goto out;
@@ -1061,6 +1079,19 @@ SYSCALL_DEFINE5(mremap, unsigned long, addr, unsigned long, old_len,
goto out;
}
+ /*
+ * Below is the shrink/expand case (not mremap_to()).
+ * Check if the src address is sealed; if so, reject.
+ * In other words, prevent shrinking or expanding a sealed VMA.
+ *
+ * Place can_modify_mm here so we can keep the logic related to
+ * shrink/expand together.
+ */
+ if (!can_modify_mm(mm, addr, addr + old_len)) {
+ ret = -EACCES;
+ goto out;
+ }
+
/*
* Always allow a shrinking remap: that just unmaps
* the unnecessary pages..
diff --git a/mm/mseal.c b/mm/mseal.c
new file mode 100644
index 000000000000..397a7b4cc7b3
--- /dev/null
+++ b/mm/mseal.c
@@ -0,0 +1,330 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Implement mseal() syscall.
+ *
+ * Copyright (c) 2023,2024 Google, Inc.
+ *
+ * Author: Jeff Xu <[email protected]>
+ */
+
+#include <linux/mempolicy.h>
+#include <linux/mman.h>
+#include <linux/mm.h>
+#include <linux/mm_inline.h>
+#include <linux/mmu_context.h>
+#include <linux/syscalls.h>
+#include <linux/sched.h>
+#include "internal.h"
+
+static inline bool vma_is_sealed(struct vm_area_struct *vma)
+{
+ return (vma->vm_flags & VM_SEALED);
+}
+
+static inline bool vma_is_sealable(struct vm_area_struct *vma)
+{
+ return vma->vm_flags & VM_SEALABLE;
+}
+
+static inline void set_vma_sealed(struct vm_area_struct *vma)
+{
+ vma->__vm_flags |= VM_SEALED;
+}
+
+/*
+ * Check if a vma is sealed for modification.
+ * Return true if modification is allowed.
+ */
+bool can_modify_vma(struct vm_area_struct *vma)
+{
+ if (vma_is_sealed(vma))
+ return false;
+
+ return true;
+}
+
+static bool is_madv_discard(int behavior)
+{
+ /* behavior is an enum value, not a bitmask: compare exactly. */
+ switch (behavior) {
+ case MADV_FREE:
+ case MADV_DONTNEED:
+ case MADV_DONTNEED_LOCKED:
+ case MADV_REMOVE:
+ case MADV_DONTFORK:
+ case MADV_WIPEONFORK:
+ return true;
+ }
+ return false;
+}
+
+static bool is_ro_anon(struct vm_area_struct *vma)
+{
+ /* check anonymous mapping. */
+ if (vma->vm_file || vma->vm_flags & VM_SHARED)
+ return false;
+
+ /*
+ * Check for non-writable:
+ * PROT=RO, or PKRU does not allow write.
+ */
+ if (!(vma->vm_flags & VM_WRITE) ||
+ !arch_vma_access_permitted(vma, true, false, false))
+ return true;
+
+ return false;
+}
+
+/*
+ * Check if the vmas of a memory range are allowed to be modified.
+ * The memory range can have a gap (unallocated memory).
+ * Return true if it is allowed.
+ */
+bool can_modify_mm(struct mm_struct *mm, unsigned long start, unsigned long end)
+{
+ struct vm_area_struct *vma;
+
+ VMA_ITERATOR(vmi, mm, start);
+
+ /* going through each vma to check. */
+ for_each_vma_range(vmi, vma, end) {
+ if (!can_modify_vma(vma))
+ return false;
+ }
+
+ /* Allow by default. */
+ return true;
+}
+
+/*
+ * Check if the vmas of a memory range are allowed to be modified by madvise.
+ * The memory range can have a gap (unallocated memory).
+ * Return true if it is allowed.
+ */
+bool can_modify_mm_madv(struct mm_struct *mm, unsigned long start, unsigned long end,
+ int behavior)
+{
+ struct vm_area_struct *vma;
+
+ VMA_ITERATOR(vmi, mm, start);
+
+ if (!is_madv_discard(behavior))
+ return true;
+
+ /* going through each vma to check. */
+ for_each_vma_range(vmi, vma, end)
+ if (is_ro_anon(vma) && !can_modify_vma(vma))
+ return false;
+
+ /* Allow by default. */
+ return true;
+}
+
+/*
+ * Check if a seal type can be added to VMA.
+ */
+static bool can_add_vma_seal(struct vm_area_struct *vma)
+{
+ /* if map is not sealable, reject. */
+ if (!vma_is_sealable(vma))
+ return false;
+
+ return true;
+}
+
+static int mseal_fixup(struct vma_iterator *vmi, struct vm_area_struct *vma,
+ struct vm_area_struct **prev, unsigned long start,
+ unsigned long end, vm_flags_t newflags)
+{
+ int ret = 0;
+ vm_flags_t oldflags = vma->vm_flags;
+
+ if (newflags == oldflags)
+ goto out;
+
+ vma = vma_modify_flags(vmi, *prev, vma, start, end, newflags);
+ if (IS_ERR(vma)) {
+ ret = PTR_ERR(vma);
+ goto out;
+ }
+
+ set_vma_sealed(vma);
+out:
+ *prev = vma;
+ return ret;
+}
+
+/*
+ * Check for do_mseal:
+ * 1> start is part of a valid vma.
+ * 2> end is part of a valid vma.
+ * 3> No gap (unallocated address) between start and end.
+ * 4> map is sealable.
+ */
+static int check_mm_seal(unsigned long start, unsigned long end)
+{
+ struct vm_area_struct *vma;
+ unsigned long nstart = start;
+
+ VMA_ITERATOR(vmi, current->mm, start);
+
+ /* going through each vma to check. */
+ for_each_vma_range(vmi, vma, end) {
+ if (vma->vm_start > nstart)
+ /* unallocated memory found. */
+ return -ENOMEM;
+
+ if (!can_add_vma_seal(vma))
+ return -EACCES;
+
+ if (vma->vm_end >= end)
+ return 0;
+
+ nstart = vma->vm_end;
+ }
+
+ return -ENOMEM;
+}
+
+/*
+ * Apply sealing.
+ */
+static int apply_mm_seal(unsigned long start, unsigned long end)
+{
+ unsigned long nstart;
+ struct vm_area_struct *vma, *prev;
+
+ VMA_ITERATOR(vmi, current->mm, start);
+
+ vma = vma_iter_load(&vmi);
+ /*
+ * Note: check_mm_seal should have already checked the ENOMEM case,
+ * so vma should not be NULL; the same holds for the other ENOMEM
+ * cases.
+ */
+ prev = vma_prev(&vmi);
+ if (start > vma->vm_start)
+ prev = vma;
+
+ nstart = start;
+ for_each_vma_range(vmi, vma, end) {
+ int error;
+ unsigned long tmp;
+ vm_flags_t newflags;
+
+ newflags = vma->vm_flags | VM_SEALED;
+ tmp = vma->vm_end;
+ if (tmp > end)
+ tmp = end;
+ error = mseal_fixup(&vmi, vma, &prev, nstart, tmp, newflags);
+ if (error)
+ return error;
+ tmp = vma_iter_end(&vmi);
+ nstart = tmp;
+ }
+
+ return 0;
+}
+
+/*
+ * mseal(2) seals the VMA's metadata against
+ * selected syscalls.
+ *
+ * addr/len: VM address range.
+ *
+ * The address range by addr/len must meet:
+ * start (addr) must be in a valid VMA.
+ * end (addr + len) must be in a valid VMA.
+ * no gap (unallocated memory) between start and end.
+ * start (addr) must be page aligned.
+ *
+ * len: len will be page aligned implicitly.
+ *
+ * Below VMA operations are blocked after sealing.
+ * 1> Unmapping, moving to another location, and shrinking
+ * the size, via munmap() and mremap(), can leave an empty
+ * space, therefore can be replaced with a VMA with a new
+ * set of attributes.
+ * 2> Moving or expanding a different vma into the current location,
+ * via mremap().
+ * 3> Modifying a VMA via mmap(MAP_FIXED).
+ * 4> Size expansion, via mremap(), does not appear to pose any
+ * specific risks to sealed VMAs. It is included anyway because
+ * the use case is unclear. In any case, users can rely on
+ * merging to expand a sealed VMA.
+ * 5> mprotect and pkey_mprotect.
+ * 6> Some destructive madvise() behaviors (e.g. MADV_DONTNEED)
+ * for anonymous memory, when users don't have write permission to the
+ * memory. Those behaviors can alter region contents by discarding pages,
+ * effectively a memset(0) for anonymous memory.
+ *
+ * flags: reserved.
+ *
+ * return values:
+ * zero: success.
+ * -EINVAL:
+ * invalid input flags.
+ * start address is not page aligned.
+ * address range (start + len) overflow.
+ * -ENOMEM:
+ * addr is not a valid address (not allocated).
+ * end (start + len) is not a valid address.
+ * a gap (unallocated memory) between start and end.
+ * -EACCES:
+ * MAP_SEALABLE is not set.
+ * -EPERM:
+ * - On 32-bit architectures, sealing is not supported.
+ * Note:
+ * user can call mseal(2) multiple times; sealing already-sealed
+ * memory is a no-op (no error).
+ *
+ * unseal() is not supported.
+ */
+static int do_mseal(unsigned long start, size_t len_in, unsigned long flags)
+{
+ size_t len;
+ int ret = 0;
+ unsigned long end;
+ struct mm_struct *mm = current->mm;
+
+ ret = can_do_mseal(flags);
+ if (ret)
+ return ret;
+
+ start = untagged_addr(start);
+ if (!PAGE_ALIGNED(start))
+ return -EINVAL;
+
+ len = PAGE_ALIGN(len_in);
+ /* Check to see whether len was rounded up from small -ve to zero. */
+ if (len_in && !len)
+ return -EINVAL;
+
+ end = start + len;
+ if (end < start)
+ return -EINVAL;
+
+ if (end == start)
+ return 0;
+
+ if (mmap_write_lock_killable(mm))
+ return -EINTR;
+
+ /*
+ * First pass: this helps to avoid partial sealing in case of an
+ * error in the input address range, e.g. an ENOMEM or EACCES
+ * error.
+ */
+ ret = check_mm_seal(start, end);
+ if (ret)
+ goto out;
+
+ /*
+ * Second pass: this should succeed, unless there are errors from
+ * vma_modify_flags, e.g. a merge/split error, or the process
+ * reaching the max number of supported VMAs; however, those cases
+ * should be rare.
+ */
+ ret = apply_mm_seal(start, end);
+
+out:
+ mmap_write_unlock(mm);
+ return ret;
+}
+
+SYSCALL_DEFINE3(mseal, unsigned long, start, size_t, len, unsigned long,
+ flags)
+{
+ return do_mseal(start, len, flags);
+}
--
2.43.0.195.gebba966016-goog
From: Jeff Xu <[email protected]>
selftest for memory sealing change in mmap() and mseal().
Signed-off-by: Jeff Xu <[email protected]>
---
tools/testing/selftests/mm/.gitignore | 1 +
tools/testing/selftests/mm/Makefile | 1 +
tools/testing/selftests/mm/mseal_test.c | 1971 +++++++++++++++++++++++
3 files changed, 1973 insertions(+)
create mode 100644 tools/testing/selftests/mm/mseal_test.c
diff --git a/tools/testing/selftests/mm/.gitignore b/tools/testing/selftests/mm/.gitignore
index 4ff10ea61461..76474c51c786 100644
--- a/tools/testing/selftests/mm/.gitignore
+++ b/tools/testing/selftests/mm/.gitignore
@@ -46,3 +46,4 @@ gup_longterm
mkdirty
va_high_addr_switch
hugetlb_fault_after_madv
+mseal_test
diff --git a/tools/testing/selftests/mm/Makefile b/tools/testing/selftests/mm/Makefile
index dede0bcf97a3..652c07ff81f5 100644
--- a/tools/testing/selftests/mm/Makefile
+++ b/tools/testing/selftests/mm/Makefile
@@ -59,6 +59,7 @@ TEST_GEN_FILES += mlock2-tests
TEST_GEN_FILES += mrelease_test
TEST_GEN_FILES += mremap_dontunmap
TEST_GEN_FILES += mremap_test
+TEST_GEN_FILES += mseal_test
TEST_GEN_FILES += on-fault-limit
TEST_GEN_FILES += pagemap_ioctl
TEST_GEN_FILES += thuge-gen
diff --git a/tools/testing/selftests/mm/mseal_test.c b/tools/testing/selftests/mm/mseal_test.c
new file mode 100644
index 000000000000..a916d30e97c2
--- /dev/null
+++ b/tools/testing/selftests/mm/mseal_test.c
@@ -0,0 +1,1971 @@
+// SPDX-License-Identifier: GPL-2.0
+#define _GNU_SOURCE
+#include <sys/mman.h>
+#include <stdint.h>
+#include <unistd.h>
+#include <string.h>
+#include <sys/time.h>
+#include <sys/resource.h>
+#include <stdbool.h>
+#include "../kselftest.h"
+#include <syscall.h>
+#include <errno.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <assert.h>
+#include <fcntl.h>
+#include <sys/ioctl.h>
+#include <sys/vfs.h>
+#include <sys/stat.h>
+
+/*
+ * Need these definitions when building manually with gcc:
+ * gcc -I ../../../../usr/include -DDEBUG -O3 mseal_test.c -o mseal_test
+ */
+#ifndef MAP_SEALABLE
+#define MAP_SEALABLE 0x8000000
+#endif
+
+#ifndef PROT_SEAL
+#define PROT_SEAL 0x04000000
+#endif
+
+#ifndef PKEY_DISABLE_ACCESS
+# define PKEY_DISABLE_ACCESS 0x1
+#endif
+
+#ifndef PKEY_DISABLE_WRITE
+# define PKEY_DISABLE_WRITE 0x2
+#endif
+
+#ifndef PKEY_BITS_PER_PKEY
+#define PKEY_BITS_PER_PKEY 2
+#endif
+
+#ifndef PKEY_MASK
+#define PKEY_MASK (PKEY_DISABLE_ACCESS | PKEY_DISABLE_WRITE)
+#endif
+
+#ifndef DEBUG
+#define LOG_TEST_ENTER() {}
+#else
+#define LOG_TEST_ENTER() {ksft_print_msg("%s\n", __func__); }
+#endif
+
+#ifndef u64
+#define u64 unsigned long long
+#endif
+
+static unsigned long get_vma_size(void *addr)
+{
+ FILE *maps;
+ char line[256];
+ unsigned long size = 0;
+ uintptr_t addr_start, addr_end;
+
+ maps = fopen("/proc/self/maps", "r");
+ if (!maps)
+ return 0;
+
+ while (fgets(line, sizeof(line), maps)) {
+ if (sscanf(line, "%lx-%lx", &addr_start, &addr_end) == 2) {
+ if (addr_start == (uintptr_t) addr) {
+ size = addr_end - addr_start;
+ break;
+ }
+ }
+ }
+ fclose(maps);
+ return size;
+}
+
+/*
+ * define sys_xyx to call syscall directly.
+ */
+static int sys_mseal(void *start, size_t len)
+{
+ int sret;
+
+ errno = 0;
+ sret = syscall(__NR_mseal, start, len, 0);
+ return sret;
+}
+
+static int sys_mprotect(void *ptr, size_t size, unsigned long prot)
+{
+ int sret;
+
+ errno = 0;
+ sret = syscall(SYS_mprotect, ptr, size, prot);
+ return sret;
+}
+
+static int sys_mprotect_pkey(void *ptr, size_t size, unsigned long orig_prot,
+ unsigned long pkey)
+{
+ int sret;
+
+ errno = 0;
+ sret = syscall(__NR_pkey_mprotect, ptr, size, orig_prot, pkey);
+ return sret;
+}
+
+/*
+ * define sys_xyx to call syscall directly.
+ */
+static void *sys_mmap(void *addr, unsigned long len, unsigned long prot,
+ unsigned long flags, unsigned long fd, unsigned long offset)
+{
+ void *sret;
+
+ errno = 0;
+ sret = (void *) syscall(__NR_mmap, addr, len, prot,
+ flags, fd, offset);
+ return sret;
+}
+
+static int sys_munmap(void *ptr, size_t size)
+{
+ int sret;
+
+ errno = 0;
+ sret = syscall(SYS_munmap, ptr, size);
+ return sret;
+}
+
+static int sys_madvise(void *start, size_t len, int types)
+{
+ int sret;
+
+ errno = 0;
+ sret = syscall(__NR_madvise, start, len, types);
+ return sret;
+}
+
+static int sys_pkey_alloc(unsigned long flags, unsigned long init_val)
+{
+ int ret = syscall(SYS_pkey_alloc, flags, init_val);
+
+ return ret;
+}
+
+static unsigned int __read_pkey_reg(void)
+{
+ unsigned int eax, edx;
+ unsigned int ecx = 0;
+ unsigned int pkey_reg;
+
+ asm volatile(".byte 0x0f,0x01,0xee\n\t"
+ : "=a" (eax), "=d" (edx)
+ : "c" (ecx));
+ pkey_reg = eax;
+ return pkey_reg;
+}
+
+static void __write_pkey_reg(u64 pkey_reg)
+{
+ unsigned int eax = pkey_reg;
+ unsigned int ecx = 0;
+ unsigned int edx = 0;
+
+ asm volatile(".byte 0x0f,0x01,0xef\n\t"
+ : : "a" (eax), "c" (ecx), "d" (edx));
+ assert(pkey_reg == __read_pkey_reg());
+}
+
+static unsigned long pkey_bit_position(int pkey)
+{
+ return pkey * PKEY_BITS_PER_PKEY;
+}
+
+static u64 set_pkey_bits(u64 reg, int pkey, u64 flags)
+{
+ unsigned long shift = pkey_bit_position(pkey);
+
+ /* mask out bits from pkey in old value */
+ reg &= ~((u64)PKEY_MASK << shift);
+ /* OR in new bits for pkey */
+ reg |= (flags & PKEY_MASK) << shift;
+ return reg;
+}
+
+static void set_pkey(int pkey, unsigned long pkey_value)
+{
+ unsigned long mask = (PKEY_DISABLE_ACCESS | PKEY_DISABLE_WRITE);
+ u64 new_pkey_reg;
+
+ assert(!(pkey_value & ~mask));
+ new_pkey_reg = set_pkey_bits(__read_pkey_reg(), pkey, pkey_value);
+ __write_pkey_reg(new_pkey_reg);
+}
+
+static void setup_single_address(int size, void **ptrOut)
+{
+ void *ptr;
+
+ ptr = sys_mmap(NULL, size, PROT_READ, MAP_ANONYMOUS | MAP_PRIVATE | MAP_SEALABLE, -1, 0);
+ assert(ptr != (void *)-1);
+ *ptrOut = ptr;
+}
+
+static void setup_single_address_sealable(int size, void **ptrOut, bool sealable)
+{
+ void *ptr;
+ unsigned long mapflags = MAP_ANONYMOUS | MAP_PRIVATE;
+
+ if (sealable)
+ mapflags |= MAP_SEALABLE;
+
+ ptr = sys_mmap(NULL, size, PROT_READ, mapflags, -1, 0);
+ assert(ptr != (void *)-1);
+ *ptrOut = ptr;
+}
+
+static void setup_single_address_rw_sealable(int size, void **ptrOut, bool sealable)
+{
+ void *ptr;
+ unsigned long mapflags = MAP_ANONYMOUS | MAP_PRIVATE;
+
+ if (sealable)
+ mapflags |= MAP_SEALABLE;
+
+ ptr = sys_mmap(NULL, size, PROT_READ | PROT_WRITE, mapflags, -1, 0);
+ assert(ptr != (void *)-1);
+ *ptrOut = ptr;
+}
+
+static void clean_single_address(void *ptr, int size)
+{
+ int ret;
+
+ ret = munmap(ptr, size);
+ assert(!ret);
+}
+
+static void seal_single_address(void *ptr, int size)
+{
+ int ret;
+
+ ret = sys_mseal(ptr, size);
+ assert(!ret);
+}
+
+static void test_seal_addseal(void)
+{
+ LOG_TEST_ENTER();
+ int ret;
+ void *ptr;
+ unsigned long page_size = getpagesize();
+ unsigned long size = 4 * page_size;
+
+ setup_single_address(size, &ptr);
+
+ ret = sys_mseal(ptr, size);
+ assert(!ret);
+}
+
+static void test_seal_unmapped_start(void)
+{
+ LOG_TEST_ENTER();
+ int ret;
+ void *ptr;
+ unsigned long page_size = getpagesize();
+ unsigned long size = 4 * page_size;
+
+ setup_single_address(size, &ptr);
+
+ /* munmap 2 pages from ptr. */
+ ret = sys_munmap(ptr, 2 * page_size);
+ assert(!ret);
+
+ /* mprotect will fail because 2 pages from ptr are unmapped. */
+ ret = sys_mprotect(ptr, size, PROT_READ | PROT_WRITE);
+ assert(ret < 0);
+
+ /* mseal will fail because 2 pages from ptr are unmapped. */
+ ret = sys_mseal(ptr, size);
+ assert(ret < 0);
+
+ ret = sys_mseal(ptr + 2 * page_size, 2 * page_size);
+ assert(!ret);
+}
+
+static void test_seal_unmapped_middle(void)
+{
+ LOG_TEST_ENTER();
+ int ret;
+ void *ptr;
+ unsigned long page_size = getpagesize();
+ unsigned long size = 4 * page_size;
+
+ setup_single_address(size, &ptr);
+
+ /* munmap 2 pages from ptr + page. */
+ ret = sys_munmap(ptr + page_size, 2 * page_size);
+ assert(!ret);
+
+ /* mprotect will fail, since size is 4 pages. */
+ ret = sys_mprotect(ptr, size, PROT_READ | PROT_WRITE);
+ assert(ret < 0);
+
+ /* mseal will fail as well. */
+ ret = sys_mseal(ptr, size);
+ assert(ret < 0);
+
+ /* we can still add seals to the first page and the last page. */
+ ret = sys_mseal(ptr, page_size);
+ assert(!ret);
+
+ ret = sys_mseal(ptr + 3 * page_size, page_size);
+ assert(!ret);
+}
+
+static void test_seal_unmapped_end(void)
+{
+ LOG_TEST_ENTER();
+ int ret;
+ void *ptr;
+ unsigned long page_size = getpagesize();
+ unsigned long size = 4 * page_size;
+
+ setup_single_address(size, &ptr);
+
+ /* unmap last 2 pages. */
+ ret = sys_munmap(ptr + 2 * page_size, 2 * page_size);
+ assert(!ret);
+
+ /* mprotect will fail since last 2 pages are unmapped. */
+ ret = sys_mprotect(ptr, size, PROT_READ | PROT_WRITE);
+ assert(ret < 0);
+
+ /* mseal will fail as well. */
+ ret = sys_mseal(ptr, size);
+ assert(ret < 0);
+
+ /* The first 2 pages are not sealed; seals can still be added. */
+ ret = sys_mseal(ptr, 2 * page_size);
+ assert(!ret);
+}
+
+static void test_seal_multiple_vmas(void)
+{
+ LOG_TEST_ENTER();
+ int ret;
+ void *ptr;
+ unsigned long page_size = getpagesize();
+ unsigned long size = 4 * page_size;
+
+ setup_single_address(size, &ptr);
+
+ /* use mprotect to split the vma into 3. */
+ ret = sys_mprotect(ptr + page_size, 2 * page_size,
+ PROT_READ | PROT_WRITE);
+ assert(!ret);
+
+ /* mprotect will get applied to all 4 pages - 3 VMAs. */
+ ret = sys_mprotect(ptr, size, PROT_READ);
+ assert(!ret);
+
+ /* use mprotect to split the vma into 3. */
+ ret = sys_mprotect(ptr + page_size, 2 * page_size,
+ PROT_READ | PROT_WRITE);
+ assert(!ret);
+
+ /* mseal gets applied to all 4 pages - 3 VMAs. */
+ ret = sys_mseal(ptr, size);
+ assert(!ret);
+}
+
+static void test_seal_split_start(void)
+{
+ LOG_TEST_ENTER();
+ int ret;
+ void *ptr;
+ unsigned long page_size = getpagesize();
+ unsigned long size = 4 * page_size;
+
+ setup_single_address(size, &ptr);
+
+ /* use mprotect to split at middle */
+ ret = sys_mprotect(ptr, 2 * page_size, PROT_READ | PROT_WRITE);
+ assert(!ret);
+
+ /* seal the first page, this will split the VMA */
+ ret = sys_mseal(ptr, page_size);
+ assert(!ret);
+
+ /* add seal to the remaining 3 pages */
+ ret = sys_mseal(ptr + page_size, 3 * page_size);
+ assert(!ret);
+}
+
+static void test_seal_split_end(void)
+{
+ LOG_TEST_ENTER();
+ int ret;
+ void *ptr;
+ unsigned long page_size = getpagesize();
+ unsigned long size = 4 * page_size;
+
+ setup_single_address(size, &ptr);
+
+ /* use mprotect to split at middle */
+ ret = sys_mprotect(ptr, 2 * page_size, PROT_READ | PROT_WRITE);
+ assert(!ret);
+
+ /* seal the last page */
+ ret = sys_mseal(ptr + 3 * page_size, page_size);
+ assert(!ret);
+
+ /* Adding seals to the first 3 pages */
+ ret = sys_mseal(ptr, 3 * page_size);
+ assert(!ret);
+}
+
+static void test_seal_invalid_input(void)
+{
+ LOG_TEST_ENTER();
+ void *ptr;
+ unsigned long page_size = getpagesize();
+ unsigned long size = 4 * page_size;
+ int ret;
+
+ setup_single_address(8 * page_size, &ptr);
+ clean_single_address(ptr + 4 * page_size, 4 * page_size);
+
+ /* invalid flag */
+ ret = syscall(__NR_mseal, ptr, size, 0x20);
+ assert(ret < 0);
+
+ /* unaligned address */
+ ret = sys_mseal(ptr + 1, 2 * page_size);
+ assert(ret < 0);
+
+ /* length too big */
+ ret = sys_mseal(ptr, 5 * page_size);
+ assert(ret < 0);
+
+ /* start is not in a valid VMA */
+ ret = sys_mseal(ptr - page_size, 5 * page_size);
+ assert(ret < 0);
+}
+
+static void test_seal_zero_length(void)
+{
+ LOG_TEST_ENTER();
+ void *ptr;
+ unsigned long page_size = getpagesize();
+ unsigned long size = 4 * page_size;
+ int ret;
+
+ setup_single_address(size, &ptr);
+
+ ret = sys_mprotect(ptr, 0, PROT_READ | PROT_WRITE);
+ assert(!ret);
+
+ /* seal 0 length will be OK, same as mprotect */
+ ret = sys_mseal(ptr, 0);
+ assert(!ret);
+
+ /* verify the 4 pages are not sealed by previous call. */
+ ret = sys_mprotect(ptr, size, PROT_READ | PROT_WRITE);
+ assert(!ret);
+}
+
+static void test_seal_twice(void)
+{
+ LOG_TEST_ENTER();
+ int ret;
+ void *ptr;
+ unsigned long page_size = getpagesize();
+ unsigned long size = 4 * page_size;
+
+ setup_single_address(size, &ptr);
+
+ ret = sys_mseal(ptr, size);
+ assert(!ret);
+
+ /* applying the same seal again is OK: idempotent. */
+ ret = sys_mseal(ptr, size);
+ assert(!ret);
+}
+
+static void test_seal_mprotect(bool seal)
+{
+ LOG_TEST_ENTER();
+ void *ptr;
+ unsigned long page_size = getpagesize();
+ unsigned long size = 4 * page_size;
+ int ret;
+
+ setup_single_address(size, &ptr);
+
+ if (seal)
+ seal_single_address(ptr, size);
+
+ ret = sys_mprotect(ptr, size, PROT_READ | PROT_WRITE);
+ if (seal)
+ assert(ret < 0);
+ else
+ assert(!ret);
+}
+
+static void test_seal_start_mprotect(bool seal)
+{
+ LOG_TEST_ENTER();
+ void *ptr;
+ unsigned long page_size = getpagesize();
+ unsigned long size = 4 * page_size;
+ int ret;
+
+ setup_single_address(size, &ptr);
+
+ if (seal)
+ seal_single_address(ptr, page_size);
+
+ /* the first page is sealed. */
+ ret = sys_mprotect(ptr, page_size, PROT_READ | PROT_WRITE);
+ if (seal)
+ assert(ret < 0);
+ else
+ assert(!ret);
+
+ /* pages after the first page are not sealed. */
+ ret = sys_mprotect(ptr + page_size, page_size * 3,
+ PROT_READ | PROT_WRITE);
+ assert(!ret);
+}
+
+static void test_seal_end_mprotect(bool seal)
+{
+ LOG_TEST_ENTER();
+ void *ptr;
+ unsigned long page_size = getpagesize();
+ unsigned long size = 4 * page_size;
+ int ret;
+
+ setup_single_address(size, &ptr);
+
+ if (seal)
+ seal_single_address(ptr + page_size, 3 * page_size);
+
+ /* first page is not sealed */
+ ret = sys_mprotect(ptr, page_size, PROT_READ | PROT_WRITE);
+ assert(!ret);
+
+ /* last 3 pages are sealed */
+ ret = sys_mprotect(ptr + page_size, page_size * 3,
+ PROT_READ | PROT_WRITE);
+ if (seal)
+ assert(ret < 0);
+ else
+ assert(!ret);
+}
+
+static void test_seal_mprotect_unalign_len(bool seal)
+{
+ LOG_TEST_ENTER();
+ void *ptr;
+ unsigned long page_size = getpagesize();
+ unsigned long size = 4 * page_size;
+ int ret;
+
+ setup_single_address(size, &ptr);
+
+ if (seal)
+ seal_single_address(ptr, page_size * 2 - 1);
+
+ /* 2 pages are sealed. */
+ ret = sys_mprotect(ptr, page_size * 2, PROT_READ | PROT_WRITE);
+ if (seal)
+ assert(ret < 0);
+ else
+ assert(!ret);
+
+ ret = sys_mprotect(ptr + page_size * 2, page_size,
+ PROT_READ | PROT_WRITE);
+ assert(!ret);
+}
+
+static void test_seal_mprotect_unalign_len_variant_2(bool seal)
+{
+ LOG_TEST_ENTER();
+ void *ptr;
+ unsigned long page_size = getpagesize();
+ unsigned long size = 4 * page_size;
+ int ret;
+
+ setup_single_address(size, &ptr);
+ if (seal)
+ seal_single_address(ptr, page_size * 2 + 1);
+
+ /* 3 pages are sealed. */
+ ret = sys_mprotect(ptr, page_size * 3, PROT_READ | PROT_WRITE);
+ if (seal)
+ assert(ret < 0);
+ else
+ assert(!ret);
+
+ ret = sys_mprotect(ptr + page_size * 3, page_size,
+ PROT_READ | PROT_WRITE);
+ assert(!ret);
+}
+
+static void test_seal_mprotect_two_vma(bool seal)
+{
+ LOG_TEST_ENTER();
+ void *ptr;
+ unsigned long page_size = getpagesize();
+ unsigned long size = 4 * page_size;
+ int ret;
+
+ setup_single_address(size, &ptr);
+
+ /* use mprotect to split */
+ ret = sys_mprotect(ptr, page_size * 2, PROT_READ | PROT_WRITE);
+ assert(!ret);
+
+ if (seal)
+ seal_single_address(ptr, page_size * 4);
+
+ ret = sys_mprotect(ptr, page_size * 2, PROT_READ | PROT_WRITE);
+ if (seal)
+ assert(ret < 0);
+ else
+ assert(!ret);
+
+ ret = sys_mprotect(ptr + page_size * 2, page_size * 2,
+ PROT_READ | PROT_WRITE);
+ if (seal)
+ assert(ret < 0);
+ else
+ assert(!ret);
+}
+
+static void test_seal_mprotect_two_vma_with_split(bool seal)
+{
+ LOG_TEST_ENTER();
+ void *ptr;
+ unsigned long page_size = getpagesize();
+ unsigned long size = 4 * page_size;
+ int ret;
+
+ setup_single_address(size, &ptr);
+
+ /* use mprotect to split as two vma. */
+ ret = sys_mprotect(ptr, page_size * 2, PROT_READ | PROT_WRITE);
+ assert(!ret);
+
+ /* mseal can apply across 2 vmas, and also splits them. */
+ if (seal)
+ seal_single_address(ptr + page_size, page_size * 2);
+
+ /* the first page is not sealed. */
+ ret = sys_mprotect(ptr, page_size, PROT_READ | PROT_WRITE);
+ assert(!ret);
+
+ /* the second page is sealed. */
+ ret = sys_mprotect(ptr + page_size, page_size, PROT_READ | PROT_WRITE);
+ if (seal)
+ assert(ret < 0);
+ else
+ assert(!ret);
+
+ /* the third page is sealed. */
+ ret = sys_mprotect(ptr + 2 * page_size, page_size,
+ PROT_READ | PROT_WRITE);
+ if (seal)
+ assert(ret < 0);
+ else
+ assert(!ret);
+
+ /* the fourth page is not sealed. */
+ ret = sys_mprotect(ptr + 3 * page_size, page_size,
+ PROT_READ | PROT_WRITE);
+ assert(!ret);
+}
+
+static void test_seal_mprotect_partial_mprotect(bool seal)
+{
+ LOG_TEST_ENTER();
+ void *ptr;
+ unsigned long page_size = getpagesize();
+ unsigned long size = 4 * page_size;
+ int ret;
+
+ setup_single_address(size, &ptr);
+
+ /* seal one page. */
+ if (seal)
+ seal_single_address(ptr, page_size);
+
+ /* mprotect on the first 2 pages will fail, since the first page is sealed. */
+ ret = sys_mprotect(ptr, 2 * page_size, PROT_READ | PROT_WRITE);
+ if (seal)
+ assert(ret < 0);
+ else
+ assert(!ret);
+}
+
+static void test_seal_mprotect_two_vma_with_gap(bool seal)
+{
+ LOG_TEST_ENTER();
+ void *ptr;
+ unsigned long page_size = getpagesize();
+ unsigned long size = 4 * page_size;
+ int ret;
+
+ setup_single_address(size, &ptr);
+
+ /* use mprotect to split. */
+ ret = sys_mprotect(ptr, page_size, PROT_READ | PROT_WRITE);
+ assert(!ret);
+
+ /* use mprotect to split. */
+ ret = sys_mprotect(ptr + 3 * page_size, page_size,
+ PROT_READ | PROT_WRITE);
+ assert(!ret);
+
+ /* use munmap to free two pages in the middle */
+ ret = sys_munmap(ptr + page_size, 2 * page_size);
+ assert(!ret);
+
+ /* mprotect will fail, because there is a gap in the range. */
+ /* note: internally, mprotect still updates the first page. */
+ ret = sys_mprotect(ptr, 4 * page_size, PROT_READ);
+ assert(ret < 0);
+
+ /* mseal will fail as well. */
+ ret = sys_mseal(ptr, 4 * page_size);
+ assert(ret < 0);
+
+ /* unlike mprotect, mseal failed with no partial update: the first page is not sealed. */
+ ret = sys_mprotect(ptr, page_size, PROT_READ);
+ assert(ret == 0);
+
+ /* the last page is not sealed. */
+ ret = sys_mprotect(ptr + 3 * page_size, page_size, PROT_READ);
+ assert(ret == 0);
+}
+
+static void test_seal_mprotect_split(bool seal)
+{
+ LOG_TEST_ENTER();
+ void *ptr;
+ unsigned long page_size = getpagesize();
+ unsigned long size = 4 * page_size;
+ int ret;
+
+ setup_single_address(size, &ptr);
+
+ /* use mprotect to split. */
+ ret = sys_mprotect(ptr, page_size, PROT_READ | PROT_WRITE);
+ assert(!ret);
+
+ /* seal all 4 pages. */
+ if (seal) {
+ ret = sys_mseal(ptr, 4 * page_size);
+ assert(!ret);
+ }
+
+ /* all 4 pages are sealed; mprotect on the first 2 pages fails. */
+ ret = sys_mprotect(ptr, 2 * page_size, PROT_READ);
+ if (seal)
+ assert(ret < 0);
+ else
+ assert(!ret);
+
+ ret = sys_mprotect(ptr + 2 * page_size, 2 * page_size, PROT_READ);
+ if (seal)
+ assert(ret < 0);
+ else
+ assert(!ret);
+}
+
+static void test_seal_mprotect_merge(bool seal)
+{
+ LOG_TEST_ENTER();
+ void *ptr;
+ unsigned long page_size = getpagesize();
+ unsigned long size = 4 * page_size;
+ int ret;
+
+ setup_single_address(size, &ptr);
+
+ /* use mprotect to split one page. */
+ ret = sys_mprotect(ptr, page_size, PROT_READ | PROT_WRITE);
+ assert(!ret);
+
+ /* seal first two pages. */
+ if (seal) {
+ ret = sys_mseal(ptr, 2 * page_size);
+ assert(!ret);
+ }
+
+ ret = sys_madvise(ptr, page_size, MADV_WILLNEED);
+ assert(!ret);
+
+ /* 2 pages are sealed. */
+ ret = sys_mprotect(ptr, 2 * page_size, PROT_READ);
+ if (seal)
+ assert(ret < 0);
+ else
+ assert(!ret);
+
+ /* last 2 pages are not sealed. */
+ ret = sys_mprotect(ptr + 2 * page_size, 2 * page_size, PROT_READ);
+ assert(ret == 0);
+}
+
+static void test_seal_munmap(bool seal)
+{
+ LOG_TEST_ENTER();
+ void *ptr;
+ unsigned long page_size = getpagesize();
+ unsigned long size = 4 * page_size;
+ int ret;
+
+ setup_single_address(size, &ptr);
+
+ if (seal) {
+ ret = sys_mseal(ptr, size);
+ assert(!ret);
+ }
+
+ /* 4 pages are sealed. */
+ ret = sys_munmap(ptr, size);
+ if (seal)
+ assert(ret < 0);
+ else
+ assert(!ret);
+}
+
+/*
+ * allocate 4 pages,
+ * use mprotect to split it as two VMAs
+ * seal the whole range
+ * munmap will fail on both
+ */
+static void test_seal_munmap_two_vma(bool seal)
+{
+ LOG_TEST_ENTER();
+ void *ptr;
+ unsigned long page_size = getpagesize();
+ unsigned long size = 4 * page_size;
+ int ret;
+
+ setup_single_address(size, &ptr);
+
+ /* use mprotect to split */
+ ret = sys_mprotect(ptr, page_size * 2, PROT_READ | PROT_WRITE);
+ assert(!ret);
+
+ if (seal) {
+ ret = sys_mseal(ptr, size);
+ assert(!ret);
+ }
+
+ ret = sys_munmap(ptr, page_size * 2);
+ if (seal)
+ assert(ret < 0);
+ else
+ assert(!ret);
+
+ ret = sys_munmap(ptr + page_size, page_size * 2);
+ if (seal)
+ assert(ret < 0);
+ else
+ assert(!ret);
+}
+
+/*
+ * allocate a VMA with 4 pages.
+ * munmap the middle 2 pages.
+ * seal the whole 4 pages, will fail.
+ * note: none of the pages is sealed (mseal fails with no partial update).
+ * munmap the first page will be OK.
+ * munmap the last page will be OK.
+ */
+static void test_seal_munmap_vma_with_gap(bool seal)
+{
+ LOG_TEST_ENTER();
+ void *ptr;
+ unsigned long page_size = getpagesize();
+ unsigned long size = 4 * page_size;
+ int ret;
+
+ setup_single_address(size, &ptr);
+
+ ret = sys_munmap(ptr + page_size, page_size * 2);
+ assert(!ret);
+
+ if (seal) {
+ /* can't have a gap in the middle. */
+ ret = sys_mseal(ptr, size);
+ assert(ret < 0);
+ }
+
+ ret = sys_munmap(ptr, page_size);
+ assert(!ret);
+
+ ret = sys_munmap(ptr + page_size * 2, page_size);
+ assert(!ret);
+
+ ret = sys_munmap(ptr, size);
+ assert(!ret);
+}
+
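+/*
+ * munmap a range whose first page is already unmapped: when the rest of
+ * the range is sealed, the whole munmap fails; otherwise it succeeds
+ * even though part of the range is a hole.
+ */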
+static void test_munmap_start_freed(bool seal)
+{
+ LOG_TEST_ENTER();
+ void *ptr;
+ unsigned long page_size = getpagesize();
+ unsigned long size = 4 * page_size;
+ int ret;
+
+ setup_single_address(size, &ptr);
+
+ /* unmap the first page. */
+ ret = sys_munmap(ptr, page_size);
+ assert(!ret);
+
+ /* seal the last 3 pages. */
+ if (seal) {
+ ret = sys_mseal(ptr + page_size, 3 * page_size);
+ assert(!ret);
+ }
+
+ /* unmap from the first page. */
+ ret = sys_munmap(ptr, size);
+ if (seal)
+ assert(ret < 0);
+ else
+ /* note: this succeeds even though the first page is already unmapped. */
+ assert(!ret);
+}
+
+static void test_munmap_end_freed(bool seal)
+{
+ LOG_TEST_ENTER();
+ void *ptr;
+ unsigned long page_size = getpagesize();
+ unsigned long size = 4 * page_size;
+ int ret;
+
+ setup_single_address(size, &ptr);
+ /* unmap the last page. */
+ ret = sys_munmap(ptr + page_size * 3, page_size);
+ assert(!ret);
+
+ /* seal the first 3 pages. */
+ if (seal) {
+ ret = sys_mseal(ptr, 3 * page_size);
+ assert(!ret);
+ }
+
+ /* unmap all pages. */
+ ret = sys_munmap(ptr, size);
+ if (seal)
+ assert(ret < 0);
+ else
+ assert(!ret);
+}
+
+static void test_munmap_middle_freed(bool seal)
+{
+ LOG_TEST_ENTER();
+ void *ptr;
+ unsigned long page_size = getpagesize();
+ unsigned long size = 4 * page_size;
+ int ret;
+
+ setup_single_address(size, &ptr);
+ /* unmap 2 pages in the middle. */
+ ret = sys_munmap(ptr + page_size, page_size * 2);
+ assert(!ret);
+
+ /* seal the first page. */
+ if (seal) {
+ ret = sys_mseal(ptr, page_size);
+ assert(!ret);
+ }
+
+ /* munmap all 4 pages. */
+ ret = sys_munmap(ptr, size);
+ if (seal)
+ assert(ret < 0);
+ else
+ assert(!ret);
+}
+
+static void test_seal_mremap_shrink(bool seal)
+{
+ LOG_TEST_ENTER();
+ void *ptr;
+ unsigned long page_size = getpagesize();
+ unsigned long size = 4 * page_size;
+ int ret;
+ void *ret2;
+
+ setup_single_address(size, &ptr);
+
+ if (seal) {
+ ret = sys_mseal(ptr, size);
+ assert(!ret);
+ }
+
+ /* shrink from 4 pages to 2 pages. */
+ ret2 = mremap(ptr, size, 2 * page_size, 0, 0);
+ if (seal) {
+ assert(ret2 == MAP_FAILED);
+ assert(errno == EACCES);
+ } else {
+ assert(ret2 != MAP_FAILED);
+ }
+}
+
+static void test_seal_mremap_expand(bool seal)
+{
+ LOG_TEST_ENTER();
+ void *ptr;
+ unsigned long page_size = getpagesize();
+ unsigned long size = 4 * page_size;
+ int ret;
+ void *ret2;
+
+ setup_single_address(size, &ptr);
+ /* unmap the last 2 pages. */
+ ret = sys_munmap(ptr + 2 * page_size, 2 * page_size);
+ assert(!ret);
+
+ if (seal) {
+ ret = sys_mseal(ptr, 2 * page_size);
+ assert(!ret);
+ }
+
+ /* expand from 2 pages to 4 pages. */
+ ret2 = mremap(ptr, 2 * page_size, 4 * page_size, 0, 0);
+ if (seal) {
+ assert(ret2 == MAP_FAILED);
+ assert(errno == EACCES);
+ } else {
+ assert(ret2 == ptr);
+ }
+}
+
+static void test_seal_mremap_move(bool seal)
+{
+ LOG_TEST_ENTER();
+ void *ptr, *newPtr;
+ unsigned long page_size = getpagesize();
+ unsigned long size = page_size;
+ int ret;
+ void *ret2;
+
+ setup_single_address(size, &ptr);
+ setup_single_address(size, &newPtr);
+ clean_single_address(newPtr, size);
+
+ if (seal) {
+ ret = sys_mseal(ptr, size);
+ assert(!ret);
+ }
+
+ /* move from ptr to fixed address. */
+ ret2 = mremap(ptr, size, size, MREMAP_MAYMOVE | MREMAP_FIXED, newPtr);
+ if (seal) {
+ assert(ret2 == MAP_FAILED);
+ assert(errno == EACCES);
+ } else {
+ assert(ret2 != MAP_FAILED);
+ }
+}
+
+static void test_seal_mmap_overwrite_prot(bool seal)
+{
+ LOG_TEST_ENTER();
+ void *ptr;
+ unsigned long page_size = getpagesize();
+ unsigned long size = page_size;
+ int ret;
+ void *ret2;
+
+ setup_single_address(size, &ptr);
+
+ if (seal) {
+ ret = sys_mseal(ptr, size);
+ assert(!ret);
+ }
+
+ /* use mmap to change protection. */
+ ret2 = sys_mmap(ptr, size, PROT_NONE,
+ MAP_ANONYMOUS | MAP_PRIVATE | MAP_FIXED, -1, 0);
+ if (seal) {
+ assert(ret2 == MAP_FAILED);
+ assert(errno == EACCES);
+ } else
+ assert(ret2 == ptr);
+}
+
+static void test_seal_mmap_expand(bool seal)
+{
+ LOG_TEST_ENTER();
+ void *ptr;
+ unsigned long page_size = getpagesize();
+ unsigned long size = 12 * page_size;
+ int ret;
+ void *ret2;
+
+ setup_single_address(size, &ptr);
+ /* unmap the last 4 pages. */
+ ret = sys_munmap(ptr + 8 * page_size, 4 * page_size);
+ assert(!ret);
+
+ if (seal) {
+ ret = sys_mseal(ptr, 8 * page_size);
+ assert(!ret);
+ }
+
+ /* use mmap to expand. */
+ ret2 = sys_mmap(ptr, size, PROT_READ,
+ MAP_ANONYMOUS | MAP_PRIVATE | MAP_FIXED, -1, 0);
+ if (seal) {
+ assert(ret2 == MAP_FAILED);
+ assert(errno == EACCES);
+ } else
+ assert(ret2 == ptr);
+}
+
+static void test_seal_mmap_shrink(bool seal)
+{
+ LOG_TEST_ENTER();
+ void *ptr;
+ unsigned long page_size = getpagesize();
+ unsigned long size = 12 * page_size;
+ int ret;
+ void *ret2;
+
+ setup_single_address(size, &ptr);
+
+ if (seal) {
+ ret = sys_mseal(ptr, size);
+ assert(!ret);
+ }
+
+ /* use mmap to shrink. */
+ ret2 = sys_mmap(ptr, 8 * page_size, PROT_READ,
+ MAP_ANONYMOUS | MAP_PRIVATE | MAP_FIXED, -1, 0);
+ if (seal) {
+ assert(ret2 == MAP_FAILED);
+ assert(errno == EACCES);
+ } else
+ assert(ret2 == ptr);
+}
+
+static void test_seal_mremap_shrink_fixed(bool seal)
+{
+ LOG_TEST_ENTER();
+ void *ptr;
+ void *newAddr;
+ unsigned long page_size = getpagesize();
+ unsigned long size = 4 * page_size;
+ int ret;
+ void *ret2;
+
+ setup_single_address(size, &ptr);
+ setup_single_address(size, &newAddr);
+
+ if (seal) {
+ ret = sys_mseal(ptr, size);
+ assert(!ret);
+ }
+
+ /* mremap to move and shrink to fixed address */
+ ret2 = mremap(ptr, size, 2 * page_size, MREMAP_MAYMOVE | MREMAP_FIXED,
+ newAddr);
+ if (seal) {
+ assert(ret2 == MAP_FAILED);
+ assert(errno == EACCES);
+ } else
+ assert(ret2 == newAddr);
+}
+
+static void test_seal_mremap_expand_fixed(bool seal)
+{
+ LOG_TEST_ENTER();
+ void *ptr;
+ void *newAddr;
+ unsigned long page_size = getpagesize();
+ unsigned long size = 4 * page_size;
+ int ret;
+ void *ret2;
+
+ setup_single_address(page_size, &ptr);
+ setup_single_address(size, &newAddr);
+
+ if (seal) {
+ ret = sys_mseal(newAddr, size);
+ assert(!ret);
+ }
+
+ /* mremap to move and expand to fixed address */
+ ret2 = mremap(ptr, page_size, size, MREMAP_MAYMOVE | MREMAP_FIXED,
+ newAddr);
+ if (seal) {
+ assert(ret2 == MAP_FAILED);
+ assert(errno == EACCES);
+ } else
+ assert(ret2 == newAddr);
+}
+
+static void test_seal_mremap_move_fixed(bool seal)
+{
+ LOG_TEST_ENTER();
+ void *ptr;
+ void *newAddr;
+ unsigned long page_size = getpagesize();
+ unsigned long size = 4 * page_size;
+ int ret;
+ void *ret2;
+
+ setup_single_address(size, &ptr);
+ setup_single_address(size, &newAddr);
+
+ if (seal) {
+ ret = sys_mseal(newAddr, size);
+ assert(!ret);
+ }
+
+ /* mremap to move to fixed address */
+ ret2 = mremap(ptr, size, size, MREMAP_MAYMOVE | MREMAP_FIXED, newAddr);
+ if (seal) {
+ assert(ret2 == MAP_FAILED);
+ assert(errno == EACCES);
+ } else
+ assert(ret2 == newAddr);
+}
+
+static void test_seal_mremap_move_fixed_zero(bool seal)
+{
+ LOG_TEST_ENTER();
+ void *ptr;
+ void *newAddr;
+ unsigned long page_size = getpagesize();
+ unsigned long size = 4 * page_size;
+ int ret;
+ void *ret2;
+
+ setup_single_address(size, &ptr);
+
+ if (seal) {
+ ret = sys_mseal(ptr, size);
+ assert(!ret);
+ }
+
+ /*
+ * MREMAP_FIXED can move the mapping to the zero address
+ */
+ ret2 = mremap(ptr, size, 2 * page_size, MREMAP_MAYMOVE | MREMAP_FIXED,
+ 0);
+ if (seal) {
+ assert(ret2 == MAP_FAILED);
+ assert(errno == EACCES);
+ } else {
+ assert(ret2 == 0);
+ }
+}
+
+static void test_seal_mremap_move_dontunmap(bool seal)
+{
+ LOG_TEST_ENTER();
+ void *ptr;
+ unsigned long page_size = getpagesize();
+ unsigned long size = 4 * page_size;
+ int ret;
+ void *ret2;
+
+ setup_single_address(size, &ptr);
+
+ if (seal) {
+ ret = sys_mseal(ptr, size);
+ assert(!ret);
+ }
+
+ /* mremap to move, and don't unmap src addr. */
+ ret2 = mremap(ptr, size, size, MREMAP_MAYMOVE | MREMAP_DONTUNMAP, 0);
+ if (seal) {
+ assert(ret2 == MAP_FAILED);
+ assert(errno == EACCES);
+ } else {
+ assert(ret2 != MAP_FAILED);
+ }
+}
+
+static void test_seal_mremap_move_dontunmap_anyaddr(bool seal)
+{
+ LOG_TEST_ENTER();
+ void *ptr;
+ unsigned long page_size = getpagesize();
+ unsigned long size = 4 * page_size;
+ int ret;
+ void *ret2;
+
+ setup_single_address(size, &ptr);
+
+ if (seal) {
+ ret = sys_mseal(ptr, size);
+ assert(!ret);
+ }
+
+ /*
+ * The 0xdeaddead hint should have no effect on the dest addr
+ * when MREMAP_DONTUNMAP is set.
+ */
+ ret2 = mremap(ptr, size, size, MREMAP_MAYMOVE | MREMAP_DONTUNMAP,
+ 0xdeaddead);
+ if (seal) {
+ assert(ret2 == MAP_FAILED);
+ assert(errno == EACCES);
+ } else {
+ assert(ret2 != MAP_FAILED);
+ assert((long)ret2 != 0xdeaddead);
+ }
+}
+
+static void test_seal_mmap_seal(void)
+{
+ LOG_TEST_ENTER();
+ void *ptr;
+ unsigned long page_size = getpagesize();
+ unsigned long size = 4 * page_size;
+ int ret;
+ void *ret2;
+
+ ptr = sys_mmap(NULL, size, PROT_READ | PROT_SEAL, MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
+ assert(ptr != (void *)-1);
+
+ ret = sys_munmap(ptr, size);
+ assert(ret < 0);
+
+ ret = sys_mprotect(ptr, size, PROT_READ | PROT_WRITE);
+ assert(ret < 0);
+}
+
+static void test_seal_merge_and_split(void)
+{
+ LOG_TEST_ENTER();
+ void *ptr;
+ unsigned long page_size = getpagesize();
+ unsigned long size;
+ int ret;
+ void *ret2;
+
+ /* (24 RO) */
+ setup_single_address(24 * page_size, &ptr);
+
+ /* use mprotect(PROT_NONE) to mark the outer boundary */
+ /* (1 NONE) (22 RO) (1 NONE) */
+ ret = sys_mprotect(ptr, page_size, PROT_NONE);
+ assert(!ret);
+ ret = sys_mprotect(ptr + 23 * page_size, page_size, PROT_NONE);
+ assert(!ret);
+ size = get_vma_size(ptr + page_size);
+ assert(size == 22 * page_size);
+
+ /* use mseal to split from beginning */
+ /* (1 NONE) (1 RO_SEAL) (21 RO) (1 NONE) */
+ ret = sys_mseal(ptr + page_size, page_size);
+ assert(!ret);
+ size = get_vma_size(ptr + page_size);
+ assert(size == page_size);
+ size = get_vma_size(ptr + 2 * page_size);
+ assert(size == 21 * page_size);
+
+ /* use mseal to split from the end. */
+ /* (1 NONE) (1 RO_SEAL) (20 RO) (1 RO_SEAL) (1 NONE) */
+ ret = sys_mseal(ptr + 22 * page_size, page_size);
+ assert(!ret);
+ size = get_vma_size(ptr + 22 * page_size);
+ assert(size == page_size);
+ size = get_vma_size(ptr + 2 * page_size);
+ assert(size == 20 * page_size);
+
+ /* merge with prev. */
+ /* (1 NONE) (2 RO_SEAL) (19 RO) (1 RO_SEAL) (1 NONE) */
+ ret = sys_mseal(ptr + 2 * page_size, page_size);
+ assert(!ret);
+ size = get_vma_size(ptr + page_size);
+ assert(size == 2 * page_size);
+
+ /* merge with after. */
+ /* (1 NONE) (2 RO_SEAL) (18 RO) (2 RO_SEAL) (1 NONE) */
+ ret = sys_mseal(ptr + 21 * page_size, page_size);
+ assert(!ret);
+ size = get_vma_size(ptr + 21 * page_size);
+ assert(size == 2 * page_size);
+
+ /* split and merge from prev */
+ /* (1 NONE) (3 RO_SEAL) (17 RO) (2 RO_SEAL) (1 NONE) */
+ ret = sys_mseal(ptr + 2 * page_size, 2 * page_size);
+ assert(!ret);
+ size = get_vma_size(ptr + 1 * page_size);
+ assert(size == 3 * page_size);
+ ret = sys_munmap(ptr + page_size, page_size);
+ assert(ret < 0);
+ ret = sys_mprotect(ptr + 2 * page_size, page_size, PROT_NONE);
+ assert(ret < 0);
+
+ /* split and merge from next */
+ /* (1 NONE) (3 RO_SEAL) (16 RO) (3 RO_SEAL) (1 NONE) */
+ ret = sys_mseal(ptr + 20 * page_size, 2 * page_size);
+ assert(!ret);
+ size = get_vma_size(ptr + 20 * page_size);
+ assert(size == 3 * page_size);
+
+ /* merge from middle of prev and middle of next. */
+ /* (1 NONE) (22 RO_SEAL) (1 NONE) */
+ ret = sys_mseal(ptr + 2 * page_size, 20 * page_size);
+ assert(!ret);
+ size = get_vma_size(ptr + page_size);
+ assert(size == 22 * page_size);
+}
+
+static void test_seal_mmap_merge(void)
+{
+ LOG_TEST_ENTER();
+
+ void *ptr, *ptr2;
+ unsigned long page_size = getpagesize();
+ unsigned long size;
+ int ret;
+ void *ret2;
+
+ /* (24 RO) */
+ setup_single_address(24 * page_size, &ptr);
+
+ /* use mprotect(PROT_NONE) to mark the outer boundary */
+ /* (1 NONE) (22 RO) (1 NONE) */
+ ret = sys_mprotect(ptr, page_size, PROT_NONE);
+ assert(!ret);
+ ret = sys_mprotect(ptr + 23 * page_size, page_size, PROT_NONE);
+ assert(!ret);
+ size = get_vma_size(ptr + page_size);
+ assert(size == 22 * page_size);
+
+ /* use munmap to free 2 segments of memory. */
+ /* (1 NONE) (1 free) (20 RO) (1 free) (1 NONE) */
+ ret = sys_munmap(ptr + page_size, page_size);
+ assert(!ret);
+
+ ret = sys_munmap(ptr + 22 * page_size, page_size);
+ assert(!ret);
+
+ /* apply seal to the middle */
+ /* (1 NONE) (1 free) (20 RO_SEAL) (1 free) (1 NONE) */
+ ret = sys_mseal(ptr + 2 * page_size, 20 * page_size);
+ assert(!ret);
+ size = get_vma_size(ptr + 2 * page_size);
+ assert(size == 20 * page_size);
+
+ /* allocate a mapping at the beginning, and make sure it merges. */
+ /* (1 NONE) (21 RO_SEAL) (1 free) (1 NONE) */
+ ptr2 = sys_mmap(ptr + page_size, page_size, PROT_READ | PROT_SEAL,
+ MAP_FIXED | MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
+ assert(ptr2 != (void *)-1);
+ size = get_vma_size(ptr + page_size);
+ assert(size == 21 * page_size);
+
+ /* allocate a mapping at the end, and make sure it merges. */
+ /* (1 NONE) (22 RO_SEAL) (1 NONE) */
+ ptr2 = sys_mmap(ptr + 22 * page_size, page_size, PROT_READ | PROT_SEAL,
+ MAP_FIXED | MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
+ assert(ptr2 != (void *)-1);
+ size = get_vma_size(ptr + page_size);
+ assert(size == 22 * page_size);
+}
+
+static void test_not_sealable(void)
+{
+ LOG_TEST_ENTER();
+ int ret;
+ void *ptr;
+ unsigned long page_size = getpagesize();
+ unsigned long size = 4 * page_size;
+
+ ptr = sys_mmap(NULL, size, PROT_READ, MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
+ assert(ptr != (void *)-1);
+
+ ret = sys_mseal(ptr, size);
+ assert(ret < 0);
+}
+
+static void test_mmap_fixed_change_to_sealable(void)
+{
+ LOG_TEST_ENTER();
+ int ret;
+ void *ptr, *ptr2;
+ unsigned long page_size = getpagesize();
+ unsigned long size = 4 * page_size;
+
+ ptr = sys_mmap(NULL, size, PROT_READ, MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
+ assert(ptr != (void *)-1);
+
+ ret = sys_mseal(ptr, size);
+ assert(ret < 0);
+
+ ptr2 = sys_mmap(ptr, size, PROT_READ,
+ MAP_FIXED | MAP_ANONYMOUS | MAP_PRIVATE | MAP_SEALABLE, -1, 0);
+ assert(ptr2 == ptr);
+
+ ret = sys_mseal(ptr, size);
+ assert(!ret);
+}
+
+static void test_mmap_fixed_change_to_not_sealable(void)
+{
+ LOG_TEST_ENTER();
+ int ret;
+ void *ptr, *ptr2;
+ unsigned long page_size = getpagesize();
+ unsigned long size = 4 * page_size;
+
+ ptr = sys_mmap(NULL, size, PROT_READ, MAP_ANONYMOUS | MAP_PRIVATE | MAP_SEALABLE, -1, 0);
+ assert(ptr != (void *)-1);
+
+ ptr2 = sys_mmap(ptr, size, PROT_READ,
+ MAP_FIXED | MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
+ assert(ptr2 == ptr);
+
+ ret = sys_mseal(ptr, size);
+ assert(ret < 0);
+}
+
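+/*
+ * Check VMA merging around the sealable flag: adjacent mappings created
+ * with MAP_SEALABLE merge with each other, while a mapping created
+ * without MAP_SEALABLE does not merge with a sealable neighbor (VMA
+ * sizes are observed via /proc/self/maps).
+ */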
+static void test_merge_sealable(void)
+{
+ LOG_TEST_ENTER();
+ int ret;
+ void *ptr, *ptr2;
+ unsigned long page_size = getpagesize();
+ unsigned long size;
+
+ /* (24 RO) */
+ setup_single_address(24 * page_size, &ptr);
+
+ /* use mprotect(PROT_NONE) to mark the outer boundary */
+ /* (1 NONE) (22 RO) (1 NONE) */
+ ret = sys_mprotect(ptr, page_size, PROT_NONE);
+ assert(!ret);
+ ret = sys_mprotect(ptr + 23 * page_size, page_size, PROT_NONE);
+ assert(!ret);
+ size = get_vma_size(ptr + page_size);
+ assert(size == 22 * page_size);
+
+ /* (1 NONE) (RO) (4 free) (17 RO) (1 NONE) */
+ ret = sys_munmap(ptr + 2 * page_size, 4 * page_size);
+ assert(!ret);
+ size = get_vma_size(ptr + page_size);
+ assert(size == 1 * page_size);
+ size = get_vma_size(ptr + 6 * page_size);
+ assert(size == 17 * page_size);
+
+ /* (1 NONE) (RO) (1 free) (2 RO) (1 free) (17 RO) (1 NONE) */
+ ptr2 = sys_mmap(ptr + 3 * page_size, 2 * page_size, PROT_READ,
+ MAP_FIXED | MAP_ANONYMOUS | MAP_PRIVATE | MAP_SEALABLE, -1, 0);
+ assert(ptr2 != (void *)-1);
+ size = get_vma_size(ptr + 3 * page_size);
+ assert(size == 2 * page_size);
+
+ /* (1 NONE) (RO) (1 free) (20 RO) (1 NONE) */
+ ptr2 = sys_mmap(ptr + 5 * page_size, 1 * page_size, PROT_READ,
+ MAP_FIXED | MAP_ANONYMOUS | MAP_PRIVATE | MAP_SEALABLE, -1, 0);
+ assert(ptr2 != (void *)-1);
+ size = get_vma_size(ptr + 3 * page_size);
+ assert(size == 20 * page_size);
+
+ /* (1 NONE) (RO) (1 free) (19 RO) (1 RO_SB) (1 NONE) */
+ ret = sys_mseal(ptr + 22 * page_size, page_size);
+ assert(!ret);
+
+ /* (1 NONE) (RO) (1 not sealable) (19 RO) (1 RO_SB) (1 NONE) */
+ ptr2 = sys_mmap(ptr + 2 * page_size, page_size, PROT_READ,
+ MAP_FIXED | MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
+ assert(ptr2 != (void *)-1);
+ size = get_vma_size(ptr + page_size);
+ assert(size == page_size);
+ size = get_vma_size(ptr + 2 * page_size);
+ assert(size == page_size);
+}
+
+static void test_seal_discard_ro_anon_on_rw(bool seal)
+{
+ LOG_TEST_ENTER();
+ void *ptr;
+ unsigned long page_size = getpagesize();
+ unsigned long size = 4 * page_size;
+ int ret;
+
+ setup_single_address_rw_sealable(size, &ptr, seal);
+ assert(ptr != (void *)-1);
+
+ if (seal) {
+ ret = sys_mseal(ptr, size);
+ assert(!ret);
+ }
+
+ /* sealing doesn't take effect on RW memory. */
+ ret = sys_madvise(ptr, size, MADV_DONTNEED);
+ assert(!ret);
+
+ /* the base seal still applies. */
+ ret = sys_munmap(ptr, size);
+ if (seal)
+ assert(ret < 0);
+ else
+ assert(!ret);
+}
+
+static void test_seal_discard_ro_anon_on_pkey(bool seal)
+{
+ LOG_TEST_ENTER();
+ void *ptr;
+ unsigned long page_size = getpagesize();
+ unsigned long size = 4 * page_size;
+ int ret;
+ int pkey;
+
+ setup_single_address_rw_sealable(size, &ptr, seal);
+ assert(ptr != (void *)-1);
+
+ pkey = sys_pkey_alloc(0, 0);
+ assert(pkey > 0);
+
+ ret = sys_mprotect_pkey((void *)ptr, size, PROT_READ | PROT_WRITE, pkey);
+ assert(!ret);
+
+ if (seal) {
+ ret = sys_mseal(ptr, size);
+ assert(!ret);
+ }
+
+ /* sealing doesn't take effect if PKRU allows write. */
+ set_pkey(pkey, 0);
+ ret = sys_madvise(ptr, size, MADV_DONTNEED);
+ assert(!ret);
+
+ /* sealing takes effect if PKRU denies write. */
+ set_pkey(pkey, PKEY_DISABLE_WRITE);
+ ret = sys_madvise(ptr, size, MADV_DONTNEED);
+ if (seal)
+ assert(ret < 0);
+ else
+ assert(!ret);
+
+ /* the base seal still applies. */
+ ret = sys_munmap(ptr, size);
+ if (seal)
+ assert(ret < 0);
+ else
+ assert(!ret);
+}
+
+static void test_seal_discard_ro_anon_on_filebacked(bool seal)
+{
+ LOG_TEST_ENTER();
+ void *ptr;
+ unsigned long page_size = getpagesize();
+ unsigned long size = 4 * page_size;
+ int ret;
+ int fd;
+ unsigned long mapflags = MAP_PRIVATE;
+
+ if (seal)
+ mapflags |= MAP_SEALABLE;
+
+ fd = memfd_create("test", 0);
+ assert(fd > 0);
+
+ ret = fallocate(fd, 0, 0, size);
+ assert(!ret);
+
+ ptr = sys_mmap(NULL, size, PROT_READ, mapflags, fd, 0);
+ assert(ptr != MAP_FAILED);
+
+ if (seal) {
+ ret = sys_mseal(ptr, size);
+ assert(!ret);
+ }
+
+ /* sealing doesn't apply to file-backed mappings. */
+ ret = sys_madvise(ptr, size, MADV_DONTNEED);
+ assert(!ret);
+
+ ret = sys_munmap(ptr, size);
+ if (seal)
+ assert(ret < 0);
+ else
+ assert(!ret);
+ close(fd);
+}
+
+static void test_seal_discard_ro_anon_on_shared(bool seal)
+{
+ LOG_TEST_ENTER();
+ void *ptr;
+ unsigned long page_size = getpagesize();
+ unsigned long size = 4 * page_size;
+ int ret;
+ unsigned long mapflags = MAP_ANONYMOUS | MAP_SHARED;
+
+ if (seal)
+ mapflags |= MAP_SEALABLE;
+
+ ptr = sys_mmap(NULL, size, PROT_READ, mapflags, -1, 0);
+ assert(ptr != (void *)-1);
+
+ if (seal) {
+ ret = sys_mseal(ptr, size);
+ assert(!ret);
+ }
+
+ /* sealing doesn't apply to shared mappings. */
+ ret = sys_madvise(ptr, size, MADV_DONTNEED);
+ assert(!ret);
+
+ ret = sys_munmap(ptr, size);
+ if (seal)
+ assert(ret < 0);
+ else
+ assert(!ret);
+}
+
+static void test_seal_discard_ro_anon_invalid_shared(bool seal)
+{
+ LOG_TEST_ENTER();
+ void *ptr;
+ unsigned long page_size = getpagesize();
+ unsigned long size = 4 * page_size;
+ int ret;
+ int fd;
+
+ fd = open("/proc/self/maps", O_RDONLY);
+ ptr = sys_mmap(NULL, size, PROT_READ, MAP_ANONYMOUS | MAP_PRIVATE, fd, 0);
+ assert(ptr != (void *)-1);
+
+ if (seal) {
+ ret = sys_mseal(ptr, size);
+ assert(!ret);
+ }
+
+ ret = sys_madvise(ptr, size, MADV_DONTNEED);
+ assert(!ret);
+
+ ret = sys_munmap(ptr, size);
+ assert(ret < 0);
+ close(fd);
+}
+
+static void test_seal_discard_ro_anon(bool seal)
+{
+ LOG_TEST_ENTER();
+ void *ptr;
+ unsigned long page_size = getpagesize();
+ unsigned long size = 4 * page_size;
+ int ret;
+
+ setup_single_address(size, &ptr);
+
+ if (seal)
+ seal_single_address(ptr, size);
+
+ ret = sys_madvise(ptr, size, MADV_DONTNEED);
+ if (seal)
+ assert(ret < 0);
+ else
+ assert(!ret);
+
+ ret = sys_munmap(ptr, size);
+ if (seal)
+ assert(ret < 0);
+ else
+ assert(!ret);
+}
+
+static void test_mmap_seal_discard_ro_anon(void)
+{
+ LOG_TEST_ENTER();
+ void *ptr;
+ unsigned long page_size = getpagesize();
+ unsigned long size = 4 * page_size;
+ int ret;
+
+ ptr = sys_mmap(NULL, size, PROT_READ | PROT_WRITE | PROT_SEAL,
+ MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
+ assert(ptr != (void *)-1);
+
+ ret = sys_mprotect(ptr, size, PROT_READ);
+ assert(!ret);
+
+ ret = sys_madvise(ptr, size, MADV_DONTNEED);
+ assert(ret < 0);
+
+ ret = sys_munmap(ptr, size);
+ assert(ret < 0);
+}
+
+bool seal_support(void)
+{
+ int ret;
+ void *ptr;
+ unsigned long page_size = getpagesize();
+
+ ptr = sys_mmap(NULL, page_size, PROT_READ | PROT_SEAL, MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
+ if (ptr == (void *) -1)
+ return false;
+
+ ret = sys_mseal(ptr, page_size);
+ if (ret < 0)
+ return false;
+
+ return true;
+}
+
+bool pkey_supported(void)
+{
+ int pkey = sys_pkey_alloc(0, 0);
+
+ if (pkey > 0)
+ return true;
+ return false;
+}
+
+int main(int argc, char **argv)
+{
+ bool test_seal = seal_support();
+
+ if (!test_seal) {
+ ksft_print_msg("sealing not supported (check CONFIG_64BIT)\n");
+ return 0;
+ }
+
+ test_seal_addseal();
+
+ test_seal_unmapped_start();
+ test_seal_unmapped_middle();
+ test_seal_unmapped_end();
+ test_seal_multiple_vmas();
+ test_seal_split_start();
+ test_seal_split_end();
+ test_seal_invalid_input();
+ test_seal_zero_length();
+ test_seal_twice();
+
+ test_seal_mprotect(false);
+ test_seal_mprotect(true);
+
+ test_seal_start_mprotect(false);
+ test_seal_start_mprotect(true);
+
+ test_seal_end_mprotect(false);
+ test_seal_end_mprotect(true);
+
+ test_seal_mprotect_unalign_len(false);
+ test_seal_mprotect_unalign_len(true);
+
+ test_seal_mprotect_unalign_len_variant_2(false);
+ test_seal_mprotect_unalign_len_variant_2(true);
+
+ test_seal_mprotect_two_vma(false);
+ test_seal_mprotect_two_vma(true);
+
+ test_seal_mprotect_two_vma_with_split(false);
+ test_seal_mprotect_two_vma_with_split(true);
+
+ test_seal_mprotect_partial_mprotect(false);
+ test_seal_mprotect_partial_mprotect(true);
+
+ test_seal_mprotect_two_vma_with_gap(false);
+ test_seal_mprotect_two_vma_with_gap(true);
+
+ test_seal_mprotect_merge(false);
+ test_seal_mprotect_merge(true);
+
+ test_seal_mprotect_split(false);
+ test_seal_mprotect_split(true);
+
+ test_seal_munmap(false);
+ test_seal_munmap(true);
+ test_seal_munmap_two_vma(false);
+ test_seal_munmap_two_vma(true);
+ test_seal_munmap_vma_with_gap(false);
+ test_seal_munmap_vma_with_gap(true);
+
+ test_munmap_start_freed(false);
+ test_munmap_start_freed(true);
+ test_munmap_middle_freed(false);
+ test_munmap_middle_freed(true);
+ test_munmap_end_freed(false);
+ test_munmap_end_freed(true);
+
+ test_seal_mremap_shrink(false);
+ test_seal_mremap_shrink(true);
+ test_seal_mremap_expand(false);
+ test_seal_mremap_expand(true);
+ test_seal_mremap_move(false);
+ test_seal_mremap_move(true);
+
+ test_seal_mremap_shrink_fixed(false);
+ test_seal_mremap_shrink_fixed(true);
+ test_seal_mremap_expand_fixed(false);
+ test_seal_mremap_expand_fixed(true);
+ test_seal_mremap_move_fixed(false);
+ test_seal_mremap_move_fixed(true);
+ test_seal_mremap_move_dontunmap(false);
+ test_seal_mremap_move_dontunmap(true);
+ test_seal_mremap_move_fixed_zero(false);
+ test_seal_mremap_move_fixed_zero(true);
+ test_seal_mremap_move_dontunmap_anyaddr(false);
+ test_seal_mremap_move_dontunmap_anyaddr(true);
+ test_seal_discard_ro_anon(false);
+ test_seal_discard_ro_anon(true);
+ test_seal_discard_ro_anon_on_rw(false);
+ test_seal_discard_ro_anon_on_rw(true);
+ test_seal_discard_ro_anon_on_shared(false);
+ test_seal_discard_ro_anon_on_shared(true);
+ test_seal_discard_ro_anon_on_filebacked(false);
+ test_seal_discard_ro_anon_on_filebacked(true);
+ test_seal_mmap_overwrite_prot(false);
+ test_seal_mmap_overwrite_prot(true);
+ test_seal_mmap_expand(false);
+ test_seal_mmap_expand(true);
+ test_seal_mmap_shrink(false);
+ test_seal_mmap_shrink(true);
+
+ test_seal_mmap_seal();
+ test_seal_merge_and_split();
+ test_seal_mmap_merge();
+
+ test_not_sealable();
+ test_merge_sealable();
+ test_mmap_fixed_change_to_sealable();
+ test_mmap_fixed_change_to_not_sealable();
+
+ if (pkey_supported()) {
+ test_seal_discard_ro_anon_on_pkey(false);
+ test_seal_discard_ro_anon_on_pkey(true);
+ } else
+ ksft_print_msg("PKEY not supported, skip pkey related test\n");
+
+ ksft_print_msg("Done\n");
+ return 0;
+}
--
2.43.0.195.gebba966016-goog
From: Jeff Xu <[email protected]>
Add documentation for mseal().
Signed-off-by: Jeff Xu <[email protected]>
---
Documentation/userspace-api/mseal.rst | 181 ++++++++++++++++++++++++++
1 file changed, 181 insertions(+)
create mode 100644 Documentation/userspace-api/mseal.rst
diff --git a/Documentation/userspace-api/mseal.rst b/Documentation/userspace-api/mseal.rst
new file mode 100644
index 000000000000..1700ce5af218
--- /dev/null
+++ b/Documentation/userspace-api/mseal.rst
@@ -0,0 +1,181 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+=====================
+Introduction of mseal
+=====================
+
+:Author: Jeff Xu <[email protected]>
+
+Modern CPUs support memory permissions such as the RW and NX bits. The memory
+permission feature improves the security stance on memory corruption bugs,
+i.e. the attacker can't just write to arbitrary memory and point the code to
+it; the memory has to be marked with the X bit, or else an exception will happen.
+
+Memory sealing additionally protects the mapping itself against
+modifications. This is useful to mitigate memory corruption issues where a
+corrupted pointer is passed to a memory management system. For example,
+such an attacker primitive can break control-flow integrity guarantees
+since read-only memory that is supposed to be trusted can become writable
+or .text pages can get remapped. Memory sealing can automatically be
+applied by the runtime loader to seal .text and .rodata pages and
+applications can additionally seal security critical data at runtime.
+
+A similar feature already exists in the XNU kernel with the
+VM_FLAGS_PERMANENT flag [1] and on OpenBSD with the mimmutable syscall [2].
+
+User API
+========
+Two system calls are involved in virtual memory sealing, mseal() and mmap().
+
+mseal()
+-----------
+The mseal() syscall has the following signature:
+
+``int mseal(void *addr, size_t len, unsigned long flags)``
+
+**addr/len**: virtual memory address range.
+
+The address range set by ``addr``/``len`` must meet:
+ - The start address must be in an allocated VMA.
+ - The start address must be page aligned.
+ - The end address (``addr`` + ``len``) must be in an allocated VMA.
+ - No gap (unallocated memory) between start and end address.
+
+The ``len`` is implicitly rounded up to a page boundary by the kernel.
+
+**flags**: reserved for future use.
+
+**return values**:
+
+- ``0``: Success.
+
+- ``-EINVAL``:
+ - Invalid input ``flags``.
+ - The start address (``addr``) is not page aligned.
+ - Address range (``addr`` + ``len``) overflow.
+
+- ``-ENOMEM``:
+ - The start address (``addr``) is not allocated.
+ - The end address (``addr`` + ``len``) is not allocated.
+ - A gap (unallocated memory) between start and end address.
+
+- ``-EACCES``:
+ - ``MAP_SEALABLE`` is not set during mmap().
+
+- ``-EPERM``:
+ - sealing is supported only on 64-bit CPUs; 32-bit is not supported.
+
+- For the above error cases, users can expect that the given memory range is
+ left unmodified, i.e. no partial update.
+
+- There might be other internal errors/cases not listed here, e.g.
+ error during merging/splitting VMAs, or the process reaching the max
+ number of supported VMAs. In those cases, partial updates to the given
+ memory range could happen. However, those cases shall be rare.
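+
+A short example of the basic usage (illustrative only; since libc does not
+yet provide a wrapper, mseal() is invoked via syscall(2))::
+
+	ptr = mmap(NULL, size, PROT_READ,
+		   MAP_ANONYMOUS | MAP_PRIVATE | MAP_SEALABLE, -1, 0);
+	ret = syscall(__NR_mseal, ptr, size, 0);	/* 0 on success */
+
+	/* without MAP_SEALABLE, sealing is refused */
+	ptr = mmap(NULL, size, PROT_READ, MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
+	ret = syscall(__NR_mseal, ptr, size, 0);	/* -1, errno == EACCES */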
+
+**Blocked operations after sealing**:
+ Unmapping, moving to another location, and shrinking the size,
+ via munmap() and mremap(), can leave an empty space, therefore
+ can be replaced with a VMA with a new set of attributes.
+
+ Moving or expanding a different VMA into the current location,
+ via mremap().
+
+ Modifying a VMA via mmap(MAP_FIXED).
+
+ Size expansion, via mremap(), does not appear to pose any
+ specific risks to sealed VMAs. It is included anyway because
+ the use case is unclear. In any case, users can rely on
+ merging to expand a sealed VMA.
+
+ mprotect() and pkey_mprotect().
+
+ Some destructive madvise() behaviors (e.g. MADV_DONTNEED)
+ for anonymous memory, when users don't have write permission to the
+ memory. Those behaviors can alter region contents by discarding pages,
+ effectively a memset(0) for anonymous memory.
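+
+For example, once a range is sealed, each of the blocked operations fails
+and leaves the mapping intact (sketch; only failure is checked here, the
+exact errno varies by operation)::
+
+	syscall(__NR_mseal, ptr, size, 0);	/* 0 */
+	munmap(ptr, size);			/* fails */
+	mprotect(ptr, size, PROT_NONE);		/* fails */
+	mremap(ptr, size, size / 2, 0);		/* fails, EACCES */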
+
+**Note**:
+
+- mseal() only works on 64-bit CPUs, not 32-bit CPUs.
+
+- users can call mseal() multiple times; mseal() on already sealed memory
+ is a no-op (not an error).
+
+- munseal() is not supported.
+
+mmap()
+----------
+``void *mmap(void* addr, size_t length, int prot, int flags, int fd,
+off_t offset);``
+
+We add two changes, in the ``prot`` and ``flags`` fields of mmap(), related
+to memory sealing.
+
+**prot**
+
+The ``PROT_SEAL`` bit in ``prot`` field of mmap().
+
+When present, it marks the memory as sealed from creation.
+
+This is useful as an optimization because it avoids having to make two
+system calls: one for mmap() and one for mseal().
+
+It's worth noting that even though the sealing is set via the
+``prot`` field in mmap(), it can't be set in the ``prot``
+field in later mprotect(). This is unlike the ``PROT_READ``,
+``PROT_WRITE``, ``PROT_EXEC`` bits, e.g. if ``PROT_WRITE`` is not set in
+mprotect(), it means that the region is not writable.
+
+Setting ``PROT_SEAL`` implies setting ``MAP_SEALABLE`` below.
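+
+For example, the following two sequences (sketch) end in the same sealed
+state::
+
+	/* map and seal in one call */
+	ptr = mmap(NULL, size, PROT_READ | PROT_SEAL,
+		   MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
+
+	/* map sealable, then seal */
+	ptr = mmap(NULL, size, PROT_READ,
+		   MAP_ANONYMOUS | MAP_PRIVATE | MAP_SEALABLE, -1, 0);
+	syscall(__NR_mseal, ptr, size, 0);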
+
+**flags**
+
+The ``MAP_SEALABLE`` bit in the ``flags`` field of mmap().
+
+When present, it marks the map as sealable. A map created
+without ``MAP_SEALABLE`` will not support sealing; in other words,
+mseal() will fail for such a map.
+
+Applications that don't care about sealing will expect their
+behavior unchanged. For those that need sealing support, opt-in
+by adding ``MAP_SEALABLE`` in mmap().
+
+Note: for a map created without ``MAP_SEALABLE`` or a map created
+with ``MAP_SEALABLE`` but not sealed yet, mmap(MAP_FIXED) can
+change the sealable or sealing bit.
+
+Use Case:
+=========
+- glibc:
+ The dynamic linker, while loading ELF executables, can apply sealing to
+ non-writable memory segments.
+
+- Chrome browser: protect some security-sensitive data structures.
+
+Additional notes:
+=================
+As Jann Horn pointed out in [3], there are still a few ways to write
+to RO memory, which is, in a way, by design. Those cases are not covered
+by mseal(). If applications want to block such cases, sandbox tools (such as
+seccomp, LSM, etc) might be considered.
+
+Those cases are:
+
+- Write to read-only memory through the /proc/self/mem interface.
+- Write to read-only memory through ptrace (such as PTRACE_POKETEXT).
+- userfaultfd.
+
+The idea that inspired this patch comes from Stephen Röttger’s work in V8
+CFI [4]. Chrome browser in ChromeOS will be the first user of this API.
+
+Reference:
+==========
+[1] https://github.com/apple-oss-distributions/xnu/blob/1031c584a5e37aff177559b9f69dbd3c8c3fd30a/osfmk/mach/vm_statistics.h#L274
+
+[2] https://man.openbsd.org/mimmutable.2
+
+[3] https://lore.kernel.org/lkml/CAG48ez3ShUYey+ZAFsU2i1RpQn0a5eOs2hzQ426FkcgnfUGLvA@mail.gmail.com
+
+[4] https://docs.google.com/document/d/1O2jwK4dxI3nRcOJuPYkonhTkNQfbmwdvxQMyXgeaRHo/edit#heading=h.bvaojj9fu6hc
--
2.43.0.195.gebba966016-goog
On 1/4/24 10:51, [email protected] wrote:
> From: Jeff Xu <[email protected]>
>
> Add documentation for mseal().
>
> Signed-off-by: Jeff Xu <[email protected]>
> ---
> Documentation/userspace-api/mseal.rst | 181 ++++++++++++++++++++++++++
> 1 file changed, 181 insertions(+)
> create mode 100644 Documentation/userspace-api/mseal.rst
>
> diff --git a/Documentation/userspace-api/mseal.rst b/Documentation/userspace-api/mseal.rst
> new file mode 100644
> index 000000000000..1700ce5af218
> --- /dev/null
> +++ b/Documentation/userspace-api/mseal.rst
> @@ -0,0 +1,181 @@
> +.. SPDX-License-Identifier: GPL-2.0
> +
> +=====================
> +Introduction of mseal
> +=====================
> +
> +:Author: Jeff Xu <[email protected]>
> +
> +Modern CPUs support memory permissions such as RW and NX bits. The memory
> +permission feature improves security stance on memory corruption bugs, i.e.
> +the attacker can’t just write to arbitrary memory and point the code to it,
> +the memory has to be marked with X bit, or else an exception will happen.
> +
> +Memory sealing additionally protects the mapping itself against
> +modifications. This is useful to mitigate memory corruption issues where a
> +corrupted pointer is passed to a memory management system. For example,
> +such an attacker primitive can break control-flow integrity guarantees
> +since read-only memory that is supposed to be trusted can become writable
> +or .text pages can get remapped. Memory sealing can automatically be
> +applied by the runtime loader to seal .text and .rodata pages and
> +applications can additionally seal security critical data at runtime.
> +
> +A similar feature already exists in the XNU kernel with the
> +VM_FLAGS_PERMANENT flag [1] and on OpenBSD with the mimmutable syscall [2].
> +
> +User API
> +========
> +Two system calls are involved in virtual memory sealing, mseal() and mmap().
> +
> +mseal()
> +-----------
> +The mseal() syscall has following signature:
> +
> +``int mseal(void addr, size_t len, unsigned long flags)``
> +
> +**addr/len**: virtual memory address range.
> +
> +The address range set by ``addr``/``len`` must meet:
> + - The start address must be in an allocated VMA.
> + - The start address must be page aligned.
> + - The end address (``addr`` + ``len``) must be in an allocated VMA.
> + - no gap (unallocated memory) between start and end address.
> +
> +The ``len`` will be paged aligned implicitly by the kernel.
Does that mean that the <len> will be extended to be page aligned
if it's not already page aligned?
> +
> +**flags**: reserved for future use.
> +
> +**return values**:
> +
> +- ``0``: Success.
> +
> +- ``-EINVAL``:
> + - Invalid input ``flags``.
> + - The start address (``addr``) is not page aligned.
> + - Address range (``addr`` + ``len``) overflow.
> +
> +- ``-ENOMEM``:
> + - The start address (``addr``) is not allocated.
> + - The end address (``addr`` + ``len``) is not allocated.
> + - A gap (unallocated memory) between start and end address.
> +
> +- ``-EACCES``:
> + - ``MAP_SEALABLE`` is not set during mmap().
> +
> +- ``-EPERM``:
> + - sealing is supported only on 64 bit CPUs, 32-bit is not supported.
64-bit
> +
> +- For above error cases, users can expect the given memory range is
> + unmodified, i.e. no partial update.
> +
> +- There might be other internal errors/cases not listed here, e.g.
> + error during merging/splitting VMAs, or the process reaching the max
> + number of supported VMAs. In those cases, partial updates to the given
> + memory range could happen. However, those cases shall be rare.
> +
> +**Blocked operations after sealing**:
> + Unmapping, moving to another location, and shrinking the size,
> + via munmap() and mremap(), can leave an empty space, therefore
> + can be replaced with a VMA with a new set of attributes.
> +
> + Moving or expanding a different VMA into the current location,
> + via mremap().
> +
> + Modifying a VMA via mmap(MAP_FIXED).
> +
> + Size expansion, via mremap(), does not appear to pose any
> + specific risks to sealed VMAs. It is included anyway because
> + the use case is unclear. In any case, users can rely on
> + merging to expand a sealed VMA.
> +
> + mprotect() and pkey_mprotect().
> +
> + Some destructive madvice() behaviors (e.g. MADV_DONTNEED)
> + for anonymous memory, when users don't have write permission to the
> + memory. Those behaviors can alter region contents by discarding pages,
> + effectively a memset(0) for anonymous memory.
> +
> +**Note**:
> +
> +- mseal() only works on 64-bit CPUs, not 32-bit CPU.
> +
> +- users can call mseal() multiple times, mseal() on an already sealed memory
times;
> + is a no-action (not error).
> +
> +- munseal() is not supported.
> +
> +mmap()
> +----------
> +``void *mmap(void* addr, size_t length, int prot, int flags, int fd,
> +off_t offset);``
> +
> +We add two changes in ``prot`` and ``flags`` of mmap() related to
> +memory sealing.
> +
> +**prot**
> +
> +The ``PROT_SEAL`` bit in ``prot`` field of mmap().
> +
> +When present, it marks the memory is sealed since creation.
> +
> +This is useful as optimization because it avoids having to make two
> +system calls: one for mmap() and one for mseal().
> +
> +It's worth noting that even though the sealing is set via the
> +``prot`` field in mmap(), it can't be set in the ``prot``
> +field in later mprotect(). This is unlike the ``PROT_READ``,
> +``PROT_WRITE``, ``PROT_EXEC`` bits, e.g. if ``PROT_WRITE`` is not set in
> +mprotect(), it means that the region is not writable.
> +
> +Setting ``PROT_SEAL`` implies setting ``MAP_SEALABLE`` below.
> +
> +**flags**
> +
> +The ``MAP_SEALABLE`` bit in the ``flags`` field of mmap().
> +
> +When present, it marks the map as sealable. A map created
> +without ``MAP_SEALABLE`` will not support sealing; In other words,
> +mseal() will fail for such a map.
> +
> +
> +Applications that don't care about sealing will expect their
> +behavior unchanged. For those that need sealing support, opt-in
> +by adding ``MAP_SEALABLE`` in mmap().
> +
> +Note: for a map created without ``MAP_SEALABLE`` or a map created
> +with ``MAP_SEALABLE`` but not sealed yet, mmap(MAP_FIXED) can
> +change the sealable or sealing bit.
> +
> +Use Case:
> +=========
> +- glibc:
> + The dynamic linker, during loading ELF executables, can apply sealing to
during loading of
or
while loading
> + non-writable memory segments.
> +
> +- Chrome browser: protect some security sensitive data-structures.
security-sensitive data structures.
> +
> +Additional notes:
> +=================
> +As Jann Horn pointed out in [3], there are still a few ways to write
> +to RO memory, which is, in a way, by design. Those cases are not covered
> +by mseal(). If applications want to block such cases, sandbox tools (such as
> +seccomp, LSM, etc) might be considered.
> +
> +Those cases are:
> +
> +- Write to read-only memory through /proc/self/mem interface.
> +- Write to read-only memory through ptrace (such as PTRACE_POKETEXT).
> +- userfaultfd.
> +
> +The idea that inspired this patch comes from Stephen Röttger’s work in V8
> +CFI [4]. Chrome browser in ChromeOS will be the first user of this API.
> +
> +Reference:
> +==========
> +[1] https://github.com/apple-oss-distributions/xnu/blob/1031c584a5e37aff177559b9f69dbd3c8c3fd30a/osfmk/mach/vm_statistics.h#L274
> +
> +[2] https://man.openbsd.org/mimmutable.2
> +
> +[3] https://lore.kernel.org/lkml/CAG48ez3ShUYey+ZAFsU2i1RpQn0a5eOs2hzQ426FkcgnfUGLvA@mail.gmail.com
> +
> +[4] https://docs.google.com/document/d/1O2jwK4dxI3nRcOJuPYkonhTkNQfbmwdvxQMyXgeaRHo/edit#heading=h.bvaojj9fu6hc
--
#Randy
On Thu, Jan 04, 2024 at 06:51:34PM +0000, [email protected] wrote:
> From: Jeff Xu <[email protected]>
>
> Wire up mseal syscall for all architectures.
>
> Signed-off-by: Jeff Xu <[email protected]>
Doesn't this break the build at this point in the patch series?
Normally you add the code for the syscall first, and then you wire it
up.
thanks,
greg k-h
On Thu, Jan 4, 2024 at 3:47 PM Randy Dunlap <[email protected]> wrote:
>
>
>
> On 1/4/24 10:51, [email protected] wrote:
> > From: Jeff Xu <[email protected]>
> >
> > Add documentation for mseal().
> >
> > Signed-off-by: Jeff Xu <[email protected]>
> > ---
> > Documentation/userspace-api/mseal.rst | 181 ++++++++++++++++++++++++++
> > 1 file changed, 181 insertions(+)
> > create mode 100644 Documentation/userspace-api/mseal.rst
> >
> > diff --git a/Documentation/userspace-api/mseal.rst b/Documentation/userspace-api/mseal.rst
> > new file mode 100644
> > index 000000000000..1700ce5af218
> > --- /dev/null
> > +++ b/Documentation/userspace-api/mseal.rst
> > @@ -0,0 +1,181 @@
> > +.. SPDX-License-Identifier: GPL-2.0
> > +
> > +=====================
> > +Introduction of mseal
> > +=====================
> > +
> > +:Author: Jeff Xu <[email protected]>
> > +
> > +Modern CPUs support memory permissions such as RW and NX bits. The memory
> > +permission feature improves security stance on memory corruption bugs, i.e.
> > +the attacker can’t just write to arbitrary memory and point the code to it,
> > +the memory has to be marked with X bit, or else an exception will happen.
> > +
> > +Memory sealing additionally protects the mapping itself against
> > +modifications. This is useful to mitigate memory corruption issues where a
> > +corrupted pointer is passed to a memory management system. For example,
> > +such an attacker primitive can break control-flow integrity guarantees
> > +since read-only memory that is supposed to be trusted can become writable
> > +or .text pages can get remapped. Memory sealing can automatically be
> > +applied by the runtime loader to seal .text and .rodata pages and
> > +applications can additionally seal security critical data at runtime.
> > +
> > +A similar feature already exists in the XNU kernel with the
> > +VM_FLAGS_PERMANENT flag [1] and on OpenBSD with the mimmutable syscall [2].
> > +
> > +User API
> > +========
> > +Two system calls are involved in virtual memory sealing, mseal() and mmap().
> > +
> > +mseal()
> > +-----------
> > +The mseal() syscall has following signature:
> > +
> > +``int mseal(void addr, size_t len, unsigned long flags)``
> > +
> > +**addr/len**: virtual memory address range.
> > +
> > +The address range set by ``addr``/``len`` must meet:
> > + - The start address must be in an allocated VMA.
> > + - The start address must be page aligned.
> > + - The end address (``addr`` + ``len``) must be in an allocated VMA.
> > + - no gap (unallocated memory) between start and end address.
> > +
> > +The ``len`` will be paged aligned implicitly by the kernel.
>
> Does that mean that the <len> will be extended to be page aligned
> if it's not already page aligned?
>
Yes.
the code (do_mseal) rounds len up with PAGE_ALIGN(len).
mprotect() also has this.
Two test cases cover this part.
test_seal_mprotect_unalign_len
test_seal_mprotect_unalign_len_variant_2
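In other words, a rough sketch of what those tests assert (on a sealed
mapping):

	/* len is rounded up to a page boundary by the kernel */
	syscall(__NR_mseal, ptr, 2 * page_size - 1, 0);	/* seals 2 pages */

	mprotect(ptr, 2 * page_size, PROT_READ | PROT_WRITE);	/* fails */
	mprotect(ptr + 2 * page_size, page_size,
		 PROT_READ | PROT_WRITE);			/* succeeds */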
-Jeff
> --
> #Randy
On Thu, Jan 4, 2024 at 11:44 PM Greg KH <[email protected]> wrote:
>
> On Thu, Jan 04, 2024 at 06:51:34PM +0000, [email protected] wrote:
> > From: Jeff Xu <[email protected]>
> >
> > Wire up mseal syscall for all architectures.
> >
> > Signed-off-by: Jeff Xu <[email protected]>
>
> Doesn't this break the build at this point in the patch series?
>
Yes.
> Normally you add the code for the syscall first, and then you wire it
> up.
>
Thanks. Will update in the next version.
-Jeff
> thanks,
>
> greg k-h
One comment:
On Thu, 4 Jan 2024 at 10:51, <[email protected]> wrote:
>
> diff --git a/kernel/sys_ni.c b/kernel/sys_ni.c
> index 9a846439b36a..02280199069b 100644
> --- a/kernel/sys_ni.c
> +++ b/kernel/sys_ni.c
> @@ -193,6 +193,7 @@ COND_SYSCALL(migrate_pages);
> COND_SYSCALL(move_pages);
> COND_SYSCALL(set_mempolicy_home_node);
> COND_SYSCALL(cachestat);
> +COND_SYSCALL(mseal);
>
> COND_SYSCALL(perf_event_open);
> COND_SYSCALL(accept4);
Move this part to PATCH 1/1, so that it all builds cleanly.
Other than that, this seems all reasonable to me now.
Linus
On Sun, Jan 7, 2024 at 10:42 AM Linus Torvalds
<[email protected]> wrote:
>
> One comment:
>
> On Thu, 4 Jan 2024 at 10:51, <[email protected]> wrote:
> >
> > diff --git a/kernel/sys_ni.c b/kernel/sys_ni.c
> > index 9a846439b36a..02280199069b 100644
> > --- a/kernel/sys_ni.c
> > +++ b/kernel/sys_ni.c
> > @@ -193,6 +193,7 @@ COND_SYSCALL(migrate_pages);
> > COND_SYSCALL(move_pages);
> > COND_SYSCALL(set_mempolicy_home_node);
> > COND_SYSCALL(cachestat);
> > +COND_SYSCALL(mseal);
> >
> > COND_SYSCALL(perf_event_open);
> > COND_SYSCALL(accept4);
>
> Move this part to PATCH 1/1, so that it all builds cleanly.
>
Sure, thanks!
-Jeff
> Other than that, this seems all reasonable to me now.
>
> Linus
You have used ksft_print_msg() only. Can you please update the test to use
more TAP macros for increased readability and for printing test output. We
don't want to terminate the test if one test fails because of an assert. We
want the sub-tests to get executed independently of the other tests.
ksft_test_result(condition, fmt, ...);
ksft_test_result_pass(fmt, ...);
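For instance, a rough (untested) sketch of one converted test, using the
macros from ../kselftest.h:

	static void test_seal_addseal(void)
	{
		int ret;
		void *ptr;
		unsigned long page_size = getpagesize();
		unsigned long size = 4 * page_size;

		setup_single_address(size, &ptr);

		ret = sys_mseal(ptr, size);
		ksft_test_result(!ret, "mseal on a simple VMA\n");
	}

	int main(int argc, char **argv)
	{
		ksft_print_header();
		ksft_set_plan(1);	/* bump as more tests are converted */
		test_seal_addseal();
		ksft_finished();
	}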
See some examples below:
On 1/4/24 11:51 PM, [email protected] wrote:
> From: Jeff Xu <[email protected]>
>
> selftest for memory sealing change in mmap() and mseal().
>
> Signed-off-by: Jeff Xu <[email protected]>
> ---
> tools/testing/selftests/mm/.gitignore | 1 +
> tools/testing/selftests/mm/Makefile | 1 +
> tools/testing/selftests/mm/mseal_test.c | 1971 +++++++++++++++++++++++
> 3 files changed, 1973 insertions(+)
> create mode 100644 tools/testing/selftests/mm/mseal_test.c
>
> diff --git a/tools/testing/selftests/mm/.gitignore b/tools/testing/selftests/mm/.gitignore
> index 4ff10ea61461..76474c51c786 100644
> --- a/tools/testing/selftests/mm/.gitignore
> +++ b/tools/testing/selftests/mm/.gitignore
> @@ -46,3 +46,4 @@ gup_longterm
> mkdirty
> va_high_addr_switch
> hugetlb_fault_after_madv
> +mseal_test
> diff --git a/tools/testing/selftests/mm/Makefile b/tools/testing/selftests/mm/Makefile
> index dede0bcf97a3..652c07ff81f5 100644
> --- a/tools/testing/selftests/mm/Makefile
> +++ b/tools/testing/selftests/mm/Makefile
> @@ -59,6 +59,7 @@ TEST_GEN_FILES += mlock2-tests
> TEST_GEN_FILES += mrelease_test
> TEST_GEN_FILES += mremap_dontunmap
> TEST_GEN_FILES += mremap_test
> +TEST_GEN_FILES += mseal_test
> TEST_GEN_FILES += on-fault-limit
> TEST_GEN_FILES += pagemap_ioctl
> TEST_GEN_FILES += thuge-gen
> diff --git a/tools/testing/selftests/mm/mseal_test.c b/tools/testing/selftests/mm/mseal_test.c
> new file mode 100644
> index 000000000000..a916d30e97c2
> --- /dev/null
> +++ b/tools/testing/selftests/mm/mseal_test.c
> @@ -0,0 +1,1971 @@
> +// SPDX-License-Identifier: GPL-2.0
> +#define _GNU_SOURCE
> +#include <sys/mman.h>
> +#include <stdint.h>
> +#include <unistd.h>
> +#include <string.h>
> +#include <sys/time.h>
> +#include <sys/resource.h>
> +#include <stdbool.h>
> +#include "../kselftest.h"
> +#include <syscall.h>
> +#include <errno.h>
> +#include <stdio.h>
> +#include <stdlib.h>
> +#include <assert.h>
> +#include <fcntl.h>
> +#include <assert.h>
> +#include <sys/ioctl.h>
> +#include <sys/vfs.h>
> +#include <sys/stat.h>
> +
> +/*
> + * need those definition for manually build using gcc.
> + * gcc -I ../../../../usr/include -DDEBUG -O3 -DDEBUG -O3 mseal_test.c -o mseal_test
> + */
> +#ifndef MAP_SEALABLE
> +#define MAP_SEALABLE 0x8000000
> +#endif
> +
> +#ifndef PROT_SEAL
> +#define PROT_SEAL 0x04000000
> +#endif
> +
> +#ifndef PKEY_DISABLE_ACCESS
> +# define PKEY_DISABLE_ACCESS 0x1
> +#endif
> +
> +#ifndef PKEY_DISABLE_WRITE
> +# define PKEY_DISABLE_WRITE 0x2
> +#endif
> +
> +#ifndef PKEY_BITS_PER_KEY
> +#define PKEY_BITS_PER_PKEY 2
> +#endif
> +
> +#ifndef PKEY_MASK
> +#define PKEY_MASK (PKEY_DISABLE_ACCESS | PKEY_DISABLE_WRITE)
> +#endif
> +
> +#ifndef DEBUG
> +#define LOG_TEST_ENTER() {}
> +#else
> +#define LOG_TEST_ENTER() {ksft_print_msg("%s\n", __func__); }
> +#endif
> +
> +#ifndef u64
> +#define u64 unsigned long long
> +#endif
> +
> +static unsigned long get_vma_size(void *addr)
> +{
> + FILE *maps;
> + char line[256];
> + int size = 0;
> + uintptr_t addr_start, addr_end;
> +
> + maps = fopen("/proc/self/maps", "r");
> + if (!maps)
> + return 0;
> +
> + while (fgets(line, sizeof(line), maps)) {
> + if (sscanf(line, "%lx-%lx", &addr_start, &addr_end) == 2) {
> + if (addr_start == (uintptr_t) addr) {
> + size = addr_end - addr_start;
> + break;
> + }
> + }
> + }
> + fclose(maps);
> + return size;
> +}
> +
> +/*
> + * define sys_xyx to call syscall directly.
> + */
> +static int sys_mseal(void *start, size_t len)
> +{
> + int sret;
> +
> + errno = 0;
> + sret = syscall(__NR_mseal, start, len, 0);
> + return sret;
> +}
> +
> +static int sys_mprotect(void *ptr, size_t size, unsigned long prot)
> +{
> + int sret;
> +
> + errno = 0;
> + sret = syscall(SYS_mprotect, ptr, size, prot);
> + return sret;
> +}
> +
> +static int sys_mprotect_pkey(void *ptr, size_t size, unsigned long orig_prot,
> + unsigned long pkey)
> +{
> + int sret;
> +
> + errno = 0;
> + sret = syscall(__NR_pkey_mprotect, ptr, size, orig_prot, pkey);
> + return sret;
> +}
> +
> +/*
> + * define sys_xyx to call syscall directly.
> + */
> +static void *sys_mmap(void *addr, unsigned long len, unsigned long prot,
> + unsigned long flags, unsigned long fd, unsigned long offset)
> +{
> + void *sret;
> +
> + errno = 0;
> + sret = (void *) syscall(__NR_mmap, addr, len, prot,
> + flags, fd, offset);
> + return sret;
> +}
> +
> +static int sys_munmap(void *ptr, size_t size)
> +{
> + int sret;
> +
> + errno = 0;
> + sret = syscall(SYS_munmap, ptr, size);
> + return sret;
> +}
> +
> +static int sys_madvise(void *start, size_t len, int types)
> +{
> + int sret;
> +
> + errno = 0;
> + sret = syscall(__NR_madvise, start, len, types);
> + return sret;
> +}
> +
> +static int sys_pkey_alloc(unsigned long flags, unsigned long init_val)
> +{
> + int ret = syscall(SYS_pkey_alloc, flags, init_val);
> +
> + return ret;
> +}
> +
> +static unsigned int __read_pkey_reg(void)
> +{
> + unsigned int eax, edx;
> + unsigned int ecx = 0;
> + unsigned int pkey_reg;
> +
> + asm volatile(".byte 0x0f,0x01,0xee\n\t"
> + : "=a" (eax), "=d" (edx)
> + : "c" (ecx));
> + pkey_reg = eax;
> + return pkey_reg;
> +}
> +
> +static void __write_pkey_reg(u64 pkey_reg)
> +{
> + unsigned int eax = pkey_reg;
> + unsigned int ecx = 0;
> + unsigned int edx = 0;
> +
> + asm volatile(".byte 0x0f,0x01,0xef\n\t"
> + : : "a" (eax), "c" (ecx), "d" (edx));
> + assert(pkey_reg == __read_pkey_reg());
> +}
> +
> +static unsigned long pkey_bit_position(int pkey)
> +{
> + return pkey * PKEY_BITS_PER_PKEY;
> +}
> +
> +static u64 set_pkey_bits(u64 reg, int pkey, u64 flags)
> +{
> + unsigned long shift = pkey_bit_position(pkey);
> +
> + /* mask out bits from pkey in old value */
> + reg &= ~((u64)PKEY_MASK << shift);
> + /* OR in new bits for pkey */
> + reg |= (flags & PKEY_MASK) << shift;
> + return reg;
> +}
> +
> +static void set_pkey(int pkey, unsigned long pkey_value)
> +{
> + unsigned long mask = (PKEY_DISABLE_ACCESS | PKEY_DISABLE_WRITE);
> + u64 new_pkey_reg;
> +
> + assert(!(pkey_value & ~mask));
> + new_pkey_reg = set_pkey_bits(__read_pkey_reg(), pkey, pkey_value);
> + __write_pkey_reg(new_pkey_reg);
> +}
> +
> +static void setup_single_address(int size, void **ptrOut)
> +{
> + void *ptr;
> +
> + ptr = sys_mmap(NULL, size, PROT_READ, MAP_ANONYMOUS | MAP_PRIVATE | MAP_SEALABLE, -1, 0);
> + assert(ptr != (void *)-1);
ksft_exit_fail_msg() here as this is a fatal error and the test should abort.
> + *ptrOut = ptr;
> +}
> +
> +static void setup_single_address_sealable(int size, void **ptrOut, bool sealable)
> +{
> + void *ptr;
> + unsigned long mapflags = MAP_ANONYMOUS | MAP_PRIVATE;
> +
> + if (sealable)
> + mapflags |= MAP_SEALABLE;
> +
> + ptr = sys_mmap(NULL, size, PROT_READ, mapflags, -1, 0);
> + assert(ptr != (void *)-1);
> + *ptrOut = ptr;
> +}
> +
> +static void setup_single_address_rw_sealable(int size, void **ptrOut, bool sealable)
> +{
> + void *ptr;
> + unsigned long mapflags = MAP_ANONYMOUS | MAP_PRIVATE;
> +
> + if (sealable)
> + mapflags |= MAP_SEALABLE;
> +
> + ptr = sys_mmap(NULL, size, PROT_READ | PROT_WRITE, mapflags, -1, 0);
> + assert(ptr != (void *)-1);
> + *ptrOut = ptr;
> +}
> +
> +static void clean_single_address(void *ptr, int size)
> +{
> + int ret;
> +
> + ret = munmap(ptr, size);
> + assert(!ret);
> +}
> +
> +static void seal_single_address(void *ptr, int size)
> +{
> + int ret;
> +
> + ret = sys_mseal(ptr, size);
> + assert(!ret);
> +}
> +
> +static void test_seal_addseal(void)
> +{
> + LOG_TEST_ENTER();
Move this to after the variable declarations. Not sure why you aren't already
getting a warning that you are writing code before the variable declarations.
> + int ret;
> + void *ptr;
> + unsigned long page_size = getpagesize();
> + unsigned long size = 4 * page_size;
> +
> + setup_single_address(size, &ptr);
> +
> + ret = sys_mseal(ptr, size);
ksft_test_result(!ret, "mseal syscall\n");
> + assert(!ret);
> +}
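i.e. roughly (a sketch only; the run then continues and reports each
sub-test instead of aborting on the first assert):

	static void test_seal_addseal(void)
	{
		int ret;
		void *ptr;
		unsigned long page_size = getpagesize();
		unsigned long size = 4 * page_size;

		setup_single_address(size, &ptr);

		ret = sys_mseal(ptr, size);
		ksft_test_result(!ret, "mseal syscall\n");
	}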
> +
> +static void test_seal_unmapped_start(void)
> +{
> + LOG_TEST_ENTER();
> + int ret;
> + void *ptr;
> + unsigned long page_size = getpagesize();
> + unsigned long size = 4 * page_size;
> +
> + setup_single_address(size, &ptr);
> +
> + /* munmap 2 pages from ptr. */
> + ret = sys_munmap(ptr, 2 * page_size);
> + assert(!ret);
> +
> + /* mprotect will fail because 2 pages from ptr are unmapped. */
> + ret = sys_mprotect(ptr, size, PROT_READ | PROT_WRITE);
> + assert(ret < 0);
> +
> + /* mseal will fail because 2 pages from ptr are unmapped. */
> + ret = sys_mseal(ptr, size);
> + assert(ret < 0);
> +
> + ret = sys_mseal(ptr + 2 * page_size, 2 * page_size);
> + assert(!ret);
> +}
> +
> +static void test_seal_unmapped_middle(void)
> +{
> + LOG_TEST_ENTER();
> + int ret;
> + void *ptr;
> + unsigned long page_size = getpagesize();
> + unsigned long size = 4 * page_size;
> +
> + setup_single_address(size, &ptr);
> +
> + /* munmap 2 pages from ptr + page. */
> + ret = sys_munmap(ptr + page_size, 2 * page_size);
> + assert(!ret);
> +
> + /* mprotect will fail, since size is 4 pages. */
> + ret = sys_mprotect(ptr, size, PROT_READ | PROT_WRITE);
> + assert(ret < 0);
> +
> + /* mseal will fail as well. */
> + ret = sys_mseal(ptr, size);
> + assert(ret < 0);
> +
> +	/* we can still seal the first page and the last page */
> + ret = sys_mseal(ptr, page_size);
> + assert(!ret);
> +
> + ret = sys_mseal(ptr + 3 * page_size, page_size);
> + assert(!ret);
> +}
> +
> +static void test_seal_unmapped_end(void)
> +{
> + LOG_TEST_ENTER();
> + int ret;
> + void *ptr;
> + unsigned long page_size = getpagesize();
> + unsigned long size = 4 * page_size;
> +
> + setup_single_address(size, &ptr);
> +
> + /* unmap last 2 pages. */
> + ret = sys_munmap(ptr + 2 * page_size, 2 * page_size);
> + assert(!ret);
> +
> + /* mprotect will fail since last 2 pages are unmapped. */
> + ret = sys_mprotect(ptr, size, PROT_READ | PROT_WRITE);
> + assert(ret < 0);
> +
> + /* mseal will fail as well. */
> + ret = sys_mseal(ptr, size);
> + assert(ret < 0);
> +
> +	/* The first 2 pages are not sealed, and seals can be added */
> + ret = sys_mseal(ptr, 2 * page_size);
> + assert(!ret);
> +}
> +
> +static void test_seal_multiple_vmas(void)
> +{
> + LOG_TEST_ENTER();
> + int ret;
> + void *ptr;
> + unsigned long page_size = getpagesize();
> + unsigned long size = 4 * page_size;
> +
> + setup_single_address(size, &ptr);
> +
> + /* use mprotect to split the vma into 3. */
> + ret = sys_mprotect(ptr + page_size, 2 * page_size,
> + PROT_READ | PROT_WRITE);
> + assert(!ret);
> +
> + /* mprotect will get applied to all 4 pages - 3 VMAs. */
> + ret = sys_mprotect(ptr, size, PROT_READ);
> + assert(!ret);
> +
> + /* use mprotect to split the vma into 3. */
> + ret = sys_mprotect(ptr + page_size, 2 * page_size,
> + PROT_READ | PROT_WRITE);
> + assert(!ret);
> +
> +	/* mseal gets applied to all 4 pages - 3 VMAs. */
> + ret = sys_mseal(ptr, size);
> + assert(!ret);
> +}
> +
> +static void test_seal_split_start(void)
> +{
> + LOG_TEST_ENTER();
> + int ret;
> + void *ptr;
> + unsigned long page_size = getpagesize();
> + unsigned long size = 4 * page_size;
> +
> + setup_single_address(size, &ptr);
> +
> + /* use mprotect to split at middle */
> + ret = sys_mprotect(ptr, 2 * page_size, PROT_READ | PROT_WRITE);
> + assert(!ret);
> +
> + /* seal the first page, this will split the VMA */
> + ret = sys_mseal(ptr, page_size);
> + assert(!ret);
> +
> +	/* seal the remaining 3 pages */
> + ret = sys_mseal(ptr + page_size, 3 * page_size);
> + assert(!ret);
> +}
> +
> +static void test_seal_split_end(void)
> +{
> + LOG_TEST_ENTER();
> + int ret;
> + void *ptr;
> + unsigned long page_size = getpagesize();
> + unsigned long size = 4 * page_size;
> +
> + setup_single_address(size, &ptr);
> +
> + /* use mprotect to split at middle */
> + ret = sys_mprotect(ptr, 2 * page_size, PROT_READ | PROT_WRITE);
> + assert(!ret);
> +
> + /* seal the last page */
> + ret = sys_mseal(ptr + 3 * page_size, page_size);
> + assert(!ret);
> +
> + /* Adding seals to the first 3 pages */
> + ret = sys_mseal(ptr, 3 * page_size);
> + assert(!ret);
> +}
> +
> +static void test_seal_invalid_input(void)
> +{
> + LOG_TEST_ENTER();
> + void *ptr;
> + unsigned long page_size = getpagesize();
> + unsigned long size = 4 * page_size;
> + int ret;
> +
> + setup_single_address(8 * page_size, &ptr);
> + clean_single_address(ptr + 4 * page_size, 4 * page_size);
> +
> + /* invalid flag */
> + ret = syscall(__NR_mseal, ptr, size, 0x20);
> + assert(ret < 0);
> +
> + /* unaligned address */
> + ret = sys_mseal(ptr + 1, 2 * page_size);
> + assert(ret < 0);
> +
> + /* length too big */
> + ret = sys_mseal(ptr, 5 * page_size);
> + assert(ret < 0);
> +
> + /* start is not in a valid VMA */
> + ret = sys_mseal(ptr - page_size, 5 * page_size);
> + assert(ret < 0);
> +}
> +
> +static void test_seal_zero_length(void)
> +{
> + LOG_TEST_ENTER();
> + void *ptr;
> + unsigned long page_size = getpagesize();
> + unsigned long size = 4 * page_size;
> + int ret;
> +
> + setup_single_address(size, &ptr);
> +
> + ret = sys_mprotect(ptr, 0, PROT_READ | PROT_WRITE);
> + assert(!ret);
> +
> + /* seal 0 length will be OK, same as mprotect */
> + ret = sys_mseal(ptr, 0);
> + assert(!ret);
> +
> + /* verify the 4 pages are not sealed by previous call. */
> + ret = sys_mprotect(ptr, size, PROT_READ | PROT_WRITE);
> + assert(!ret);
> +}
> +
> +static void test_seal_twice(void)
> +{
> + LOG_TEST_ENTER();
> + int ret;
> + void *ptr;
> + unsigned long page_size = getpagesize();
> + unsigned long size = 4 * page_size;
> +
> + setup_single_address(size, &ptr);
> +
> + ret = sys_mseal(ptr, size);
> + assert(!ret);
> +
> + /* apply the same seal will be OK. idempotent. */
> + ret = sys_mseal(ptr, size);
> + assert(!ret);
> +}
> +
> +static void test_seal_mprotect(bool seal)
> +{
> + LOG_TEST_ENTER();
> + void *ptr;
> + unsigned long page_size = getpagesize();
> + unsigned long size = 4 * page_size;
> + int ret;
> +
> + setup_single_address(size, &ptr);
> +
> + if (seal)
> + seal_single_address(ptr, size);
> +
> + ret = sys_mprotect(ptr, size, PROT_READ | PROT_WRITE);
> + if (seal)
> + assert(ret < 0);
> + else
> + assert(!ret);
> +}
> +
> +static void test_seal_start_mprotect(bool seal)
> +{
> + LOG_TEST_ENTER();
> + void *ptr;
> + unsigned long page_size = getpagesize();
> + unsigned long size = 4 * page_size;
> + int ret;
> +
> + setup_single_address(size, &ptr);
> +
> + if (seal)
> + seal_single_address(ptr, page_size);
> +
> + /* the first page is sealed. */
> + ret = sys_mprotect(ptr, page_size, PROT_READ | PROT_WRITE);
> + if (seal)
> + assert(ret < 0);
> + else
> + assert(!ret);
> +
> +	/* the pages after the first one are not sealed. */
> + ret = sys_mprotect(ptr + page_size, page_size * 3,
> + PROT_READ | PROT_WRITE);
> + assert(!ret);
> +}
> +
> +static void test_seal_end_mprotect(bool seal)
> +{
> + LOG_TEST_ENTER();
> + void *ptr;
> + unsigned long page_size = getpagesize();
> + unsigned long size = 4 * page_size;
> + int ret;
> +
> + setup_single_address(size, &ptr);
> +
> + if (seal)
> + seal_single_address(ptr + page_size, 3 * page_size);
> +
> + /* first page is not sealed */
> + ret = sys_mprotect(ptr, page_size, PROT_READ | PROT_WRITE);
> + assert(!ret);
> +
> +	/* the last 3 pages are sealed */
> + ret = sys_mprotect(ptr + page_size, page_size * 3,
> + PROT_READ | PROT_WRITE);
> + if (seal)
> + assert(ret < 0);
> + else
> + assert(!ret);
> +}
> +
> +static void test_seal_mprotect_unalign_len(bool seal)
> +{
> + LOG_TEST_ENTER();
> + void *ptr;
> + unsigned long page_size = getpagesize();
> + unsigned long size = 4 * page_size;
> + int ret;
> +
> + setup_single_address(size, &ptr);
> +
> + if (seal)
> + seal_single_address(ptr, page_size * 2 - 1);
> +
> + /* 2 pages are sealed. */
> + ret = sys_mprotect(ptr, page_size * 2, PROT_READ | PROT_WRITE);
> + if (seal)
> + assert(ret < 0);
> + else
> + assert(!ret);
> +
> + ret = sys_mprotect(ptr + page_size * 2, page_size,
> + PROT_READ | PROT_WRITE);
> + assert(!ret);
> +}
> +
> +static void test_seal_mprotect_unalign_len_variant_2(bool seal)
> +{
> + LOG_TEST_ENTER();
> + void *ptr;
> + unsigned long page_size = getpagesize();
> + unsigned long size = 4 * page_size;
> + int ret;
> +
> + setup_single_address(size, &ptr);
> + if (seal)
> + seal_single_address(ptr, page_size * 2 + 1);
> +
> + /* 3 pages are sealed. */
> + ret = sys_mprotect(ptr, page_size * 3, PROT_READ | PROT_WRITE);
> + if (seal)
> + assert(ret < 0);
> + else
> + assert(!ret);
> +
> + ret = sys_mprotect(ptr + page_size * 3, page_size,
> + PROT_READ | PROT_WRITE);
> + assert(!ret);
> +}
> +
> +static void test_seal_mprotect_two_vma(bool seal)
> +{
> + LOG_TEST_ENTER();
> + void *ptr;
> + unsigned long page_size = getpagesize();
> + unsigned long size = 4 * page_size;
> + int ret;
> +
> + setup_single_address(size, &ptr);
> +
> + /* use mprotect to split */
> + ret = sys_mprotect(ptr, page_size * 2, PROT_READ | PROT_WRITE);
> + assert(!ret);
> +
> + if (seal)
> + seal_single_address(ptr, page_size * 4);
> +
> + ret = sys_mprotect(ptr, page_size * 2, PROT_READ | PROT_WRITE);
> + if (seal)
> + assert(ret < 0);
> + else
> + assert(!ret);
> +
> + ret = sys_mprotect(ptr + page_size * 2, page_size * 2,
> + PROT_READ | PROT_WRITE);
> + if (seal)
> + assert(ret < 0);
> + else
> + assert(!ret);
> +}
> +
> +static void test_seal_mprotect_two_vma_with_split(bool seal)
> +{
> + LOG_TEST_ENTER();
> + void *ptr;
> + unsigned long page_size = getpagesize();
> + unsigned long size = 4 * page_size;
> + int ret;
> +
> + setup_single_address(size, &ptr);
> +
> +	/* use mprotect to split into two VMAs. */
> + ret = sys_mprotect(ptr, page_size * 2, PROT_READ | PROT_WRITE);
> + assert(!ret);
> +
> +	/* mseal can apply across 2 VMAs, and will split them. */
> + if (seal)
> + seal_single_address(ptr + page_size, page_size * 2);
> +
> + /* the first page is not sealed. */
> + ret = sys_mprotect(ptr, page_size, PROT_READ | PROT_WRITE);
> + assert(!ret);
> +
> + /* the second page is sealed. */
> + ret = sys_mprotect(ptr + page_size, page_size, PROT_READ | PROT_WRITE);
> + if (seal)
> + assert(ret < 0);
> + else
> + assert(!ret);
> +
> + /* the third page is sealed. */
> + ret = sys_mprotect(ptr + 2 * page_size, page_size,
> + PROT_READ | PROT_WRITE);
> + if (seal)
> + assert(ret < 0);
> + else
> + assert(!ret);
> +
> +	/* the fourth page is not sealed. */
> + ret = sys_mprotect(ptr + 3 * page_size, page_size,
> + PROT_READ | PROT_WRITE);
> + assert(!ret);
> +}
> +
> +static void test_seal_mprotect_partial_mprotect(bool seal)
> +{
> + LOG_TEST_ENTER();
> + void *ptr;
> + unsigned long page_size = getpagesize();
> + unsigned long size = 4 * page_size;
> + int ret;
> +
> + setup_single_address(size, &ptr);
> +
> + /* seal one page. */
> + if (seal)
> + seal_single_address(ptr, page_size);
> +
> +	/* mprotect on the first 2 pages will fail, since the first page is sealed. */
> + ret = sys_mprotect(ptr, 2 * page_size, PROT_READ | PROT_WRITE);
> + if (seal)
> + assert(ret < 0);
> + else
> + assert(!ret);
> +}
> +
> +static void test_seal_mprotect_two_vma_with_gap(bool seal)
> +{
> + LOG_TEST_ENTER();
> + void *ptr;
> + unsigned long page_size = getpagesize();
> + unsigned long size = 4 * page_size;
> + int ret;
> +
> + setup_single_address(size, &ptr);
> +
> + /* use mprotect to split. */
> + ret = sys_mprotect(ptr, page_size, PROT_READ | PROT_WRITE);
> + assert(!ret);
> +
> + /* use mprotect to split. */
> + ret = sys_mprotect(ptr + 3 * page_size, page_size,
> + PROT_READ | PROT_WRITE);
> + assert(!ret);
> +
> + /* use munmap to free two pages in the middle */
> + ret = sys_munmap(ptr + page_size, 2 * page_size);
> + assert(!ret);
> +
> + /* mprotect will fail, because there is a gap in the address. */
> +	/* note: internally, mprotect still updated the first page. */
> + ret = sys_mprotect(ptr, 4 * page_size, PROT_READ);
> + assert(ret < 0);
> +
> + /* mseal will fail as well. */
> + ret = sys_mseal(ptr, 4 * page_size);
> + assert(ret < 0);
> +
> + /* unlike mprotect, the first page is not sealed. */
> + ret = sys_mprotect(ptr, page_size, PROT_READ);
> + assert(ret == 0);
> +
> + /* the last page is not sealed. */
> + ret = sys_mprotect(ptr + 3 * page_size, page_size, PROT_READ);
> + assert(ret == 0);
> +}
> +
> +static void test_seal_mprotect_split(bool seal)
> +{
> + LOG_TEST_ENTER();
> + void *ptr;
> + unsigned long page_size = getpagesize();
> + unsigned long size = 4 * page_size;
> + int ret;
> +
> + setup_single_address(size, &ptr);
> +
> + /* use mprotect to split. */
> + ret = sys_mprotect(ptr, page_size, PROT_READ | PROT_WRITE);
> + assert(!ret);
> +
> + /* seal all 4 pages. */
> + if (seal) {
> + ret = sys_mseal(ptr, 4 * page_size);
> + assert(!ret);
> + }
> +
> + /* mprotect is sealed. */
> + ret = sys_mprotect(ptr, 2 * page_size, PROT_READ);
> + if (seal)
> + assert(ret < 0);
> + else
> + assert(!ret);
> +
> + ret = sys_mprotect(ptr + 2 * page_size, 2 * page_size, PROT_READ);
> + if (seal)
> + assert(ret < 0);
> + else
> + assert(!ret);
> +}
> +
> +static void test_seal_mprotect_merge(bool seal)
> +{
> + LOG_TEST_ENTER();
> + void *ptr;
> + unsigned long page_size = getpagesize();
> + unsigned long size = 4 * page_size;
> + int ret;
> +
> + setup_single_address(size, &ptr);
> +
> + /* use mprotect to split one page. */
> + ret = sys_mprotect(ptr, page_size, PROT_READ | PROT_WRITE);
> + assert(!ret);
> +
> + /* seal first two pages. */
> + if (seal) {
> + ret = sys_mseal(ptr, 2 * page_size);
> + assert(!ret);
> + }
> +
> + ret = sys_madvise(ptr, page_size, MADV_WILLNEED);
> + assert(!ret);
> +
> + /* 2 pages are sealed. */
> + ret = sys_mprotect(ptr, 2 * page_size, PROT_READ);
> + if (seal)
> + assert(ret < 0);
> + else
> + assert(!ret);
> +
> + /* last 2 pages are not sealed. */
> + ret = sys_mprotect(ptr + 2 * page_size, 2 * page_size, PROT_READ);
> + assert(ret == 0);
> +}
> +
> +static void test_seal_munmap(bool seal)
> +{
> + LOG_TEST_ENTER();
> + void *ptr;
> + unsigned long page_size = getpagesize();
> + unsigned long size = 4 * page_size;
> + int ret;
> +
> + setup_single_address(size, &ptr);
> +
> + if (seal) {
> + ret = sys_mseal(ptr, size);
> + assert(!ret);
> + }
> +
> + /* 4 pages are sealed. */
> + ret = sys_munmap(ptr, size);
> + if (seal)
> + assert(ret < 0);
> + else
> + assert(!ret);
> +}
> +
> +/*
> + * allocate 4 pages,
> + * use mprotect to split it as two VMAs
> + * seal the whole range
> + * munmap will fail on both
> + */
The ksft_* macros can print this information to the console, for a better
view of which test failed.
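For example, each sub-check can emit its own TAP line (sketch; the test
name string is mine):

	ret = sys_munmap(ptr, page_size * 2);
	ksft_test_result(seal ? (ret < 0) : !ret,
			 "%s: munmap first two pages (seal=%d)\n",
			 __func__, seal);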
> +static void test_seal_munmap_two_vma(bool seal)
> +{
> + LOG_TEST_ENTER();
> + void *ptr;
> + unsigned long page_size = getpagesize();
> + unsigned long size = 4 * page_size;
> + int ret;
> +
> + setup_single_address(size, &ptr);
> +
> + /* use mprotect to split */
> + ret = sys_mprotect(ptr, page_size * 2, PROT_READ | PROT_WRITE);
> + assert(!ret);
> +
> + if (seal) {
> + ret = sys_mseal(ptr, size);
> + assert(!ret);
> + }
> +
> + ret = sys_munmap(ptr, page_size * 2);
> + if (seal)
> + assert(ret < 0);
> + else
> + assert(!ret);
> +
> + ret = sys_munmap(ptr + page_size, page_size * 2);
> + if (seal)
> + assert(ret < 0);
> + else
> + assert(!ret);
> +}
> +
> +/*
> + * allocate a VMA with 4 pages.
> + * munmap the middle 2 pages.
> + * seal the whole 4 pages, will fail.
> + * note: none of the pages is sealed
> + * munmap the first page will be OK.
> + * munmap the last page will be OK.
> + */
> +static void test_seal_munmap_vma_with_gap(bool seal)
> +{
> + LOG_TEST_ENTER();
> + void *ptr;
> + unsigned long page_size = getpagesize();
> + unsigned long size = 4 * page_size;
> + int ret;
> +
> + setup_single_address(size, &ptr);
> +
> + ret = sys_munmap(ptr + page_size, page_size * 2);
> + assert(!ret);
> +
> + if (seal) {
> + /* can't have gap in the middle. */
> + ret = sys_mseal(ptr, size);
> + assert(ret < 0);
> + }
> +
> + ret = sys_munmap(ptr, page_size);
> + assert(!ret);
> +
> + ret = sys_munmap(ptr + page_size * 2, page_size);
> + assert(!ret);
> +
> + ret = sys_munmap(ptr, size);
> + assert(!ret);
> +}
> +
> +static void test_munmap_start_freed(bool seal)
> +{
> + LOG_TEST_ENTER();
> + void *ptr;
> + unsigned long page_size = getpagesize();
> + unsigned long size = 4 * page_size;
> + int ret;
> +
> + setup_single_address(size, &ptr);
> +
> + /* unmap the first page. */
> + ret = sys_munmap(ptr, page_size);
> + assert(!ret);
> +
> + /* seal the last 3 pages. */
> + if (seal) {
> + ret = sys_mseal(ptr + page_size, 3 * page_size);
> + assert(!ret);
> + }
> +
> + /* unmap from the first page. */
> + ret = sys_munmap(ptr, size);
> + if (seal)
> + assert(ret < 0);
> + else
> +		/* note: this will be OK, even though the first */
> +		/* page is already unmapped. */
> + assert(!ret);
> +}
> +
> +static void test_munmap_end_freed(bool seal)
> +{
> + LOG_TEST_ENTER();
> + void *ptr;
> + unsigned long page_size = getpagesize();
> + unsigned long size = 4 * page_size;
> + int ret;
> +
> + setup_single_address(size, &ptr);
> + /* unmap last page. */
> + ret = sys_munmap(ptr + page_size * 3, page_size);
> + assert(!ret);
> +
> + /* seal the first 3 pages. */
> + if (seal) {
> + ret = sys_mseal(ptr, 3 * page_size);
> + assert(!ret);
> + }
> +
> + /* unmap all pages. */
> + ret = sys_munmap(ptr, size);
> + if (seal)
> + assert(ret < 0);
> + else
> + assert(!ret);
> +}
> +
> +static void test_munmap_middle_freed(bool seal)
> +{
> + LOG_TEST_ENTER();
> + void *ptr;
> + unsigned long page_size = getpagesize();
> + unsigned long size = 4 * page_size;
> + int ret;
> +
> + setup_single_address(size, &ptr);
> + /* unmap 2 pages in the middle. */
> + ret = sys_munmap(ptr + page_size, page_size * 2);
> + assert(!ret);
> +
> + /* seal the first page. */
> + if (seal) {
> + ret = sys_mseal(ptr, page_size);
> + assert(!ret);
> + }
> +
> + /* munmap all 4 pages. */
> + ret = sys_munmap(ptr, size);
> + if (seal)
> + assert(ret < 0);
> + else
> + assert(!ret);
> +}
> +
> +static void test_seal_mremap_shrink(bool seal)
> +{
> + LOG_TEST_ENTER();
> + void *ptr;
> + unsigned long page_size = getpagesize();
> + unsigned long size = 4 * page_size;
> + int ret;
> + void *ret2;
> +
> + setup_single_address(size, &ptr);
> +
> + if (seal) {
> + ret = sys_mseal(ptr, size);
> + assert(!ret);
> + }
> +
> + /* shrink from 4 pages to 2 pages. */
> + ret2 = mremap(ptr, size, 2 * page_size, 0, 0);
> + if (seal) {
> + assert(ret2 == MAP_FAILED);
> + assert(errno == EACCES);
> + } else {
> + assert(ret2 != MAP_FAILED);
> +
> + }
> +}
> +
> +static void test_seal_mremap_expand(bool seal)
> +{
> + LOG_TEST_ENTER();
> + void *ptr;
> + unsigned long page_size = getpagesize();
> + unsigned long size = 4 * page_size;
> + int ret;
> + void *ret2;
> +
> + setup_single_address(size, &ptr);
> +	/* unmap the last 2 pages. */
> + ret = sys_munmap(ptr + 2 * page_size, 2 * page_size);
> + assert(!ret);
> +
> + if (seal) {
> + ret = sys_mseal(ptr, 2 * page_size);
> + assert(!ret);
> + }
> +
> + /* expand from 2 page to 4 pages. */
> + ret2 = mremap(ptr, 2 * page_size, 4 * page_size, 0, 0);
> + if (seal) {
> + assert(ret2 == MAP_FAILED);
> + assert(errno == EACCES);
> + } else {
> + assert(ret2 == ptr);
> +
> + }
> +}
> +
> +static void test_seal_mremap_move(bool seal)
> +{
> + LOG_TEST_ENTER();
> + void *ptr, *newPtr;
> + unsigned long page_size = getpagesize();
> + unsigned long size = page_size;
> + int ret;
> + void *ret2;
> +
> + setup_single_address(size, &ptr);
> + setup_single_address(size, &newPtr);
> + clean_single_address(newPtr, size);
> +
> + if (seal) {
> + ret = sys_mseal(ptr, size);
> + assert(!ret);
> + }
> +
> + /* move from ptr to fixed address. */
> + ret2 = mremap(ptr, size, size, MREMAP_MAYMOVE | MREMAP_FIXED, newPtr);
> + if (seal) {
> + assert(ret2 == MAP_FAILED);
> + assert(errno == EACCES);
> + } else {
> + assert(ret2 != MAP_FAILED);
> +
> + }
> +}
> +
> +static void test_seal_mmap_overwrite_prot(bool seal)
> +{
> + LOG_TEST_ENTER();
> + void *ptr;
> + unsigned long page_size = getpagesize();
> + unsigned long size = page_size;
> + int ret;
> + void *ret2;
> +
> + setup_single_address(size, &ptr);
> +
> + if (seal) {
> + ret = sys_mseal(ptr, size);
> + assert(!ret);
> + }
> +
> + /* use mmap to change protection. */
> + ret2 = sys_mmap(ptr, size, PROT_NONE,
> + MAP_ANONYMOUS | MAP_PRIVATE | MAP_FIXED, -1, 0);
> + if (seal) {
> + assert(ret2 == MAP_FAILED);
> + assert(errno == EACCES);
> + } else
> + assert(ret2 == ptr);
> +}
> +
> +static void test_seal_mmap_expand(bool seal)
> +{
> + LOG_TEST_ENTER();
> + void *ptr;
> + unsigned long page_size = getpagesize();
> + unsigned long size = 12 * page_size;
> + int ret;
> + void *ret2;
> +
> + setup_single_address(size, &ptr);
> +	/* unmap the last 4 pages. */
> + ret = sys_munmap(ptr + 8 * page_size, 4 * page_size);
> + assert(!ret);
> +
> + if (seal) {
> + ret = sys_mseal(ptr, 8 * page_size);
> + assert(!ret);
> + }
> +
> + /* use mmap to expand. */
> + ret2 = sys_mmap(ptr, size, PROT_READ,
> + MAP_ANONYMOUS | MAP_PRIVATE | MAP_FIXED, -1, 0);
> + if (seal) {
> + assert(ret2 == MAP_FAILED);
> + assert(errno == EACCES);
> + } else
> + assert(ret2 == ptr);
> +}
> +
> +static void test_seal_mmap_shrink(bool seal)
> +{
> + LOG_TEST_ENTER();
> + void *ptr;
> + unsigned long page_size = getpagesize();
> + unsigned long size = 12 * page_size;
> + int ret;
> + void *ret2;
> +
> + setup_single_address(size, &ptr);
> +
> + if (seal) {
> + ret = sys_mseal(ptr, size);
> + assert(!ret);
> + }
> +
> + /* use mmap to shrink. */
> + ret2 = sys_mmap(ptr, 8 * page_size, PROT_READ,
> + MAP_ANONYMOUS | MAP_PRIVATE | MAP_FIXED, -1, 0);
> + if (seal) {
> + assert(ret2 == MAP_FAILED);
> + assert(errno == EACCES);
> + } else
> + assert(ret2 == ptr);
> +}
> +
> +static void test_seal_mremap_shrink_fixed(bool seal)
> +{
> + LOG_TEST_ENTER();
> + void *ptr;
> + void *newAddr;
> + unsigned long page_size = getpagesize();
> + unsigned long size = 4 * page_size;
> + int ret;
> + void *ret2;
> +
> + setup_single_address(size, &ptr);
> + setup_single_address(size, &newAddr);
> +
> + if (seal) {
> + ret = sys_mseal(ptr, size);
> + assert(!ret);
> + }
> +
> + /* mremap to move and shrink to fixed address */
> + ret2 = mremap(ptr, size, 2 * page_size, MREMAP_MAYMOVE | MREMAP_FIXED,
> + newAddr);
> + if (seal) {
> + assert(ret2 == MAP_FAILED);
> + assert(errno == EACCES);
> + } else
> + assert(ret2 == newAddr);
> +}
> +
> +static void test_seal_mremap_expand_fixed(bool seal)
> +{
> + LOG_TEST_ENTER();
> + void *ptr;
> + void *newAddr;
> + unsigned long page_size = getpagesize();
> + unsigned long size = 4 * page_size;
> + int ret;
> + void *ret2;
> +
> + setup_single_address(page_size, &ptr);
> + setup_single_address(size, &newAddr);
> +
> + if (seal) {
> + ret = sys_mseal(newAddr, size);
> + assert(!ret);
> + }
> +
> + /* mremap to move and expand to fixed address */
> + ret2 = mremap(ptr, page_size, size, MREMAP_MAYMOVE | MREMAP_FIXED,
> + newAddr);
> + if (seal) {
> + assert(ret2 == MAP_FAILED);
> + assert(errno == EACCES);
> + } else
> + assert(ret2 == newAddr);
> +}
> +
> +static void test_seal_mremap_move_fixed(bool seal)
> +{
> + LOG_TEST_ENTER();
> + void *ptr;
> + void *newAddr;
> + unsigned long page_size = getpagesize();
> + unsigned long size = 4 * page_size;
> + int ret;
> + void *ret2;
> +
> + setup_single_address(size, &ptr);
> + setup_single_address(size, &newAddr);
> +
> + if (seal) {
> + ret = sys_mseal(newAddr, size);
> + assert(!ret);
> + }
> +
> + /* mremap to move to fixed address */
> + ret2 = mremap(ptr, size, size, MREMAP_MAYMOVE | MREMAP_FIXED, newAddr);
> + if (seal) {
> + assert(ret2 == MAP_FAILED);
> + assert(errno == EACCES);
> + } else
> + assert(ret2 == newAddr);
> +}
> +
> +static void test_seal_mremap_move_fixed_zero(bool seal)
> +{
> + LOG_TEST_ENTER();
> + void *ptr;
> + void *newAddr;
> + unsigned long page_size = getpagesize();
> + unsigned long size = 4 * page_size;
> + int ret;
> + void *ret2;
> +
> + setup_single_address(size, &ptr);
> +
> + if (seal) {
> + ret = sys_mseal(ptr, size);
> + assert(!ret);
> + }
> +
> + /*
> + * MREMAP_FIXED can move the mapping to zero address
> + */
> + ret2 = mremap(ptr, size, 2 * page_size, MREMAP_MAYMOVE | MREMAP_FIXED,
> + 0);
> + if (seal) {
> + assert(ret2 == MAP_FAILED);
> + assert(errno == EACCES);
> + } else {
> + assert(ret2 == 0);
> +
> + }
> +}
> +
> +static void test_seal_mremap_move_dontunmap(bool seal)
> +{
> + LOG_TEST_ENTER();
> + void *ptr;
> + void *newAddr;
> + unsigned long page_size = getpagesize();
> + unsigned long size = 4 * page_size;
> + int ret;
> + void *ret2;
> +
> + setup_single_address(size, &ptr);
> +
> + if (seal) {
> + ret = sys_mseal(ptr, size);
> + assert(!ret);
> + }
> +
> + /* mremap to move, and don't unmap src addr. */
> + ret2 = mremap(ptr, size, size, MREMAP_MAYMOVE | MREMAP_DONTUNMAP, 0);
> + if (seal) {
> + assert(ret2 == MAP_FAILED);
> + assert(errno == EACCES);
> + } else {
> + assert(ret2 != MAP_FAILED);
> +
> + }
> +}
> +
> +static void test_seal_mremap_move_dontunmap_anyaddr(bool seal)
> +{
> + LOG_TEST_ENTER();
> + void *ptr;
> + void *newAddr;
> + unsigned long page_size = getpagesize();
> + unsigned long size = 4 * page_size;
> + int ret;
> + void *ret2;
> +
> + setup_single_address(size, &ptr);
> +
> + if (seal) {
> + ret = sys_mseal(ptr, size);
> + assert(!ret);
> + }
> +
> + /*
> +	 * The 0xdeaddead should have no effect on the dest addr
> + * when MREMAP_DONTUNMAP is set.
> + */
> + ret2 = mremap(ptr, size, size, MREMAP_MAYMOVE | MREMAP_DONTUNMAP,
> + 0xdeaddead);
> + if (seal) {
> + assert(ret2 == MAP_FAILED);
> + assert(errno == EACCES);
> + } else {
> + assert(ret2 != MAP_FAILED);
> + assert((long)ret2 != 0xdeaddead);
> +
> + }
> +}
> +
> +static void test_seal_mmap_seal(void)
> +{
> + LOG_TEST_ENTER();
> + void *ptr;
> + unsigned long page_size = getpagesize();
> + unsigned long size = 4 * page_size;
> + int ret;
> + void *ret2;
> +
> + ptr = sys_mmap(NULL, size, PROT_READ | PROT_SEAL, MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
> + assert(ptr != (void *)-1);
> +
> + ret = sys_munmap(ptr, size);
> + assert(ret < 0);
> +
> + ret = sys_mprotect(ptr, size, PROT_READ | PROT_WRITE);
> + assert(ret < 0);
> +}
> +
> +static void test_seal_merge_and_split(void)
> +{
> + LOG_TEST_ENTER();
> + void *ptr;
> + unsigned long page_size = getpagesize();
> + unsigned long size;
> + int ret;
> + void *ret2;
> +
> + /* (24 RO) */
> + setup_single_address(24 * page_size, &ptr);
> +
> +	/* use mprotect(PROT_NONE) to set the outer boundaries */
> + /* (1 NONE) (22 RO) (1 NONE) */
> + ret = sys_mprotect(ptr, page_size, PROT_NONE);
> + assert(!ret);
> + ret = sys_mprotect(ptr + 23 * page_size, page_size, PROT_NONE);
> + assert(!ret);
> + size = get_vma_size(ptr + page_size);
> + assert(size == 22 * page_size);
> +
> + /* use mseal to split from beginning */
> + /* (1 NONE) (1 RO_SEAL) (21 RO) (1 NONE) */
> + ret = sys_mseal(ptr + page_size, page_size);
> + assert(!ret);
> + size = get_vma_size(ptr + page_size);
> + assert(size == page_size);
> + size = get_vma_size(ptr + 2 * page_size);
> + assert(size == 21 * page_size);
> +
> + /* use mseal to split from the end. */
> + /* (1 NONE) (1 RO_SEAL) (20 RO) (1 RO_SEAL) (1 NONE) */
> + ret = sys_mseal(ptr + 22 * page_size, page_size);
> + assert(!ret);
> + size = get_vma_size(ptr + 22 * page_size);
> + assert(size == page_size);
> + size = get_vma_size(ptr + 2 * page_size);
> + assert(size == 20 * page_size);
> +
> + /* merge with prev. */
> + /* (1 NONE) (2 RO_SEAL) (19 RO) (1 RO_SEAL) (1 NONE) */
> + ret = sys_mseal(ptr + 2 * page_size, page_size);
> + assert(!ret);
> + size = get_vma_size(ptr + page_size);
> + assert(size == 2 * page_size);
> +
> + /* merge with after. */
> + /* (1 NONE) (2 RO_SEAL) (18 RO) (2 RO_SEALS) (1 NONE) */
> + ret = sys_mseal(ptr + 21 * page_size, page_size);
> + assert(!ret);
> + size = get_vma_size(ptr + 21 * page_size);
> + assert(size == 2 * page_size);
> +
> + /* split and merge from prev */
> + /* (1 NONE) (3 RO_SEAL) (17 RO) (2 RO_SEALS) (1 NONE) */
> + ret = sys_mseal(ptr + 2 * page_size, 2 * page_size);
> + assert(!ret);
> + size = get_vma_size(ptr + 1 * page_size);
> + assert(size == 3 * page_size);
> + ret = sys_munmap(ptr + page_size, page_size);
> + assert(ret < 0);
> + ret = sys_mprotect(ptr + 2 * page_size, page_size, PROT_NONE);
> + assert(ret < 0);
> +
> + /* split and merge from next */
> + /* (1 NONE) (3 RO_SEAL) (16 RO) (3 RO_SEALS) (1 NONE) */
> + ret = sys_mseal(ptr + 20 * page_size, 2 * page_size);
> + assert(!ret);
> + size = get_vma_size(ptr + 20 * page_size);
> + assert(size == 3 * page_size);
> +
> + /* merge from middle of prev and middle of next. */
> + /* (1 NONE) (22 RO_SEAL) (1 NONE) */
> + ret = sys_mseal(ptr + 2 * page_size, 20 * page_size);
> + assert(!ret);
> + size = get_vma_size(ptr + page_size);
> + assert(size == 22 * page_size);
> +}
> +
> +static void test_seal_mmap_merge(void)
> +{
> + LOG_TEST_ENTER();
> +
> + void *ptr, *ptr2;
> + unsigned long page_size = getpagesize();
> + unsigned long size;
> + int ret;
> + void *ret2;
> +
> + /* (24 RO) */
> + setup_single_address(24 * page_size, &ptr);
> +
> +	/* use mprotect(PROT_NONE) to set the outer boundaries */
> + /* (1 NONE) (22 RO) (1 NONE) */
> + ret = sys_mprotect(ptr, page_size, PROT_NONE);
> + assert(!ret);
> + ret = sys_mprotect(ptr + 23 * page_size, page_size, PROT_NONE);
> + assert(!ret);
> + size = get_vma_size(ptr + page_size);
> + assert(size == 22 * page_size);
> +
> + /* use munmap to free 2 segment of memory. */
> + /* (1 NONE) (1 free) (20 RO) (1 free) (1 NONE) */
> + ret = sys_munmap(ptr + page_size, page_size);
> + assert(!ret);
> +
> + ret = sys_munmap(ptr + 22 * page_size, page_size);
> + assert(!ret);
> +
> + /* apply seal to the middle */
> + /* (1 NONE) (1 free) (20 RO_SEAL) (1 free) (1 NONE) */
> + ret = sys_mseal(ptr + 2 * page_size, 20 * page_size);
> + assert(!ret);
> + size = get_vma_size(ptr + 2 * page_size);
> + assert(size == 20 * page_size);
> +
> +	/* allocate a mapping at the beginning, and make sure it merges. */
> + /* (1 NONE) (21 RO_SEAL) (1 free) (1 NONE) */
> + ptr2 = sys_mmap(ptr + page_size, page_size, PROT_READ | PROT_SEAL,
> + MAP_FIXED | MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
> +	assert(ptr2 != (void *)-1);
> + size = get_vma_size(ptr + page_size);
> + assert(size == 21 * page_size);
> +
> +	/* allocate a mapping at the end, and make sure it merges. */
> + /* (1 NONE) (22 RO_SEAL) (1 NONE) */
> + ptr2 = sys_mmap(ptr + 22 * page_size, page_size, PROT_READ | PROT_SEAL,
> + MAP_FIXED | MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
> +	assert(ptr2 != (void *)-1);
> + size = get_vma_size(ptr + page_size);
> + assert(size == 22 * page_size);
> +}
> +
> +static void test_not_sealable(void)
> +{
> + LOG_TEST_ENTER();
> + int ret;
> + void *ptr;
> + unsigned long page_size = getpagesize();
> + unsigned long size = 4 * page_size;
> +
> + ptr = sys_mmap(NULL, size, PROT_READ, MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
> + assert(ptr != (void *)-1);
> +
> + ret = sys_mseal(ptr, size);
> + assert(ret < 0);
> +}
> +
> +static void test_mmap_fixed_change_to_sealable(void)
> +{
> + LOG_TEST_ENTER();
> + int ret;
> + void *ptr, *ptr2;
> + unsigned long page_size = getpagesize();
> + unsigned long size = 4 * page_size;
> +
> + ptr = sys_mmap(NULL, size, PROT_READ, MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
> + assert(ptr != (void *)-1);
> +
> + ret = sys_mseal(ptr, size);
> + assert(ret < 0);
> +
> + ptr2 = sys_mmap(ptr, size, PROT_READ,
> + MAP_FIXED | MAP_ANONYMOUS | MAP_PRIVATE | MAP_SEALABLE, -1, 0);
> + assert(ptr2 == ptr);
> +
> + ret = sys_mseal(ptr, size);
> + assert(!ret);
> +}
> +
> +static void test_mmap_fixed_change_to_not_sealable(void)
> +{
> + LOG_TEST_ENTER();
> + int ret;
> + void *ptr, *ptr2;
> + unsigned long page_size = getpagesize();
> + unsigned long size = 4 * page_size;
> +
> + ptr = sys_mmap(NULL, size, PROT_READ, MAP_ANONYMOUS | MAP_PRIVATE | MAP_SEALABLE, -1, 0);
> + assert(ptr != (void *)-1);
> +
> + ptr2 = sys_mmap(ptr, size, PROT_READ,
> + MAP_FIXED | MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
> + assert(ptr2 == ptr);
> +
> + ret = sys_mseal(ptr, size);
> + assert(ret < 0);
> +}
> +
> +static void test_merge_sealable(void)
> +{
> + LOG_TEST_ENTER();
> + int ret;
> + void *ptr, *ptr2;
> + unsigned long page_size = getpagesize();
> + unsigned long size;
> +
> + /* (24 RO) */
> + setup_single_address(24 * page_size, &ptr);
> +
> +	/* use mprotect(PROT_NONE) to set the outer boundaries */
> + /* (1 NONE) (22 RO) (1 NONE) */
> + ret = sys_mprotect(ptr, page_size, PROT_NONE);
> + assert(!ret);
> + ret = sys_mprotect(ptr + 23 * page_size, page_size, PROT_NONE);
> + assert(!ret);
> + size = get_vma_size(ptr + page_size);
> + assert(size == 22 * page_size);
> +
> + /* (1 NONE) (RO) (4 free) (17 RO) (1 NONE) */
> + ret = sys_munmap(ptr + 2 * page_size, 4 * page_size);
> + assert(!ret);
> + size = get_vma_size(ptr + page_size);
> + assert(size == 1 * page_size);
> + size = get_vma_size(ptr + 6 * page_size);
> + assert(size == 17 * page_size);
> +
> + /* (1 NONE) (RO) (1 free) (2 RO) (1 free) (17 RO) (1 NONE) */
> + ptr2 = sys_mmap(ptr + 3 * page_size, 2 * page_size, PROT_READ,
> + MAP_FIXED | MAP_ANONYMOUS | MAP_PRIVATE | MAP_SEALABLE, -1, 0);
> + size = get_vma_size(ptr + 3 * page_size);
> + assert(size == 2 * page_size);
> +
> + /* (1 NONE) (RO) (1 free) (20 RO) (1 NONE) */
> + ptr2 = sys_mmap(ptr + 5 * page_size, 1 * page_size, PROT_READ,
> + MAP_FIXED | MAP_ANONYMOUS | MAP_PRIVATE | MAP_SEALABLE, -1, 0);
> + assert(ptr2 != (void *)-1);
> + size = get_vma_size(ptr + 3 * page_size);
> + assert(size == 20 * page_size);
> +
> + /* (1 NONE) (RO) (1 free) (19 RO) (1 RO_SB) (1 NONE) */
> + ret = sys_mseal(ptr + 22 * page_size, page_size);
> + assert(!ret);
> +
> + /* (1 NONE) (RO) (not sealable) (19 RO) (1 RO_SB) (1 NONE) */
> + ptr2 = sys_mmap(ptr + 2 * page_size, page_size, PROT_READ,
> + MAP_FIXED | MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
> + assert(ptr2 != (void *)-1);
> + size = get_vma_size(ptr + page_size);
> + assert(size == page_size);
> + size = get_vma_size(ptr + 2 * page_size);
> + assert(size == page_size);
> +}
> +
> +static void test_seal_discard_ro_anon_on_rw(bool seal)
> +{
> + LOG_TEST_ENTER();
> + void *ptr;
> + unsigned long page_size = getpagesize();
> + unsigned long size = 4 * page_size;
> + int ret;
> +
> + setup_single_address_rw_sealable(size, &ptr, seal);
> + assert(ptr != (void *)-1);
> +
> + if (seal) {
> + ret = sys_mseal(ptr, size);
> + assert(!ret);
> + }
> +
> + /* sealing doesn't take effect on RW memory. */
> + ret = sys_madvise(ptr, size, MADV_DONTNEED);
> + assert(!ret);
> +
> +	/* the base seal still applies. */
> + ret = sys_munmap(ptr, size);
> + if (seal)
> + assert(ret < 0);
> + else
> + assert(!ret);
> +}
> +
> +static void test_seal_discard_ro_anon_on_pkey(bool seal)
> +{
> + LOG_TEST_ENTER();
> + void *ptr;
> + unsigned long page_size = getpagesize();
> + unsigned long size = 4 * page_size;
> + int ret;
> + int pkey;
> +
> + setup_single_address_rw_sealable(size, &ptr, seal);
> + assert(ptr != (void *)-1);
> +
> + pkey = sys_pkey_alloc(0, 0);
> + assert(pkey > 0);
> +
> + ret = sys_mprotect_pkey((void *)ptr, size, PROT_READ | PROT_WRITE, pkey);
> + assert(!ret);
> +
> + if (seal) {
> + ret = sys_mseal(ptr, size);
> + assert(!ret);
> + }
> +
> +	/* sealing doesn't take effect if PKRU allows write. */
> + set_pkey(pkey, 0);
> + ret = sys_madvise(ptr, size, MADV_DONTNEED);
> + assert(!ret);
> +
> +	/* sealing will take effect if PKRU denies write. */
> + set_pkey(pkey, PKEY_DISABLE_WRITE);
> + ret = sys_madvise(ptr, size, MADV_DONTNEED);
> + if (seal)
> + assert(ret < 0);
> + else
> + assert(!ret);
> +
> +	/* the base seal still applies. */
> + ret = sys_munmap(ptr, size);
> + if (seal)
> + assert(ret < 0);
> + else
> + assert(!ret);
> +}
> +
> +static void test_seal_discard_ro_anon_on_filebacked(bool seal)
> +{
> + LOG_TEST_ENTER();
> + void *ptr;
> + unsigned long page_size = getpagesize();
> + unsigned long size = 4 * page_size;
> + int ret;
> + int fd;
> + unsigned long mapflags = MAP_PRIVATE;
> +
> + if (seal)
> + mapflags |= MAP_SEALABLE;
> +
> + fd = memfd_create("test", 0);
> + assert(fd > 0);
> +
> + ret = fallocate(fd, 0, 0, size);
> + assert(!ret);
> +
> + ptr = sys_mmap(NULL, size, PROT_READ, mapflags, fd, 0);
> + assert(ptr != MAP_FAILED);
> +
> + if (seal) {
> + ret = sys_mseal(ptr, size);
> + assert(!ret);
> + }
> +
> +	/* sealing doesn't apply to a file-backed mapping. */
> + ret = sys_madvise(ptr, size, MADV_DONTNEED);
> + assert(!ret);
> +
> + ret = sys_munmap(ptr, size);
> + if (seal)
> + assert(ret < 0);
> + else
> + assert(!ret);
> + close(fd);
> +}
> +
> +static void test_seal_discard_ro_anon_on_shared(bool seal)
> +{
> + LOG_TEST_ENTER();
> + void *ptr;
> + unsigned long page_size = getpagesize();
> + unsigned long size = 4 * page_size;
> + int ret;
> + unsigned long mapflags = MAP_ANONYMOUS | MAP_SHARED;
> +
> + if (seal)
> + mapflags |= MAP_SEALABLE;
> +
> + ptr = sys_mmap(NULL, size, PROT_READ, mapflags, -1, 0);
> + assert(ptr != (void *)-1);
> +
> + if (seal) {
> + ret = sys_mseal(ptr, size);
> + assert(!ret);
> + }
> +
> +	/* sealing doesn't apply to a shared mapping. */
> + ret = sys_madvise(ptr, size, MADV_DONTNEED);
> + assert(!ret);
> +
> + ret = sys_munmap(ptr, size);
> + if (seal)
> + assert(ret < 0);
> + else
> + assert(!ret);
> +}
> +
> +static void test_seal_discard_ro_anon_invalid_shared(bool seal)
> +{
> + LOG_TEST_ENTER();
> + void *ptr;
> + unsigned long page_size = getpagesize();
> + unsigned long size = 4 * page_size;
> + int ret;
> + int fd;
> +
> + fd = open("/proc/self/maps", O_RDONLY);
> + ptr = sys_mmap(NULL, size, PROT_READ, MAP_ANONYMOUS | MAP_PRIVATE, fd, 0);
> + assert(ptr != (void *)-1);
> +
> + if (seal) {
> + ret = sys_mseal(ptr, size);
> + assert(!ret);
> + }
> +
> + ret = sys_madvise(ptr, size, MADV_DONTNEED);
> + assert(!ret);
> +
> +	ret = sys_munmap(ptr, size);
> +	if (seal)
> +		assert(ret < 0);
> +	else
> +		assert(!ret);
> + close(fd);
> +}
> +
> +static void test_seal_discard_ro_anon(bool seal)
> +{
> + LOG_TEST_ENTER();
> + void *ptr;
> + unsigned long page_size = getpagesize();
> + unsigned long size = 4 * page_size;
> + int ret;
> +
> + setup_single_address(size, &ptr);
> +
> + if (seal)
> + seal_single_address(ptr, size);
> +
> + ret = sys_madvise(ptr, size, MADV_DONTNEED);
> + if (seal)
> + assert(ret < 0);
> + else
> + assert(!ret);
> +
> + ret = sys_munmap(ptr, size);
> + if (seal)
> + assert(ret < 0);
> + else
> + assert(!ret);
> +}
> +
> +static void test_mmap_seal_discard_ro_anon(void)
> +{
> + LOG_TEST_ENTER();
> + void *ptr;
> + unsigned long page_size = getpagesize();
> + unsigned long size = 4 * page_size;
> + int ret;
> +
> + ptr = sys_mmap(NULL, size, PROT_READ | PROT_WRITE | PROT_SEAL,
> + MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
> + assert(ptr != (void *)-1);
> +
> + ret = sys_mprotect(ptr, size, PROT_READ);
> + assert(!ret);
> +
> + ret = sys_madvise(ptr, size, MADV_DONTNEED);
> + assert(ret < 0);
> +
> + ret = sys_munmap(ptr, size);
> + assert(ret < 0);
> +}
> +
> +bool seal_support(void)
> +{
> + int ret;
> + void *ptr;
> + unsigned long page_size = getpagesize();
> +
> + ptr = sys_mmap(NULL, page_size, PROT_READ | PROT_SEAL, MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
> + if (ptr == (void *) -1)
> + return false;
> +
> + ret = sys_mseal(ptr, page_size);
> + if (ret < 0)
> + return false;
> +
> + return true;
> +}
> +
> +bool pkey_supported(void)
> +{
> + int pkey = sys_pkey_alloc(0, 0);
> +
> + if (pkey > 0)
> + return true;
> + return false;
> +}
> +
> +int main(int argc, char **argv)
> +{
ksft_print_header();
if (!seal_support())
	return ksft_exit_skip("mseal() not supported\n");
ksft_set_plan(10);
> + bool test_seal = seal_support();
> +
> + if (!test_seal) {
> + ksft_print_msg("sealing not supported (check CONFIG_64BIT)\n");
> + return 0;
> + }
> +
> + test_seal_addseal();
> +
> + test_seal_unmapped_start();
> + test_seal_unmapped_middle();
> + test_seal_unmapped_end();
> + test_seal_multiple_vmas();
> + test_seal_split_start();
> + test_seal_split_end();
> + test_seal_invalid_input();
> + test_seal_zero_length();
> + test_seal_twice();
> +
> + test_seal_mprotect(false);
> + test_seal_mprotect(true);
> +
> + test_seal_start_mprotect(false);
> + test_seal_start_mprotect(true);
> +
> + test_seal_end_mprotect(false);
> + test_seal_end_mprotect(true);
> +
> + test_seal_mprotect_unalign_len(false);
> + test_seal_mprotect_unalign_len(true);
> +
> + test_seal_mprotect_unalign_len_variant_2(false);
> + test_seal_mprotect_unalign_len_variant_2(true);
> +
> + test_seal_mprotect_two_vma(false);
> + test_seal_mprotect_two_vma(true);
> +
> + test_seal_mprotect_two_vma_with_split(false);
> + test_seal_mprotect_two_vma_with_split(true);
> +
> + test_seal_mprotect_partial_mprotect(false);
> + test_seal_mprotect_partial_mprotect(true);
> +
> + test_seal_mprotect_two_vma_with_gap(false);
> + test_seal_mprotect_two_vma_with_gap(true);
> +
> + test_seal_mprotect_merge(false);
> + test_seal_mprotect_merge(true);
> +
> + test_seal_mprotect_split(false);
> + test_seal_mprotect_split(true);
> +
> + test_seal_munmap(false);
> + test_seal_munmap(true);
> + test_seal_munmap_two_vma(false);
> + test_seal_munmap_two_vma(true);
> + test_seal_munmap_vma_with_gap(false);
> + test_seal_munmap_vma_with_gap(true);
> +
> + test_munmap_start_freed(false);
> + test_munmap_start_freed(true);
> + test_munmap_middle_freed(false);
> + test_munmap_middle_freed(true);
> + test_munmap_end_freed(false);
> + test_munmap_end_freed(true);
> +
> + test_seal_mremap_shrink(false);
> + test_seal_mremap_shrink(true);
> + test_seal_mremap_expand(false);
> + test_seal_mremap_expand(true);
> + test_seal_mremap_move(false);
> + test_seal_mremap_move(true);
> +
> + test_seal_mremap_shrink_fixed(false);
> + test_seal_mremap_shrink_fixed(true);
> + test_seal_mremap_expand_fixed(false);
> + test_seal_mremap_expand_fixed(true);
> + test_seal_mremap_move_fixed(false);
> + test_seal_mremap_move_fixed(true);
> + test_seal_mremap_move_dontunmap(false);
> + test_seal_mremap_move_dontunmap(true);
> + test_seal_mremap_move_fixed_zero(false);
> + test_seal_mremap_move_fixed_zero(true);
> + test_seal_mremap_move_dontunmap_anyaddr(false);
> + test_seal_mremap_move_dontunmap_anyaddr(true);
> + test_seal_discard_ro_anon(false);
> + test_seal_discard_ro_anon(true);
> + test_seal_discard_ro_anon_on_rw(false);
> + test_seal_discard_ro_anon_on_rw(true);
> + test_seal_discard_ro_anon_on_shared(false);
> + test_seal_discard_ro_anon_on_shared(true);
> + test_seal_discard_ro_anon_on_filebacked(false);
> + test_seal_discard_ro_anon_on_filebacked(true);
> + test_seal_mmap_overwrite_prot(false);
> + test_seal_mmap_overwrite_prot(true);
> + test_seal_mmap_expand(false);
> + test_seal_mmap_expand(true);
> + test_seal_mmap_shrink(false);
> + test_seal_mmap_shrink(true);
> +
> + test_seal_mmap_seal();
> + test_seal_merge_and_split();
> + test_seal_mmap_merge();
> +
> + test_not_sealable();
> + test_merge_sealable();
> + test_mmap_fixed_change_to_sealable();
> + test_mmap_fixed_change_to_not_sealable();
> +
> + if (pkey_supported()) {
> + test_seal_discard_ro_anon_on_pkey(false);
> + test_seal_discard_ro_anon_on_pkey(true);
> + } else
> + ksft_print_msg("PKEY not supported, skip pkey related test\n");
> +
> + ksft_print_msg("Done\n");
> + return 0;
Rather use ksft_finished() here, which prints the summary and exits with
pass/fail based on the plan.
> +}
--
BR,
Muhammad Usama Anjum
On Mon, Jan 8, 2024 at 11:17 PM Muhammad Usama Anjum
<[email protected]> wrote:
>
> You have used ksft_print_msg() only. Can you please update the test to use
> more TAP macros for increased readability and better test output? We
> don't want to terminate the test if one sub-test fails because of assert. We
> want the sub-tests to get executed independently of the other tests.
>
> ksft_test_result(condition, fmt, ...);
> ksft_test_result_pass(fmt, ...);
>
> See some examples below:
>
> On 1/4/24 11:51 PM, [email protected] wrote:
> > From: Jeff Xu <[email protected]>
> >
> > selftest for memory sealing change in mmap() and mseal().
> >
> > Signed-off-by: Jeff Xu <[email protected]>
> > ---
> > tools/testing/selftests/mm/.gitignore | 1 +
> > tools/testing/selftests/mm/Makefile | 1 +
> > tools/testing/selftests/mm/mseal_test.c | 1971 +++++++++++++++++++++++
> > 3 files changed, 1973 insertions(+)
> > create mode 100644 tools/testing/selftests/mm/mseal_test.c
> >
> > diff --git a/tools/testing/selftests/mm/.gitignore b/tools/testing/selftests/mm/.gitignore
> > index 4ff10ea61461..76474c51c786 100644
> > --- a/tools/testing/selftests/mm/.gitignore
> > +++ b/tools/testing/selftests/mm/.gitignore
> > @@ -46,3 +46,4 @@ gup_longterm
> > mkdirty
> > va_high_addr_switch
> > hugetlb_fault_after_madv
> > +mseal_test
> > diff --git a/tools/testing/selftests/mm/Makefile b/tools/testing/selftests/mm/Makefile
> > index dede0bcf97a3..652c07ff81f5 100644
> > --- a/tools/testing/selftests/mm/Makefile
> > +++ b/tools/testing/selftests/mm/Makefile
> > @@ -59,6 +59,7 @@ TEST_GEN_FILES += mlock2-tests
> > TEST_GEN_FILES += mrelease_test
> > TEST_GEN_FILES += mremap_dontunmap
> > TEST_GEN_FILES += mremap_test
> > +TEST_GEN_FILES += mseal_test
> > TEST_GEN_FILES += on-fault-limit
> > TEST_GEN_FILES += pagemap_ioctl
> > TEST_GEN_FILES += thuge-gen
> > diff --git a/tools/testing/selftests/mm/mseal_test.c b/tools/testing/selftests/mm/mseal_test.c
> > new file mode 100644
> > index 000000000000..a916d30e97c2
> > --- /dev/null
> > +++ b/tools/testing/selftests/mm/mseal_test.c
> > @@ -0,0 +1,1971 @@
> > +// SPDX-License-Identifier: GPL-2.0
> > +#define _GNU_SOURCE
> > +#include <sys/mman.h>
> > +#include <stdint.h>
> > +#include <unistd.h>
> > +#include <string.h>
> > +#include <sys/time.h>
> > +#include <sys/resource.h>
> > +#include <stdbool.h>
> > +#include "../kselftest.h"
> > +#include <syscall.h>
> > +#include <errno.h>
> > +#include <stdio.h>
> > +#include <stdlib.h>
> > +#include <assert.h>
> > +#include <fcntl.h>
> > +#include <assert.h>
> > +#include <sys/ioctl.h>
> > +#include <sys/vfs.h>
> > +#include <sys/stat.h>
> > +
> > +/*
> > + * need those definition for manually build using gcc.
> > + * gcc -I ../../../../usr/include -DDEBUG -O3 -DDEBUG -O3 mseal_test.c -o mseal_test
> > + */
> > +#ifndef MAP_SEALABLE
> > +#define MAP_SEALABLE 0x8000000
> > +#endif
> > +
> > +#ifndef PROT_SEAL
> > +#define PROT_SEAL 0x04000000
> > +#endif
> > +
> > +#ifndef PKEY_DISABLE_ACCESS
> > +# define PKEY_DISABLE_ACCESS 0x1
> > +#endif
> > +
> > +#ifndef PKEY_DISABLE_WRITE
> > +# define PKEY_DISABLE_WRITE 0x2
> > +#endif
> > +
> > +#ifndef PKEY_BITS_PER_KEY
> > +#define PKEY_BITS_PER_PKEY 2
> > +#endif
> > +
> > +#ifndef PKEY_MASK
> > +#define PKEY_MASK (PKEY_DISABLE_ACCESS | PKEY_DISABLE_WRITE)
> > +#endif
> > +
> > +#ifndef DEBUG
> > +#define LOG_TEST_ENTER() {}
> > +#else
> > +#define LOG_TEST_ENTER() {ksft_print_msg("%s\n", __func__); }
> > +#endif
> > +
> > +#ifndef u64
> > +#define u64 unsigned long long
> > +#endif
> > +
> > +static unsigned long get_vma_size(void *addr)
> > +{
> > + FILE *maps;
> > + char line[256];
> > + int size = 0;
> > + uintptr_t addr_start, addr_end;
> > +
> > + maps = fopen("/proc/self/maps", "r");
> > + if (!maps)
> > + return 0;
> > +
> > + while (fgets(line, sizeof(line), maps)) {
> > + if (sscanf(line, "%lx-%lx", &addr_start, &addr_end) == 2) {
> > + if (addr_start == (uintptr_t) addr) {
> > + size = addr_end - addr_start;
> > + break;
> > + }
> > + }
> > + }
> > + fclose(maps);
> > + return size;
> > +}
> > +
> > +/*
> > + * define sys_xyx to call syscall directly.
> > + */
> > +static int sys_mseal(void *start, size_t len)
> > +{
> > + int sret;
> > +
> > + errno = 0;
> > + sret = syscall(__NR_mseal, start, len, 0);
> > + return sret;
> > +}
> > +
> > +static int sys_mprotect(void *ptr, size_t size, unsigned long prot)
> > +{
> > + int sret;
> > +
> > + errno = 0;
> > + sret = syscall(SYS_mprotect, ptr, size, prot);
> > + return sret;
> > +}
> > +
> > +static int sys_mprotect_pkey(void *ptr, size_t size, unsigned long orig_prot,
> > + unsigned long pkey)
> > +{
> > + int sret;
> > +
> > + errno = 0;
> > + sret = syscall(__NR_pkey_mprotect, ptr, size, orig_prot, pkey);
> > + return sret;
> > +}
> > +
> > +/*
> > + * define sys_xyx to call syscall directly.
> > + */
> > +static void *sys_mmap(void *addr, unsigned long len, unsigned long prot,
> > + unsigned long flags, unsigned long fd, unsigned long offset)
> > +{
> > + void *sret;
> > +
> > + errno = 0;
> > + sret = (void *) syscall(__NR_mmap, addr, len, prot,
> > + flags, fd, offset);
> > + return sret;
> > +}
> > +
> > +static int sys_munmap(void *ptr, size_t size)
> > +{
> > + int sret;
> > +
> > + errno = 0;
> > + sret = syscall(SYS_munmap, ptr, size);
> > + return sret;
> > +}
> > +
> > +static int sys_madvise(void *start, size_t len, int types)
> > +{
> > + int sret;
> > +
> > + errno = 0;
> > + sret = syscall(__NR_madvise, start, len, types);
> > + return sret;
> > +}
> > +
> > +static int sys_pkey_alloc(unsigned long flags, unsigned long init_val)
> > +{
> > + int ret = syscall(SYS_pkey_alloc, flags, init_val);
> > +
> > + return ret;
> > +}
> > +
> > +static unsigned int __read_pkey_reg(void)
> > +{
> > + unsigned int eax, edx;
> > + unsigned int ecx = 0;
> > + unsigned int pkey_reg;
> > +
> > + asm volatile(".byte 0x0f,0x01,0xee\n\t"
> > + : "=a" (eax), "=d" (edx)
> > + : "c" (ecx));
> > + pkey_reg = eax;
> > + return pkey_reg;
> > +}
> > +
> > +static void __write_pkey_reg(u64 pkey_reg)
> > +{
> > + unsigned int eax = pkey_reg;
> > + unsigned int ecx = 0;
> > + unsigned int edx = 0;
> > +
> > + asm volatile(".byte 0x0f,0x01,0xef\n\t"
> > + : : "a" (eax), "c" (ecx), "d" (edx));
> > + assert(pkey_reg == __read_pkey_reg());
> > +}
> > +
> > +static unsigned long pkey_bit_position(int pkey)
> > +{
> > + return pkey * PKEY_BITS_PER_PKEY;
> > +}
> > +
> > +static u64 set_pkey_bits(u64 reg, int pkey, u64 flags)
> > +{
> > + unsigned long shift = pkey_bit_position(pkey);
> > +
> > + /* mask out bits from pkey in old value */
> > + reg &= ~((u64)PKEY_MASK << shift);
> > + /* OR in new bits for pkey */
> > + reg |= (flags & PKEY_MASK) << shift;
> > + return reg;
> > +}
> > +
> > +static void set_pkey(int pkey, unsigned long pkey_value)
> > +{
> > + unsigned long mask = (PKEY_DISABLE_ACCESS | PKEY_DISABLE_WRITE);
> > + u64 new_pkey_reg;
> > +
> > + assert(!(pkey_value & ~mask));
> > + new_pkey_reg = set_pkey_bits(__read_pkey_reg(), pkey, pkey_value);
> > + __write_pkey_reg(new_pkey_reg);
> > +}
> > +
> > +static void setup_single_address(int size, void **ptrOut)
> > +{
> > + void *ptr;
> > +
> > + ptr = sys_mmap(NULL, size, PROT_READ, MAP_ANONYMOUS | MAP_PRIVATE | MAP_SEALABLE, -1, 0);
> > + assert(ptr != (void *)-1);
> ksft_exit_fail_msg() here as this this fetal error and test should abort.
>
> > + *ptrOut = ptr;
> > +}
> > +
> > +static void setup_single_address_sealable(int size, void **ptrOut, bool sealable)
> > +{
> > + void *ptr;
> > + unsigned long mapflags = MAP_ANONYMOUS | MAP_PRIVATE;
> > +
> > + if (sealable)
> > + mapflags |= MAP_SEALABLE;
> > +
> > + ptr = sys_mmap(NULL, size, PROT_READ, mapflags, -1, 0);
> > + assert(ptr != (void *)-1);
> > + *ptrOut = ptr;
> > +}
> > +
> > +static void setup_single_address_rw_sealable(int size, void **ptrOut, bool sealable)
> > +{
> > + void *ptr;
> > + unsigned long mapflags = MAP_ANONYMOUS | MAP_PRIVATE;
> > +
> > + if (sealable)
> > + mapflags |= MAP_SEALABLE;
> > +
> > + ptr = sys_mmap(NULL, size, PROT_READ | PROT_WRITE, mapflags, -1, 0);
> > + assert(ptr != (void *)-1);
> > + *ptrOut = ptr;
> > +}
> > +
> > +static void clean_single_address(void *ptr, int size)
> > +{
> > + int ret;
> > +
> > + ret = munmap(ptr, size);
> > + assert(!ret);
> > +}
> > +
> > +static void seal_single_address(void *ptr, int size)
> > +{
> > + int ret;
> > +
> > + ret = sys_mseal(ptr, size);
> > + assert(!ret);
> > +}
> > +
> > +static void test_seal_addseal(void)
> > +{
> > + LOG_TEST_ENTER();
> Move this to after variable declarations. Not sure why you aren't getting
> warning already that you are writing code before variable declarations.
>
> > + int ret;
> > + void *ptr;
> > + unsigned long page_size = getpagesize();
> > + unsigned long size = 4 * page_size;
> > +
> > + setup_single_address(size, &ptr);
> > +
> > + ret = sys_mseal(ptr, size);
> ksft_test_result(!ret, "mseal syscall\n");
>
> > + assert(!ret);
> > +}
> > +
> > +static void test_seal_unmapped_start(void)
> > +{
> > + LOG_TEST_ENTER();
> > + int ret;
> > + void *ptr;
> > + unsigned long page_size = getpagesize();
> > + unsigned long size = 4 * page_size;
> > +
> > + setup_single_address(size, &ptr);
> > +
> > + /* munmap 2 pages from ptr. */
> > + ret = sys_munmap(ptr, 2 * page_size);
> > + assert(!ret);
> > +
> > + /* mprotect will fail because 2 pages from ptr are unmapped. */
> > + ret = sys_mprotect(ptr, size, PROT_READ | PROT_WRITE);
> > + assert(ret < 0);
> > +
> > + /* mseal will fail because 2 pages from ptr are unmapped. */
> > + ret = sys_mseal(ptr, size);
> > + assert(ret < 0);
> > +
> > + ret = sys_mseal(ptr + 2 * page_size, 2 * page_size);
> > + assert(!ret);
> > +}
> > +
> > +static void test_seal_unmapped_middle(void)
> > +{
> > + LOG_TEST_ENTER();
> > + int ret;
> > + void *ptr;
> > + unsigned long page_size = getpagesize();
> > + unsigned long size = 4 * page_size;
> > +
> > + setup_single_address(size, &ptr);
> > +
> > + /* munmap 2 pages from ptr + page. */
> > + ret = sys_munmap(ptr + page_size, 2 * page_size);
> > + assert(!ret);
> > +
> > + /* mprotect will fail, since size is 4 pages. */
> > + ret = sys_mprotect(ptr, size, PROT_READ | PROT_WRITE);
> > + assert(ret < 0);
> > +
> > + /* mseal will fail as well. */
> > + ret = sys_mseal(ptr, size);
> > + assert(ret < 0);
> > +
> > + /* we still can add seal to the first page and last page*/
> > + ret = sys_mseal(ptr, page_size);
> > + assert(!ret);
> > +
> > + ret = sys_mseal(ptr + 3 * page_size, page_size);
> > + assert(!ret);
> > +}
> > +
> > +static void test_seal_unmapped_end(void)
> > +{
> > + LOG_TEST_ENTER();
> > + int ret;
> > + void *ptr;
> > + unsigned long page_size = getpagesize();
> > + unsigned long size = 4 * page_size;
> > +
> > + setup_single_address(size, &ptr);
> > +
> > + /* unmap last 2 pages. */
> > + ret = sys_munmap(ptr + 2 * page_size, 2 * page_size);
> > + assert(!ret);
> > +
> > + /* mprotect will fail since last 2 pages are unmapped. */
> > + ret = sys_mprotect(ptr, size, PROT_READ | PROT_WRITE);
> > + assert(ret < 0);
> > +
> > + /* mseal will fail as well. */
> > + ret = sys_mseal(ptr, size);
> > + assert(ret < 0);
> > +
> > + /* The first 2 pages is not sealed, and can add seals */
> > + ret = sys_mseal(ptr, 2 * page_size);
> > + assert(!ret);
> > +}
> > +
> > +static void test_seal_multiple_vmas(void)
> > +{
> > + LOG_TEST_ENTER();
> > + int ret;
> > + void *ptr;
> > + unsigned long page_size = getpagesize();
> > + unsigned long size = 4 * page_size;
> > +
> > + setup_single_address(size, &ptr);
> > +
> > + /* use mprotect to split the vma into 3. */
> > + ret = sys_mprotect(ptr + page_size, 2 * page_size,
> > + PROT_READ | PROT_WRITE);
> > + assert(!ret);
> > +
> > + /* mprotect will get applied to all 4 pages - 3 VMAs. */
> > + ret = sys_mprotect(ptr, size, PROT_READ);
> > + assert(!ret);
> > +
> > + /* use mprotect to split the vma into 3. */
> > + ret = sys_mprotect(ptr + page_size, 2 * page_size,
> > + PROT_READ | PROT_WRITE);
> > + assert(!ret);
> > +
> > + /* mseal get applied to all 4 pages - 3 VMAs. */
> > + ret = sys_mseal(ptr, size);
> > + assert(!ret);
> > +}
> > +
> > +static void test_seal_split_start(void)
> > +{
> > + LOG_TEST_ENTER();
> > + int ret;
> > + void *ptr;
> > + unsigned long page_size = getpagesize();
> > + unsigned long size = 4 * page_size;
> > +
> > + setup_single_address(size, &ptr);
> > +
> > + /* use mprotect to split at middle */
> > + ret = sys_mprotect(ptr, 2 * page_size, PROT_READ | PROT_WRITE);
> > + assert(!ret);
> > +
> > + /* seal the first page, this will split the VMA */
> > + ret = sys_mseal(ptr, page_size);
> > + assert(!ret);
> > +
> > + /* seal the remaining 3 pages */
> > + ret = sys_mseal(ptr + page_size, 3 * page_size);
> > + assert(!ret);
> > +}
> > +
> > +static void test_seal_split_end(void)
> > +{
> > + LOG_TEST_ENTER();
> > + int ret;
> > + void *ptr;
> > + unsigned long page_size = getpagesize();
> > + unsigned long size = 4 * page_size;
> > +
> > + setup_single_address(size, &ptr);
> > +
> > + /* use mprotect to split at middle */
> > + ret = sys_mprotect(ptr, 2 * page_size, PROT_READ | PROT_WRITE);
> > + assert(!ret);
> > +
> > + /* seal the last page */
> > + ret = sys_mseal(ptr + 3 * page_size, page_size);
> > + assert(!ret);
> > +
> > + /* Adding seals to the first 3 pages */
> > + ret = sys_mseal(ptr, 3 * page_size);
> > + assert(!ret);
> > +}
> > +
> > +static void test_seal_invalid_input(void)
> > +{
> > + LOG_TEST_ENTER();
> > + void *ptr;
> > + unsigned long page_size = getpagesize();
> > + unsigned long size = 4 * page_size;
> > + int ret;
> > +
> > + setup_single_address(8 * page_size, &ptr);
> > + clean_single_address(ptr + 4 * page_size, 4 * page_size);
> > +
> > + /* invalid flag */
> > + ret = syscall(__NR_mseal, ptr, size, 0x20);
> > + assert(ret < 0);
> > +
> > + /* unaligned address */
> > + ret = sys_mseal(ptr + 1, 2 * page_size);
> > + assert(ret < 0);
> > +
> > + /* length too big */
> > + ret = sys_mseal(ptr, 5 * page_size);
> > + assert(ret < 0);
> > +
> > + /* start is not in a valid VMA */
> > + ret = sys_mseal(ptr - page_size, 5 * page_size);
> > + assert(ret < 0);
> > +}
> > +
> > +static void test_seal_zero_length(void)
> > +{
> > + LOG_TEST_ENTER();
> > + void *ptr;
> > + unsigned long page_size = getpagesize();
> > + unsigned long size = 4 * page_size;
> > + int ret;
> > +
> > + setup_single_address(size, &ptr);
> > +
> > + ret = sys_mprotect(ptr, 0, PROT_READ | PROT_WRITE);
> > + assert(!ret);
> > +
> > + /* sealing a 0-length range is OK, same as mprotect */
> > + ret = sys_mseal(ptr, 0);
> > + assert(!ret);
> > +
> > + /* verify the 4 pages are not sealed by previous call. */
> > + ret = sys_mprotect(ptr, size, PROT_READ | PROT_WRITE);
> > + assert(!ret);
> > +}
> > +
> > +static void test_seal_twice(void)
> > +{
> > + LOG_TEST_ENTER();
> > + int ret;
> > + void *ptr;
> > + unsigned long page_size = getpagesize();
> > + unsigned long size = 4 * page_size;
> > +
> > + setup_single_address(size, &ptr);
> > +
> > + ret = sys_mseal(ptr, size);
> > + assert(!ret);
> > +
> > + /* applying the same seal again is OK: mseal is idempotent. */
> > + ret = sys_mseal(ptr, size);
> > + assert(!ret);
> > +}
> > +
> > +static void test_seal_mprotect(bool seal)
> > +{
> > + LOG_TEST_ENTER();
> > + void *ptr;
> > + unsigned long page_size = getpagesize();
> > + unsigned long size = 4 * page_size;
> > + int ret;
> > +
> > + setup_single_address(size, &ptr);
> > +
> > + if (seal)
> > + seal_single_address(ptr, size);
> > +
> > + ret = sys_mprotect(ptr, size, PROT_READ | PROT_WRITE);
> > + if (seal)
> > + assert(ret < 0);
> > + else
> > + assert(!ret);
> > +}
> > +
> > +static void test_seal_start_mprotect(bool seal)
> > +{
> > + LOG_TEST_ENTER();
> > + void *ptr;
> > + unsigned long page_size = getpagesize();
> > + unsigned long size = 4 * page_size;
> > + int ret;
> > +
> > + setup_single_address(size, &ptr);
> > +
> > + if (seal)
> > + seal_single_address(ptr, page_size);
> > +
> > + /* the first page is sealed. */
> > + ret = sys_mprotect(ptr, page_size, PROT_READ | PROT_WRITE);
> > + if (seal)
> > + assert(ret < 0);
> > + else
> > + assert(!ret);
> > +
> > + /* pages after the first page are not sealed. */
> > + ret = sys_mprotect(ptr + page_size, page_size * 3,
> > + PROT_READ | PROT_WRITE);
> > + assert(!ret);
> > +}
> > +
> > +static void test_seal_end_mprotect(bool seal)
> > +{
> > + LOG_TEST_ENTER();
> > + void *ptr;
> > + unsigned long page_size = getpagesize();
> > + unsigned long size = 4 * page_size;
> > + int ret;
> > +
> > + setup_single_address(size, &ptr);
> > +
> > + if (seal)
> > + seal_single_address(ptr + page_size, 3 * page_size);
> > +
> > + /* first page is not sealed */
> > + ret = sys_mprotect(ptr, page_size, PROT_READ | PROT_WRITE);
> > + assert(!ret);
> > +
> > + /* the last 3 pages are sealed */
> > + ret = sys_mprotect(ptr + page_size, page_size * 3,
> > + PROT_READ | PROT_WRITE);
> > + if (seal)
> > + assert(ret < 0);
> > + else
> > + assert(!ret);
> > +}
> > +
> > +static void test_seal_mprotect_unalign_len(bool seal)
> > +{
> > + LOG_TEST_ENTER();
> > + void *ptr;
> > + unsigned long page_size = getpagesize();
> > + unsigned long size = 4 * page_size;
> > + int ret;
> > +
> > + setup_single_address(size, &ptr);
> > +
> > + if (seal)
> > + seal_single_address(ptr, page_size * 2 - 1);
> > +
> > + /* 2 pages are sealed. */
> > + ret = sys_mprotect(ptr, page_size * 2, PROT_READ | PROT_WRITE);
> > + if (seal)
> > + assert(ret < 0);
> > + else
> > + assert(!ret);
> > +
> > + ret = sys_mprotect(ptr + page_size * 2, page_size,
> > + PROT_READ | PROT_WRITE);
> > + assert(!ret);
> > +}
> > +
> > +static void test_seal_mprotect_unalign_len_variant_2(bool seal)
> > +{
> > + LOG_TEST_ENTER();
> > + void *ptr;
> > + unsigned long page_size = getpagesize();
> > + unsigned long size = 4 * page_size;
> > + int ret;
> > +
> > + setup_single_address(size, &ptr);
> > + if (seal)
> > + seal_single_address(ptr, page_size * 2 + 1);
> > +
> > + /* 3 pages are sealed. */
> > + ret = sys_mprotect(ptr, page_size * 3, PROT_READ | PROT_WRITE);
> > + if (seal)
> > + assert(ret < 0);
> > + else
> > + assert(!ret);
> > +
> > + ret = sys_mprotect(ptr + page_size * 3, page_size,
> > + PROT_READ | PROT_WRITE);
> > + assert(!ret);
> > +}
> > +
> > +static void test_seal_mprotect_two_vma(bool seal)
> > +{
> > + LOG_TEST_ENTER();
> > + void *ptr;
> > + unsigned long page_size = getpagesize();
> > + unsigned long size = 4 * page_size;
> > + int ret;
> > +
> > + setup_single_address(size, &ptr);
> > +
> > + /* use mprotect to split */
> > + ret = sys_mprotect(ptr, page_size * 2, PROT_READ | PROT_WRITE);
> > + assert(!ret);
> > +
> > + if (seal)
> > + seal_single_address(ptr, page_size * 4);
> > +
> > + ret = sys_mprotect(ptr, page_size * 2, PROT_READ | PROT_WRITE);
> > + if (seal)
> > + assert(ret < 0);
> > + else
> > + assert(!ret);
> > +
> > + ret = sys_mprotect(ptr + page_size * 2, page_size * 2,
> > + PROT_READ | PROT_WRITE);
> > + if (seal)
> > + assert(ret < 0);
> > + else
> > + assert(!ret);
> > +}
> > +
> > +static void test_seal_mprotect_two_vma_with_split(bool seal)
> > +{
> > + LOG_TEST_ENTER();
> > + void *ptr;
> > + unsigned long page_size = getpagesize();
> > + unsigned long size = 4 * page_size;
> > + int ret;
> > +
> > + setup_single_address(size, &ptr);
> > +
> > + /* use mprotect to split into two VMAs. */
> > + ret = sys_mprotect(ptr, page_size * 2, PROT_READ | PROT_WRITE);
> > + assert(!ret);
> > +
> > + /* mseal can apply across 2 VMAs, and will also split them. */
> > + if (seal)
> > + seal_single_address(ptr + page_size, page_size * 2);
> > +
> > + /* the first page is not sealed. */
> > + ret = sys_mprotect(ptr, page_size, PROT_READ | PROT_WRITE);
> > + assert(!ret);
> > +
> > + /* the second page is sealed. */
> > + ret = sys_mprotect(ptr + page_size, page_size, PROT_READ | PROT_WRITE);
> > + if (seal)
> > + assert(ret < 0);
> > + else
> > + assert(!ret);
> > +
> > + /* the third page is sealed. */
> > + ret = sys_mprotect(ptr + 2 * page_size, page_size,
> > + PROT_READ | PROT_WRITE);
> > + if (seal)
> > + assert(ret < 0);
> > + else
> > + assert(!ret);
> > +
> > + /* the fourth page is not sealed. */
> > + ret = sys_mprotect(ptr + 3 * page_size, page_size,
> > + PROT_READ | PROT_WRITE);
> > + assert(!ret);
> > +}
> > +
> > +static void test_seal_mprotect_partial_mprotect(bool seal)
> > +{
> > + LOG_TEST_ENTER();
> > + void *ptr;
> > + unsigned long page_size = getpagesize();
> > + unsigned long size = 4 * page_size;
> > + int ret;
> > +
> > + setup_single_address(size, &ptr);
> > +
> > + /* seal one page. */
> > + if (seal)
> > + seal_single_address(ptr, page_size);
> > +
> > + /* mprotect on the first 2 pages will fail, since the first page is sealed. */
> > + ret = sys_mprotect(ptr, 2 * page_size, PROT_READ | PROT_WRITE);
> > + if (seal)
> > + assert(ret < 0);
> > + else
> > + assert(!ret);
> > +}
> > +
> > +static void test_seal_mprotect_two_vma_with_gap(bool seal)
> > +{
> > + LOG_TEST_ENTER();
> > + void *ptr;
> > + unsigned long page_size = getpagesize();
> > + unsigned long size = 4 * page_size;
> > + int ret;
> > +
> > + setup_single_address(size, &ptr);
> > +
> > + /* use mprotect to split. */
> > + ret = sys_mprotect(ptr, page_size, PROT_READ | PROT_WRITE);
> > + assert(!ret);
> > +
> > + /* use mprotect to split. */
> > + ret = sys_mprotect(ptr + 3 * page_size, page_size,
> > + PROT_READ | PROT_WRITE);
> > + assert(!ret);
> > +
> > + /* use munmap to free two pages in the middle */
> > + ret = sys_munmap(ptr + page_size, 2 * page_size);
> > + assert(!ret);
> > +
> > + /* mprotect will fail, because there is a gap in the range. */
> > + /* note: internally, mprotect still updated the first page. */
> > + ret = sys_mprotect(ptr, 4 * page_size, PROT_READ);
> > + assert(ret < 0);
> > +
> > + /* mseal will fail as well. */
> > + ret = sys_mseal(ptr, 4 * page_size);
> > + assert(ret < 0);
> > +
> > + /* unlike mprotect, mseal made no partial update: the first page is not sealed. */
> > + ret = sys_mprotect(ptr, page_size, PROT_READ);
> > + assert(ret == 0);
> > +
> > + /* the last page is not sealed. */
> > + ret = sys_mprotect(ptr + 3 * page_size, page_size, PROT_READ);
> > + assert(ret == 0);
> > +}
> > +
> > +static void test_seal_mprotect_split(bool seal)
> > +{
> > + LOG_TEST_ENTER();
> > + void *ptr;
> > + unsigned long page_size = getpagesize();
> > + unsigned long size = 4 * page_size;
> > + int ret;
> > +
> > + setup_single_address(size, &ptr);
> > +
> > + /* use mprotect to split. */
> > + ret = sys_mprotect(ptr, page_size, PROT_READ | PROT_WRITE);
> > + assert(!ret);
> > +
> > + /* seal all 4 pages. */
> > + if (seal) {
> > + ret = sys_mseal(ptr, 4 * page_size);
> > + assert(!ret);
> > + }
> > +
> > + /* the first 2 pages are sealed. */
> > + ret = sys_mprotect(ptr, 2 * page_size, PROT_READ);
> > + if (seal)
> > + assert(ret < 0);
> > + else
> > + assert(!ret);
> > +
> > + ret = sys_mprotect(ptr + 2 * page_size, 2 * page_size, PROT_READ);
> > + if (seal)
> > + assert(ret < 0);
> > + else
> > + assert(!ret);
> > +}
> > +
> > +static void test_seal_mprotect_merge(bool seal)
> > +{
> > + LOG_TEST_ENTER();
> > + void *ptr;
> > + unsigned long page_size = getpagesize();
> > + unsigned long size = 4 * page_size;
> > + int ret;
> > +
> > + setup_single_address(size, &ptr);
> > +
> > + /* use mprotect to split off the first page. */
> > + ret = sys_mprotect(ptr, page_size, PROT_READ | PROT_WRITE);
> > + assert(!ret);
> > +
> > + /* seal first two pages. */
> > + if (seal) {
> > + ret = sys_mseal(ptr, 2 * page_size);
> > + assert(!ret);
> > + }
> > +
> > + ret = sys_madvise(ptr, page_size, MADV_WILLNEED);
> > + assert(!ret);
> > +
> > + /* 2 pages are sealed. */
> > + ret = sys_mprotect(ptr, 2 * page_size, PROT_READ);
> > + if (seal)
> > + assert(ret < 0);
> > + else
> > + assert(!ret);
> > +
> > + /* last 2 pages are not sealed. */
> > + ret = sys_mprotect(ptr + 2 * page_size, 2 * page_size, PROT_READ);
> > + assert(ret == 0);
> > +}
> > +
> > +static void test_seal_munmap(bool seal)
> > +{
> > + LOG_TEST_ENTER();
> > + void *ptr;
> > + unsigned long page_size = getpagesize();
> > + unsigned long size = 4 * page_size;
> > + int ret;
> > +
> > + setup_single_address(size, &ptr);
> > +
> > + if (seal) {
> > + ret = sys_mseal(ptr, size);
> > + assert(!ret);
> > + }
> > +
> > + /* 4 pages are sealed. */
> > + ret = sys_munmap(ptr, size);
> > + if (seal)
> > + assert(ret < 0);
> > + else
> > + assert(!ret);
> > +}
> > +
> > +/*
> > + * allocate 4 pages,
> > + * use mprotect to split them into two VMAs
> > + * seal the whole range
> > + * munmap will fail on both
> > + */
> ksft* macros can print this information to console for better view of which
> test failed.
>
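Sure, ksft_test_result() would make it clear which check failed. Something
like this per check (a rough sketch; the test name string is illustrative):

	ret = sys_munmap(ptr, page_size * 2);
	ksft_test_result(seal ? ret < 0 : !ret, "munmap on sealed vma\n");
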
> > +static void test_seal_munmap_two_vma(bool seal)
> > +{
> > + LOG_TEST_ENTER();
> > + void *ptr;
> > + unsigned long page_size = getpagesize();
> > + unsigned long size = 4 * page_size;
> > + int ret;
> > +
> > + setup_single_address(size, &ptr);
> > +
> > + /* use mprotect to split */
> > + ret = sys_mprotect(ptr, page_size * 2, PROT_READ | PROT_WRITE);
> > + assert(!ret);
> > +
> > + if (seal) {
> > + ret = sys_mseal(ptr, size);
> > + assert(!ret);
> > + }
> > +
> > + ret = sys_munmap(ptr, page_size * 2);
> > + if (seal)
> > + assert(ret < 0);
> > + else
> > + assert(!ret);
> > +
> > + ret = sys_munmap(ptr + page_size, page_size * 2);
> > + if (seal)
> > + assert(ret < 0);
> > + else
> > + assert(!ret);
> > +}
> > +
> > +/*
> > + * allocate a VMA with 4 pages.
> > + * munmap the middle 2 pages.
> > + * seal the whole 4 pages; this will fail.
> > + * note: since mseal failed, none of the pages is sealed.
> > + * munmap the first page will be OK.
> > + * munmap the last page will be OK.
> > + */
> > +static void test_seal_munmap_vma_with_gap(bool seal)
> > +{
> > + LOG_TEST_ENTER();
> > + void *ptr;
> > + unsigned long page_size = getpagesize();
> > + unsigned long size = 4 * page_size;
> > + int ret;
> > +
> > + setup_single_address(size, &ptr);
> > +
> > + ret = sys_munmap(ptr + page_size, page_size * 2);
> > + assert(!ret);
> > +
> > + if (seal) {
> > + /* mseal will fail due to the gap in the middle. */
> > + ret = sys_mseal(ptr, size);
> > + assert(ret < 0);
> > + }
> > +
> > + ret = sys_munmap(ptr, page_size);
> > + assert(!ret);
> > +
> > + ret = sys_munmap(ptr + page_size * 2, page_size);
> > + assert(!ret);
> > +
> > + ret = sys_munmap(ptr, size);
> > + assert(!ret);
> > +}
> > +
> > +static void test_munmap_start_freed(bool seal)
> > +{
> > + LOG_TEST_ENTER();
> > + void *ptr;
> > + unsigned long page_size = getpagesize();
> > + unsigned long size = 4 * page_size;
> > + int ret;
> > +
> > + setup_single_address(size, &ptr);
> > +
> > + /* unmap the first page. */
> > + ret = sys_munmap(ptr, page_size);
> > + assert(!ret);
> > +
> > + /* seal the last 3 pages. */
> > + if (seal) {
> > + ret = sys_mseal(ptr + page_size, 3 * page_size);
> > + assert(!ret);
> > + }
> > +
> > + /* unmap from the first page. */
> > + ret = sys_munmap(ptr, size);
> > + if (seal)
> > + assert(ret < 0);
> > + else
> > + /* note: this is OK even though the first page was */
> > + /* already unmapped. */
> > + assert(!ret);
> > +}
> > +
> > +static void test_munmap_end_freed(bool seal)
> > +{
> > + LOG_TEST_ENTER();
> > + void *ptr;
> > + unsigned long page_size = getpagesize();
> > + unsigned long size = 4 * page_size;
> > + int ret;
> > +
> > + setup_single_address(size, &ptr);
> > + /* unmap last page. */
> > + ret = sys_munmap(ptr + page_size * 3, page_size);
> > + assert(!ret);
> > +
> > + /* seal the first 3 pages. */
> > + if (seal) {
> > + ret = sys_mseal(ptr, 3 * page_size);
> > + assert(!ret);
> > + }
> > +
> > + /* unmap all pages. */
> > + ret = sys_munmap(ptr, size);
> > + if (seal)
> > + assert(ret < 0);
> > + else
> > + assert(!ret);
> > +}
> > +
> > +static void test_munmap_middle_freed(bool seal)
> > +{
> > + LOG_TEST_ENTER();
> > + void *ptr;
> > + unsigned long page_size = getpagesize();
> > + unsigned long size = 4 * page_size;
> > + int ret;
> > +
> > + setup_single_address(size, &ptr);
> > + /* unmap 2 pages in the middle. */
> > + ret = sys_munmap(ptr + page_size, page_size * 2);
> > + assert(!ret);
> > +
> > + /* seal the first page. */
> > + if (seal) {
> > + ret = sys_mseal(ptr, page_size);
> > + assert(!ret);
> > + }
> > +
> > + /* munmap all 4 pages. */
> > + ret = sys_munmap(ptr, size);
> > + if (seal)
> > + assert(ret < 0);
> > + else
> > + assert(!ret);
> > +}
> > +
> > +static void test_seal_mremap_shrink(bool seal)
> > +{
> > + LOG_TEST_ENTER();
> > + void *ptr;
> > + unsigned long page_size = getpagesize();
> > + unsigned long size = 4 * page_size;
> > + int ret;
> > + void *ret2;
> > +
> > + setup_single_address(size, &ptr);
> > +
> > + if (seal) {
> > + ret = sys_mseal(ptr, size);
> > + assert(!ret);
> > + }
> > +
> > + /* shrink from 4 pages to 2 pages. */
> > + ret2 = mremap(ptr, size, 2 * page_size, 0, 0);
> > + if (seal) {
> > + assert(ret2 == MAP_FAILED);
> > + assert(errno == EACCES);
> > + } else {
> > + assert(ret2 != MAP_FAILED);
> > + }
> > +}
> > +
> > +static void test_seal_mremap_expand(bool seal)
> > +{
> > + LOG_TEST_ENTER();
> > + void *ptr;
> > + unsigned long page_size = getpagesize();
> > + unsigned long size = 4 * page_size;
> > + int ret;
> > + void *ret2;
> > +
> > + setup_single_address(size, &ptr);
> > + /* unmap the last 2 pages. */
> > + ret = sys_munmap(ptr + 2 * page_size, 2 * page_size);
> > + assert(!ret);
> > +
> > + if (seal) {
> > + ret = sys_mseal(ptr, 2 * page_size);
> > + assert(!ret);
> > + }
> > +
> > + /* expand from 2 pages to 4 pages. */
> > + ret2 = mremap(ptr, 2 * page_size, 4 * page_size, 0, 0);
> > + if (seal) {
> > + assert(ret2 == MAP_FAILED);
> > + assert(errno == EACCES);
> > + } else {
> > + assert(ret2 == ptr);
> > + }
> > +}
> > +
> > +static void test_seal_mremap_move(bool seal)
> > +{
> > + LOG_TEST_ENTER();
> > + void *ptr, *newPtr;
> > + unsigned long page_size = getpagesize();
> > + unsigned long size = page_size;
> > + int ret;
> > + void *ret2;
> > +
> > + setup_single_address(size, &ptr);
> > + setup_single_address(size, &newPtr);
> > + clean_single_address(newPtr, size);
> > +
> > + if (seal) {
> > + ret = sys_mseal(ptr, size);
> > + assert(!ret);
> > + }
> > +
> > + /* move from ptr to fixed address. */
> > + ret2 = mremap(ptr, size, size, MREMAP_MAYMOVE | MREMAP_FIXED, newPtr);
> > + if (seal) {
> > + assert(ret2 == MAP_FAILED);
> > + assert(errno == EACCES);
> > + } else {
> > + assert(ret2 != MAP_FAILED);
> > + }
> > +}
> > +
> > +static void test_seal_mmap_overwrite_prot(bool seal)
> > +{
> > + LOG_TEST_ENTER();
> > + void *ptr;
> > + unsigned long page_size = getpagesize();
> > + unsigned long size = page_size;
> > + int ret;
> > + void *ret2;
> > +
> > + setup_single_address(size, &ptr);
> > +
> > + if (seal) {
> > + ret = sys_mseal(ptr, size);
> > + assert(!ret);
> > + }
> > +
> > + /* use mmap to change protection. */
> > + ret2 = sys_mmap(ptr, size, PROT_NONE,
> > + MAP_ANONYMOUS | MAP_PRIVATE | MAP_FIXED, -1, 0);
> > + if (seal) {
> > + assert(ret2 == MAP_FAILED);
> > + assert(errno == EACCES);
> > + } else
> > + assert(ret2 == ptr);
> > +}
> > +
> > +static void test_seal_mmap_expand(bool seal)
> > +{
> > + LOG_TEST_ENTER();
> > + void *ptr;
> > + unsigned long page_size = getpagesize();
> > + unsigned long size = 12 * page_size;
> > + int ret;
> > + void *ret2;
> > +
> > + setup_single_address(size, &ptr);
> > + /* unmap the last 4 pages. */
> > + ret = sys_munmap(ptr + 8 * page_size, 4 * page_size);
> > + assert(!ret);
> > +
> > + if (seal) {
> > + ret = sys_mseal(ptr, 8 * page_size);
> > + assert(!ret);
> > + }
> > +
> > + /* use mmap to expand. */
> > + ret2 = sys_mmap(ptr, size, PROT_READ,
> > + MAP_ANONYMOUS | MAP_PRIVATE | MAP_FIXED, -1, 0);
> > + if (seal) {
> > + assert(ret2 == MAP_FAILED);
> > + assert(errno == EACCES);
> > + } else
> > + assert(ret2 == ptr);
> > +}
> > +
> > +static void test_seal_mmap_shrink(bool seal)
> > +{
> > + LOG_TEST_ENTER();
> > + void *ptr;
> > + unsigned long page_size = getpagesize();
> > + unsigned long size = 12 * page_size;
> > + int ret;
> > + void *ret2;
> > +
> > + setup_single_address(size, &ptr);
> > +
> > + if (seal) {
> > + ret = sys_mseal(ptr, size);
> > + assert(!ret);
> > + }
> > +
> > + /* use mmap to shrink. */
> > + ret2 = sys_mmap(ptr, 8 * page_size, PROT_READ,
> > + MAP_ANONYMOUS | MAP_PRIVATE | MAP_FIXED, -1, 0);
> > + if (seal) {
> > + assert(ret2 == MAP_FAILED);
> > + assert(errno == EACCES);
> > + } else
> > + assert(ret2 == ptr);
> > +}
> > +
> > +static void test_seal_mremap_shrink_fixed(bool seal)
> > +{
> > + LOG_TEST_ENTER();
> > + void *ptr;
> > + void *newAddr;
> > + unsigned long page_size = getpagesize();
> > + unsigned long size = 4 * page_size;
> > + int ret;
> > + void *ret2;
> > +
> > + setup_single_address(size, &ptr);
> > + setup_single_address(size, &newAddr);
> > +
> > + if (seal) {
> > + ret = sys_mseal(ptr, size);
> > + assert(!ret);
> > + }
> > +
> > + /* mremap to move and shrink to fixed address */
> > + ret2 = mremap(ptr, size, 2 * page_size, MREMAP_MAYMOVE | MREMAP_FIXED,
> > + newAddr);
> > + if (seal) {
> > + assert(ret2 == MAP_FAILED);
> > + assert(errno == EACCES);
> > + } else
> > + assert(ret2 == newAddr);
> > +}
> > +
> > +static void test_seal_mremap_expand_fixed(bool seal)
> > +{
> > + LOG_TEST_ENTER();
> > + void *ptr;
> > + void *newAddr;
> > + unsigned long page_size = getpagesize();
> > + unsigned long size = 4 * page_size;
> > + int ret;
> > + void *ret2;
> > +
> > + setup_single_address(page_size, &ptr);
> > + setup_single_address(size, &newAddr);
> > +
> > + if (seal) {
> > + ret = sys_mseal(newAddr, size);
> > + assert(!ret);
> > + }
> > +
> > + /* mremap to move and expand to fixed address */
> > + ret2 = mremap(ptr, page_size, size, MREMAP_MAYMOVE | MREMAP_FIXED,
> > + newAddr);
> > + if (seal) {
> > + assert(ret2 == MAP_FAILED);
> > + assert(errno == EACCES);
> > + } else
> > + assert(ret2 == newAddr);
> > +}
> > +
> > +static void test_seal_mremap_move_fixed(bool seal)
> > +{
> > + LOG_TEST_ENTER();
> > + void *ptr;
> > + void *newAddr;
> > + unsigned long page_size = getpagesize();
> > + unsigned long size = 4 * page_size;
> > + int ret;
> > + void *ret2;
> > +
> > + setup_single_address(size, &ptr);
> > + setup_single_address(size, &newAddr);
> > +
> > + if (seal) {
> > + ret = sys_mseal(newAddr, size);
> > + assert(!ret);
> > + }
> > +
> > + /* mremap to move to fixed address */
> > + ret2 = mremap(ptr, size, size, MREMAP_MAYMOVE | MREMAP_FIXED, newAddr);
> > + if (seal) {
> > + assert(ret2 == MAP_FAILED);
> > + assert(errno == EACCES);
> > + } else
> > + assert(ret2 == newAddr);
> > +}
> > +
> > +static void test_seal_mremap_move_fixed_zero(bool seal)
> > +{
> > + LOG_TEST_ENTER();
> > + void *ptr;
> > + unsigned long page_size = getpagesize();
> > + unsigned long size = 4 * page_size;
> > + int ret;
> > + void *ret2;
> > +
> > + setup_single_address(size, &ptr);
> > +
> > + if (seal) {
> > + ret = sys_mseal(ptr, size);
> > + assert(!ret);
> > + }
> > +
> > + /*
> > + * MREMAP_FIXED can move the mapping to the zero address
> > + */
> > + ret2 = mremap(ptr, size, 2 * page_size, MREMAP_MAYMOVE | MREMAP_FIXED,
> > + 0);
> > + if (seal) {
> > + assert(ret2 == MAP_FAILED);
> > + assert(errno == EACCES);
> > + } else {
> > + assert(ret2 == 0);
> > + }
> > +}
> > +
> > +static void test_seal_mremap_move_dontunmap(bool seal)
> > +{
> > + LOG_TEST_ENTER();
> > + void *ptr;
> > + unsigned long page_size = getpagesize();
> > + unsigned long size = 4 * page_size;
> > + int ret;
> > + void *ret2;
> > +
> > + setup_single_address(size, &ptr);
> > +
> > + if (seal) {
> > + ret = sys_mseal(ptr, size);
> > + assert(!ret);
> > + }
> > +
> > + /* mremap to move, and don't unmap src addr. */
> > + ret2 = mremap(ptr, size, size, MREMAP_MAYMOVE | MREMAP_DONTUNMAP, 0);
> > + if (seal) {
> > + assert(ret2 == MAP_FAILED);
> > + assert(errno == EACCES);
> > + } else {
> > + assert(ret2 != MAP_FAILED);
> > + }
> > +}
> > +
> > +static void test_seal_mremap_move_dontunmap_anyaddr(bool seal)
> > +{
> > + LOG_TEST_ENTER();
> > + void *ptr;
> > + unsigned long page_size = getpagesize();
> > + unsigned long size = 4 * page_size;
> > + int ret;
> > + void *ret2;
> > +
> > + setup_single_address(size, &ptr);
> > +
> > + if (seal) {
> > + ret = sys_mseal(ptr, size);
> > + assert(!ret);
> > + }
> > +
> > + /*
> > + * The 0xdeaddead should have no effect on the dest addr
> > + * when MREMAP_DONTUNMAP is set.
> > + */
> > + ret2 = mremap(ptr, size, size, MREMAP_MAYMOVE | MREMAP_DONTUNMAP,
> > + 0xdeaddead);
> > + if (seal) {
> > + assert(ret2 == MAP_FAILED);
> > + assert(errno == EACCES);
> > + } else {
> > + assert(ret2 != MAP_FAILED);
> > + assert((long)ret2 != 0xdeaddead);
> > + }
> > +}
> > +
> > +static void test_seal_mmap_seal(void)
> > +{
> > + LOG_TEST_ENTER();
> > + void *ptr;
> > + unsigned long page_size = getpagesize();
> > + unsigned long size = 4 * page_size;
> > + int ret;
> > +
> > + ptr = sys_mmap(NULL, size, PROT_READ | PROT_SEAL, MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
> > + assert(ptr != (void *)-1);
> > +
> > + ret = sys_munmap(ptr, size);
> > + assert(ret < 0);
> > +
> > + ret = sys_mprotect(ptr, size, PROT_READ | PROT_WRITE);
> > + assert(ret < 0);
> > +}
> > +
> > +static void test_seal_merge_and_split(void)
> > +{
> > + LOG_TEST_ENTER();
> > + void *ptr;
> > + unsigned long page_size = getpagesize();
> > + unsigned long size;
> > + int ret;
> > +
> > + /* (24 RO) */
> > + setup_single_address(24 * page_size, &ptr);
> > +
> > + /* use mprotect(PROT_NONE) to set up boundaries at both ends */
> > + /* (1 NONE) (22 RO) (1 NONE) */
> > + ret = sys_mprotect(ptr, page_size, PROT_NONE);
> > + assert(!ret);
> > + ret = sys_mprotect(ptr + 23 * page_size, page_size, PROT_NONE);
> > + assert(!ret);
> > + size = get_vma_size(ptr + page_size);
> > + assert(size == 22 * page_size);
> > +
> > + /* use mseal to split from beginning */
> > + /* (1 NONE) (1 RO_SEAL) (21 RO) (1 NONE) */
> > + ret = sys_mseal(ptr + page_size, page_size);
> > + assert(!ret);
> > + size = get_vma_size(ptr + page_size);
> > + assert(size == page_size);
> > + size = get_vma_size(ptr + 2 * page_size);
> > + assert(size == 21 * page_size);
> > +
> > + /* use mseal to split from the end. */
> > + /* (1 NONE) (1 RO_SEAL) (20 RO) (1 RO_SEAL) (1 NONE) */
> > + ret = sys_mseal(ptr + 22 * page_size, page_size);
> > + assert(!ret);
> > + size = get_vma_size(ptr + 22 * page_size);
> > + assert(size == page_size);
> > + size = get_vma_size(ptr + 2 * page_size);
> > + assert(size == 20 * page_size);
> > +
> > + /* merge with prev. */
> > + /* (1 NONE) (2 RO_SEAL) (19 RO) (1 RO_SEAL) (1 NONE) */
> > + ret = sys_mseal(ptr + 2 * page_size, page_size);
> > + assert(!ret);
> > + size = get_vma_size(ptr + page_size);
> > + assert(size == 2 * page_size);
> > +
> > + /* merge with next. */
> > + /* (1 NONE) (2 RO_SEAL) (18 RO) (2 RO_SEAL) (1 NONE) */
> > + ret = sys_mseal(ptr + 21 * page_size, page_size);
> > + assert(!ret);
> > + size = get_vma_size(ptr + 21 * page_size);
> > + assert(size == 2 * page_size);
> > +
> > + /* split and merge from prev */
> > + /* (1 NONE) (3 RO_SEAL) (17 RO) (2 RO_SEAL) (1 NONE) */
> > + ret = sys_mseal(ptr + 2 * page_size, 2 * page_size);
> > + assert(!ret);
> > + size = get_vma_size(ptr + 1 * page_size);
> > + assert(size == 3 * page_size);
> > + ret = sys_munmap(ptr + page_size, page_size);
> > + assert(ret < 0);
> > + ret = sys_mprotect(ptr + 2 * page_size, page_size, PROT_NONE);
> > + assert(ret < 0);
> > +
> > + /* split and merge from next */
> > + /* (1 NONE) (3 RO_SEAL) (16 RO) (3 RO_SEAL) (1 NONE) */
> > + ret = sys_mseal(ptr + 20 * page_size, 2 * page_size);
> > + assert(!ret);
> > + size = get_vma_size(ptr + 20 * page_size);
> > + assert(size == 3 * page_size);
> > +
> > + /* merge from middle of prev and middle of next. */
> > + /* (1 NONE) (22 RO_SEAL) (1 NONE) */
> > + ret = sys_mseal(ptr + 2 * page_size, 20 * page_size);
> > + assert(!ret);
> > + size = get_vma_size(ptr + page_size);
> > + assert(size == 22 * page_size);
> > +}
> > +
> > +static void test_seal_mmap_merge(void)
> > +{
> > + LOG_TEST_ENTER();
> > +
> > + void *ptr, *ptr2;
> > + unsigned long page_size = getpagesize();
> > + unsigned long size;
> > + int ret;
> > +
> > + /* (24 RO) */
> > + setup_single_address(24 * page_size, &ptr);
> > +
> > + /* use mprotect(PROT_NONE) to set up boundaries at both ends */
> > + /* (1 NONE) (22 RO) (1 NONE) */
> > + ret = sys_mprotect(ptr, page_size, PROT_NONE);
> > + assert(!ret);
> > + ret = sys_mprotect(ptr + 23 * page_size, page_size, PROT_NONE);
> > + assert(!ret);
> > + size = get_vma_size(ptr + page_size);
> > + assert(size == 22 * page_size);
> > +
> > + /* use munmap to free 2 segments of memory. */
> > + /* (1 NONE) (1 free) (20 RO) (1 free) (1 NONE) */
> > + ret = sys_munmap(ptr + page_size, page_size);
> > + assert(!ret);
> > +
> > + ret = sys_munmap(ptr + 22 * page_size, page_size);
> > + assert(!ret);
> > +
> > + /* apply seal to the middle */
> > + /* (1 NONE) (1 free) (20 RO_SEAL) (1 free) (1 NONE) */
> > + ret = sys_mseal(ptr + 2 * page_size, 20 * page_size);
> > + assert(!ret);
> > + size = get_vma_size(ptr + 2 * page_size);
> > + assert(size == 20 * page_size);
> > +
> > + /* allocate a mapping at the beginning, and make sure it merges. */
> > + /* (1 NONE) (21 RO_SEAL) (1 free) (1 NONE) */
> > + ptr2 = sys_mmap(ptr + page_size, page_size, PROT_READ | PROT_SEAL,
> > + MAP_FIXED | MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
> > + assert(ptr2 != (void *)-1);
> > + size = get_vma_size(ptr + page_size);
> > + assert(size == 21 * page_size);
> > +
> > + /* allocate a mapping at the end, and make sure it merges. */
> > + /* (1 NONE) (22 RO_SEAL) (1 NONE) */
> > + ptr2 = sys_mmap(ptr + 22 * page_size, page_size, PROT_READ | PROT_SEAL,
> > + MAP_FIXED | MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
> > + assert(ptr2 != (void *)-1);
> > + size = get_vma_size(ptr + page_size);
> > + assert(size == 22 * page_size);
> > +}
> > +
> > +static void test_not_sealable(void)
> > +{
> > + LOG_TEST_ENTER();
> > + int ret;
> > + void *ptr;
> > + unsigned long page_size = getpagesize();
> > + unsigned long size = 4 * page_size;
> > +
> > + ptr = sys_mmap(NULL, size, PROT_READ, MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
> > + assert(ptr != (void *)-1);
> > +
> > + ret = sys_mseal(ptr, size);
> > + assert(ret < 0);
> > +}
> > +
> > +static void test_mmap_fixed_change_to_sealable(void)
> > +{
> > + LOG_TEST_ENTER();
> > + int ret;
> > + void *ptr, *ptr2;
> > + unsigned long page_size = getpagesize();
> > + unsigned long size = 4 * page_size;
> > +
> > + ptr = sys_mmap(NULL, size, PROT_READ, MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
> > + assert(ptr != (void *)-1);
> > +
> > + ret = sys_mseal(ptr, size);
> > + assert(ret < 0);
> > +
> > + ptr2 = sys_mmap(ptr, size, PROT_READ,
> > + MAP_FIXED | MAP_ANONYMOUS | MAP_PRIVATE | MAP_SEALABLE, -1, 0);
> > + assert(ptr2 == ptr);
> > +
> > + ret = sys_mseal(ptr, size);
> > + assert(!ret);
> > +}
> > +
> > +static void test_mmap_fixed_change_to_not_sealable(void)
> > +{
> > + LOG_TEST_ENTER();
> > + int ret;
> > + void *ptr, *ptr2;
> > + unsigned long page_size = getpagesize();
> > + unsigned long size = 4 * page_size;
> > +
> > + ptr = sys_mmap(NULL, size, PROT_READ, MAP_ANONYMOUS | MAP_PRIVATE | MAP_SEALABLE, -1, 0);
> > + assert(ptr != (void *)-1);
> > +
> > + ptr2 = sys_mmap(ptr, size, PROT_READ,
> > + MAP_FIXED | MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
> > + assert(ptr2 == ptr);
> > +
> > + ret = sys_mseal(ptr, size);
> > + assert(ret < 0);
> > +}
> > +
> > +static void test_merge_sealable(void)
> > +{
> > + LOG_TEST_ENTER();
> > + int ret;
> > + void *ptr, *ptr2;
> > + unsigned long page_size = getpagesize();
> > + unsigned long size;
> > +
> > + /* (24 RO) */
> > + setup_single_address(24 * page_size, &ptr);
> > +
> > + /* use mprotect(PROT_NONE) to set up boundaries at both ends */
> > + /* (1 NONE) (22 RO) (1 NONE) */
> > + ret = sys_mprotect(ptr, page_size, PROT_NONE);
> > + assert(!ret);
> > + ret = sys_mprotect(ptr + 23 * page_size, page_size, PROT_NONE);
> > + assert(!ret);
> > + size = get_vma_size(ptr + page_size);
> > + assert(size == 22 * page_size);
> > +
> > + /* (1 NONE) (RO) (4 free) (17 RO) (1 NONE) */
> > + ret = sys_munmap(ptr + 2 * page_size, 4 * page_size);
> > + assert(!ret);
> > + size = get_vma_size(ptr + page_size);
> > + assert(size == 1 * page_size);
> > + size = get_vma_size(ptr + 6 * page_size);
> > + assert(size == 17 * page_size);
> > +
> > + /* (1 NONE) (RO) (1 free) (2 RO) (1 free) (17 RO) (1 NONE) */
> > + ptr2 = sys_mmap(ptr + 3 * page_size, 2 * page_size, PROT_READ,
> > + MAP_FIXED | MAP_ANONYMOUS | MAP_PRIVATE | MAP_SEALABLE, -1, 0);
> > + assert(ptr2 != (void *)-1);
> > + size = get_vma_size(ptr + 3 * page_size);
> > + assert(size == 2 * page_size);
> > +
> > + /* (1 NONE) (RO) (1 free) (20 RO) (1 NONE) */
> > + ptr2 = sys_mmap(ptr + 5 * page_size, 1 * page_size, PROT_READ,
> > + MAP_FIXED | MAP_ANONYMOUS | MAP_PRIVATE | MAP_SEALABLE, -1, 0);
> > + assert(ptr2 != (void *)-1);
> > + size = get_vma_size(ptr + 3 * page_size);
> > + assert(size == 20 * page_size);
> > +
> > + /* (1 NONE) (RO) (1 free) (19 RO) (1 RO_SB) (1 NONE) */
> > + ret = sys_mseal(ptr + 22 * page_size, page_size);
> > + assert(!ret);
> > +
> > + /* (1 NONE) (RO) (not sealable) (19 RO) (1 RO_SB) (1 NONE) */
> > + ptr2 = sys_mmap(ptr + 2 * page_size, page_size, PROT_READ,
> > + MAP_FIXED | MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
> > + assert(ptr2 != (void *)-1);
> > + size = get_vma_size(ptr + page_size);
> > + assert(size == page_size);
> > + size = get_vma_size(ptr + 2 * page_size);
> > + assert(size == page_size);
> > +}
> > +
> > +static void test_seal_discard_ro_anon_on_rw(bool seal)
> > +{
> > + LOG_TEST_ENTER();
> > + void *ptr;
> > + unsigned long page_size = getpagesize();
> > + unsigned long size = 4 * page_size;
> > + int ret;
> > +
> > + setup_single_address_rw_sealable(size, &ptr, seal);
> > + assert(ptr != (void *)-1);
> > +
> > + if (seal) {
> > + ret = sys_mseal(ptr, size);
> > + assert(!ret);
> > + }
> > +
> > + /* sealing doesn't block MADV_DONTNEED on RW memory. */
> > + ret = sys_madvise(ptr, size, MADV_DONTNEED);
> > + assert(!ret);
> > +
> > + /* the base seal still applies. */
> > + ret = sys_munmap(ptr, size);
> > + if (seal)
> > + assert(ret < 0);
> > + else
> > + assert(!ret);
> > +}
> > +
> > +static void test_seal_discard_ro_anon_on_pkey(bool seal)
> > +{
> > + LOG_TEST_ENTER();
> > + void *ptr;
> > + unsigned long page_size = getpagesize();
> > + unsigned long size = 4 * page_size;
> > + int ret;
> > + int pkey;
> > +
> > + setup_single_address_rw_sealable(size, &ptr, seal);
> > + assert(ptr != (void *)-1);
> > +
> > + pkey = sys_pkey_alloc(0, 0);
> > + assert(pkey > 0);
> > +
> > + ret = sys_mprotect_pkey((void *)ptr, size, PROT_READ | PROT_WRITE, pkey);
> > + assert(!ret);
> > +
> > + if (seal) {
> > + ret = sys_mseal(ptr, size);
> > + assert(!ret);
> > + }
> > +
> > + /* sealing doesn't take effect if PKRU allows write. */
> > + set_pkey(pkey, 0);
> > + ret = sys_madvise(ptr, size, MADV_DONTNEED);
> > + assert(!ret);
> > +
> > + /* sealing will take effect if PKRU denies write. */
> > + set_pkey(pkey, PKEY_DISABLE_WRITE);
> > + ret = sys_madvise(ptr, size, MADV_DONTNEED);
> > + if (seal)
> > + assert(ret < 0);
> > + else
> > + assert(!ret);
> > +
> > + /* the base seal still applies. */
> > + ret = sys_munmap(ptr, size);
> > + if (seal)
> > + assert(ret < 0);
> > + else
> > + assert(!ret);
> > +}
> > +
> > +static void test_seal_discard_ro_anon_on_filebacked(bool seal)
> > +{
> > + LOG_TEST_ENTER();
> > + void *ptr;
> > + unsigned long page_size = getpagesize();
> > + unsigned long size = 4 * page_size;
> > + int ret;
> > + int fd;
> > + unsigned long mapflags = MAP_PRIVATE;
> > +
> > + if (seal)
> > + mapflags |= MAP_SEALABLE;
> > +
> > + fd = memfd_create("test", 0);
> > + assert(fd > 0);
> > +
> > + ret = fallocate(fd, 0, 0, size);
> > + assert(!ret);
> > +
> > + ptr = sys_mmap(NULL, size, PROT_READ, mapflags, fd, 0);
> > + assert(ptr != MAP_FAILED);
> > +
> > + if (seal) {
> > + ret = sys_mseal(ptr, size);
> > + assert(!ret);
> > + }
> > +
> > + /* sealing doesn't block MADV_DONTNEED on file-backed mappings. */
> > + ret = sys_madvise(ptr, size, MADV_DONTNEED);
> > + assert(!ret);
> > +
> > + ret = sys_munmap(ptr, size);
> > + if (seal)
> > + assert(ret < 0);
> > + else
> > + assert(!ret);
> > + close(fd);
> > +}
> > +
> > +static void test_seal_discard_ro_anon_on_shared(bool seal)
> > +{
> > + LOG_TEST_ENTER();
> > + void *ptr;
> > + unsigned long page_size = getpagesize();
> > + unsigned long size = 4 * page_size;
> > + int ret;
> > + unsigned long mapflags = MAP_ANONYMOUS | MAP_SHARED;
> > +
> > + if (seal)
> > + mapflags |= MAP_SEALABLE;
> > +
> > + ptr = sys_mmap(NULL, size, PROT_READ, mapflags, -1, 0);
> > + assert(ptr != (void *)-1);
> > +
> > + if (seal) {
> > + ret = sys_mseal(ptr, size);
> > + assert(!ret);
> > + }
> > +
> > + /* sealing doesn't block MADV_DONTNEED on shared mappings. */
> > + ret = sys_madvise(ptr, size, MADV_DONTNEED);
> > + assert(!ret);
> > +
> > + ret = sys_munmap(ptr, size);
> > + if (seal)
> > + assert(ret < 0);
> > + else
> > + assert(!ret);
> > +}
> > +
> > +static void test_seal_discard_ro_anon_invalid_shared(bool seal)
> > +{
> > + LOG_TEST_ENTER();
> > + void *ptr;
> > + unsigned long page_size = getpagesize();
> > + unsigned long size = 4 * page_size;
> > + int ret;
> > + int fd;
> > +
> > + fd = open("/proc/self/maps", O_RDONLY);
> > + ptr = sys_mmap(NULL, size, PROT_READ, MAP_ANONYMOUS | MAP_PRIVATE, fd, 0);
> > + assert(ptr != (void *)-1);
> > +
> > + if (seal) {
> > + ret = sys_mseal(ptr, size);
> > + assert(!ret);
> > + }
> > +
> > + ret = sys_madvise(ptr, size, MADV_DONTNEED);
> > + assert(!ret);
> > +
> > + ret = sys_munmap(ptr, size);
> > + if (seal)
> > + assert(ret < 0);
> > + else
> > + assert(!ret);
> > + close(fd);
> > +}
> > +
> > +static void test_seal_discard_ro_anon(bool seal)
> > +{
> > + LOG_TEST_ENTER();
> > + void *ptr;
> > + unsigned long page_size = getpagesize();
> > + unsigned long size = 4 * page_size;
> > + int ret;
> > +
> > + setup_single_address(size, &ptr);
> > +
> > + if (seal)
> > + seal_single_address(ptr, size);
> > +
> > + ret = sys_madvise(ptr, size, MADV_DONTNEED);
> > + if (seal)
> > + assert(ret < 0);
> > + else
> > + assert(!ret);
> > +
> > + ret = sys_munmap(ptr, size);
> > + if (seal)
> > + assert(ret < 0);
> > + else
> > + assert(!ret);
> > +}
> > +
> > +static void test_mmap_seal_discard_ro_anon(void)
> > +{
> > + LOG_TEST_ENTER();
> > + void *ptr;
> > + unsigned long page_size = getpagesize();
> > + unsigned long size = 4 * page_size;
> > + int ret;
> > +
> > + ptr = sys_mmap(NULL, size, PROT_READ | PROT_WRITE | PROT_SEAL,
> > + MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
> > + assert(ptr != (void *)-1);
> > +
> > + ret = sys_mprotect(ptr, size, PROT_READ);
> > + assert(!ret);
> > +
> > + ret = sys_madvise(ptr, size, MADV_DONTNEED);
> > + assert(ret < 0);
> > +
> > + ret = sys_munmap(ptr, size);
> > + assert(ret < 0);
> > +}
> > +
> > +bool seal_support(void)
> > +{
> > + int ret;
> > + void *ptr;
> > + unsigned long page_size = getpagesize();
> > +
> > + ptr = sys_mmap(NULL, page_size, PROT_READ | PROT_SEAL, MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
> > + if (ptr == (void *) -1)
> > + return false;
> > +
> > + ret = sys_mseal(ptr, page_size);
> > + if (ret < 0)
> > + return false;
> > +
> > + return true;
> > +}
> > +
> > +bool pkey_supported(void)
> > +{
> > + int pkey = sys_pkey_alloc(0, 0);
> > +
> > + return pkey > 0;
> > +}
> > +
> > +int main(int argc, char **argv)
> > +{
>
> ksft_print_header();
> if (test_seal())
> return ksft_exit_pass();
> ksft_set_plan(10);
>
> > + bool test_seal = seal_support();
> > +
> > + if (!test_seal) {
> > + ksft_print_msg("sealing not supported (check CONFIG_64BIT)\n");
> > + return 0;
> > + }
> > +
> > + test_seal_addseal();
> > +
> > + test_seal_unmapped_start();
> > + test_seal_unmapped_middle();
> > + test_seal_unmapped_end();
> > + test_seal_multiple_vmas();
> > + test_seal_split_start();
> > + test_seal_split_end();
> > + test_seal_invalid_input();
> > + test_seal_zero_length();
> > + test_seal_twice();
> > +
> > + test_seal_mprotect(false);
> > + test_seal_mprotect(true);
> > +
> > + test_seal_start_mprotect(false);
> > + test_seal_start_mprotect(true);
> > +
> > + test_seal_end_mprotect(false);
> > + test_seal_end_mprotect(true);
> > +
> > + test_seal_mprotect_unalign_len(false);
> > + test_seal_mprotect_unalign_len(true);
> > +
> > + test_seal_mprotect_unalign_len_variant_2(false);
> > + test_seal_mprotect_unalign_len_variant_2(true);
> > +
> > + test_seal_mprotect_two_vma(false);
> > + test_seal_mprotect_two_vma(true);
> > +
> > + test_seal_mprotect_two_vma_with_split(false);
> > + test_seal_mprotect_two_vma_with_split(true);
> > +
> > + test_seal_mprotect_partial_mprotect(false);
> > + test_seal_mprotect_partial_mprotect(true);
> > +
> > + test_seal_mprotect_two_vma_with_gap(false);
> > + test_seal_mprotect_two_vma_with_gap(true);
> > +
> > + test_seal_mprotect_merge(false);
> > + test_seal_mprotect_merge(true);
> > +
> > + test_seal_mprotect_split(false);
> > + test_seal_mprotect_split(true);
> > +
> > + test_seal_munmap(false);
> > + test_seal_munmap(true);
> > + test_seal_munmap_two_vma(false);
> > + test_seal_munmap_two_vma(true);
> > + test_seal_munmap_vma_with_gap(false);
> > + test_seal_munmap_vma_with_gap(true);
> > +
> > + test_munmap_start_freed(false);
> > + test_munmap_start_freed(true);
> > + test_munmap_middle_freed(false);
> > + test_munmap_middle_freed(true);
> > + test_munmap_end_freed(false);
> > + test_munmap_end_freed(true);
> > +
> > + test_seal_mremap_shrink(false);
> > + test_seal_mremap_shrink(true);
> > + test_seal_mremap_expand(false);
> > + test_seal_mremap_expand(true);
> > + test_seal_mremap_move(false);
> > + test_seal_mremap_move(true);
> > +
> > + test_seal_mremap_shrink_fixed(false);
> > + test_seal_mremap_shrink_fixed(true);
> > + test_seal_mremap_expand_fixed(false);
> > + test_seal_mremap_expand_fixed(true);
> > + test_seal_mremap_move_fixed(false);
> > + test_seal_mremap_move_fixed(true);
> > + test_seal_mremap_move_dontunmap(false);
> > + test_seal_mremap_move_dontunmap(true);
> > + test_seal_mremap_move_fixed_zero(false);
> > + test_seal_mremap_move_fixed_zero(true);
> > + test_seal_mremap_move_dontunmap_anyaddr(false);
> > + test_seal_mremap_move_dontunmap_anyaddr(true);
> > + test_seal_discard_ro_anon(false);
> > + test_seal_discard_ro_anon(true);
> > + test_seal_discard_ro_anon_on_rw(false);
> > + test_seal_discard_ro_anon_on_rw(true);
> > + test_seal_discard_ro_anon_on_shared(false);
> > + test_seal_discard_ro_anon_on_shared(true);
> > + test_seal_discard_ro_anon_on_filebacked(false);
> > + test_seal_discard_ro_anon_on_filebacked(true);
> > + test_seal_mmap_overwrite_prot(false);
> > + test_seal_mmap_overwrite_prot(true);
> > + test_seal_mmap_expand(false);
> > + test_seal_mmap_expand(true);
> > + test_seal_mmap_shrink(false);
> > + test_seal_mmap_shrink(true);
> > +
> > + test_seal_mmap_seal();
> > + test_seal_merge_and_split();
> > + test_seal_mmap_merge();
> > +
> > + test_not_sealable();
> > + test_merge_sealable();
> > + test_mmap_fixed_change_to_sealable();
> > + test_mmap_fixed_change_to_not_sealable();
> > +
> > + if (pkey_supported()) {
> > + test_seal_discard_ro_anon_on_pkey(false);
> > + test_seal_discard_ro_anon_on_pkey(true);
> > + } else
> > + ksft_print_msg("PKEY not supported, skip pkey related test\n");
> > +
> > + ksft_print_msg("Done\n");
> > + return 0;
> Rather ksft_finished();
>
OK.
I will update the selftest to use more of the ksft macros.
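For example, main() could flow like this (a rough sketch using the
kselftest.h helpers; the plan count below is just a placeholder):

	ksft_print_header();

	if (!seal_support())
		return ksft_exit_skip("sealing not supported (check CONFIG_64BIT)\n");

	ksft_set_plan(10);

	/* each test would report via ksft_test_result() instead of assert() */
	test_seal_addseal();
	...

	ksft_finished();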
-Jeff
> > +}
>
> --
> BR,
> Muhammad Usama Anjum
[email protected] writes:
> From: Jeff Xu <[email protected]>
>
> Add documentation for mseal().
>
> Signed-off-by: Jeff Xu <[email protected]>
> ---
> Documentation/userspace-api/mseal.rst | 181 ++++++++++++++++++++++++++
> 1 file changed, 181 insertions(+)
> create mode 100644 Documentation/userspace-api/mseal.rst
You need to add this file to index.rst or it won't be part of the docs
build. Sphinx should have warned you about that when you did your test
build.
Thanks,
jon
On 1/16/24 12:13, Jonathan Corbet wrote:
> [email protected] writes:
>
>> From: Jeff Xu <[email protected]>
>>
>> Add documentation for mseal().
>>
>> Signed-off-by: Jeff Xu <[email protected]>
>> ---
>> Documentation/userspace-api/mseal.rst | 181 ++++++++++++++++++++++++++
>> 1 file changed, 181 insertions(+)
>> create mode 100644 Documentation/userspace-api/mseal.rst
>
> You need to add this file to index.rst or it won't be part of the docs
> build. Sphinx should have warned you about that when you did your test
> build.
Yes, I have already asked Jeff to add this to his patch:
diff -- a/Documentation/userspace-api/index.rst b/Documentation/userspace-api/index.rst
--- a/Documentation/userspace-api/index.rst
+++ b/Documentation/userspace-api/index.rst
@@ -23,6 +23,7 @@ place where this information is gathered
ebpf/index
ELF
ioctl/index
+ mseal
iommu
iommufd
media/index
--
#Randy