This patchset contains some abstractions needed by the Rust
implementation of the Binder driver for passing data between userspace
and kernelspace, and directly into other processes.
These abstractions do not exactly match what was included in the Rust
Binder RFC - I have made various improvements and simplifications since
then. Nonetheless, please see the Rust Binder RFC [1] to get an
understanding of how this will be used:
Users of "rust: add userspace pointers"
and "rust: add typed accessors for userspace pointers":
rust_binder: add binderfs support to Rust binder
rust_binder: add threading support
rust_binder: add nodes and context managers
rust_binder: add oneway transactions
rust_binder: add death notifications
rust_binder: send nodes in transactions
rust_binder: add BINDER_TYPE_PTR support
rust_binder: add BINDER_TYPE_FDA support
rust_binder: add process freezing
Users of "rust: add abstraction for `struct page`":
rust_binder: add oneway transactions
rust_binder: add vma shrinker
Links: https://lore.kernel.org/rust-for-linux/[email protected]/ [1]
Signed-off-by: Alice Ryhl <[email protected]>
---
Changes in v5:
- Fix casts in declarations of PAGE_* constants.
- Fix formatting of PAGE_MASK.
- Reformat comments at 100 line length.
- Minor fixes to safety comments of `read_raw` and `write_slice`.
- Link to v4: https://lore.kernel.org/rust-for-linux/[email protected]/
Changes in v4:
- Rephrase when we fail with EFAULT.
- Remove `pub` from examples.
- Use slices for raw uaccess methods.
- Fix PAGE_MASK constant.
- Rephrase most safety comments in Page abstraction.
- Make with_pointer_into_page and with_page_mapped private.
- Explain how raw pointers into pages are used correctly.
- Other minor doc improvements.
- Link to v3: https://lore.kernel.org/rust-for-linux/[email protected]/
Changes in v3:
- Fix bug in read_all.
- Add missing `#include <linux/nospec.h>`.
- Mention that the second patch passes CONFIG_TEST_USER_COPY.
- Add gfp flags for Page.
- Minor documentation adjustments.
- Link to v2: https://lore.kernel.org/rust-for-linux/[email protected]/
Changes in v2:
- Rename user_ptr module to uaccess.
- Use srctree-relative links.
- Improve documentation.
- Rename UserSlicePtr to UserSlice.
- Make read_to_end append to the buffer.
- Use named fields for uaccess types.
- Add examples.
- Use _copy_from/to_user to skip check_object_size.
- Rename traits and move to kernel::types.
- Remove PAGE_MASK constant.
- Rename page methods to say _raw.
- Link to v1: https://lore.kernel.org/rust-for-linux/[email protected]/
---
Alice Ryhl (2):
rust: uaccess: add typed accessors for userspace pointers
rust: add abstraction for `struct page`
Arnd Bergmann (1):
uaccess: always export _copy_[from|to]_user with CONFIG_RUST
Wedson Almeida Filho (1):
rust: uaccess: add userspace pointers
include/linux/uaccess.h | 38 ++--
lib/usercopy.c | 30 +---
rust/bindings/bindings_helper.h | 2 +
rust/helpers.c | 34 ++++
rust/kernel/lib.rs | 2 +
rust/kernel/page.rs | 240 ++++++++++++++++++++++++++
rust/kernel/types.rs | 63 +++++++
rust/kernel/uaccess.rs | 371 ++++++++++++++++++++++++++++++++++++++++
8 files changed, 740 insertions(+), 40 deletions(-)
---
base-commit: 4cece764965020c22cff7665b18a012006359095
change-id: 20231128-alice-mm-bc533456cee8
Best regards,
--
Alice Ryhl <[email protected]>
From: Arnd Bergmann <[email protected]>
Rust code needs to be able to access _copy_from_user and _copy_to_user
so that it can skip the check_copy_size check in cases where the length
is known at compile-time, mirroring the logic for when C code will skip
check_copy_size. To do this, we ensure that exported versions of these
methods are available when CONFIG_RUST is enabled.
Alice has verified that this patch passes the CONFIG_TEST_USER_COPY test
on x86 using the Android cuttlefish emulator.
Signed-off-by: Arnd Bergmann <[email protected]>
Tested-by: Alice Ryhl <[email protected]>
Reviewed-by: Boqun Feng <[email protected]>
Signed-off-by: Alice Ryhl <[email protected]>
---
include/linux/uaccess.h | 38 ++++++++++++++++++++++++--------------
lib/usercopy.c | 30 ++++--------------------------
2 files changed, 28 insertions(+), 40 deletions(-)
diff --git a/include/linux/uaccess.h b/include/linux/uaccess.h
index 3064314f4832..2ebfce98b5cc 100644
--- a/include/linux/uaccess.h
+++ b/include/linux/uaccess.h
@@ -5,6 +5,7 @@
#include <linux/fault-inject-usercopy.h>
#include <linux/instrumented.h>
#include <linux/minmax.h>
+#include <linux/nospec.h>
#include <linux/sched.h>
#include <linux/thread_info.h>
@@ -138,13 +139,18 @@ __copy_to_user(void __user *to, const void *from, unsigned long n)
return raw_copy_to_user(to, from, n);
}
-#ifdef INLINE_COPY_FROM_USER
static inline __must_check unsigned long
-_copy_from_user(void *to, const void __user *from, unsigned long n)
+_inline_copy_from_user(void *to, const void __user *from, unsigned long n)
{
unsigned long res = n;
might_fault();
if (!should_fail_usercopy() && likely(access_ok(from, n))) {
+ /*
+ * Ensure that bad access_ok() speculation will not
+ * lead to nasty side effects *after* the copy is
+ * finished:
+ */
+ barrier_nospec();
instrument_copy_from_user_before(to, from, n);
res = raw_copy_from_user(to, from, n);
instrument_copy_from_user_after(to, from, n, res);
@@ -153,14 +159,11 @@ _copy_from_user(void *to, const void __user *from, unsigned long n)
memset(to + (n - res), 0, res);
return res;
}
-#else
extern __must_check unsigned long
_copy_from_user(void *, const void __user *, unsigned long);
-#endif
-#ifdef INLINE_COPY_TO_USER
static inline __must_check unsigned long
-_copy_to_user(void __user *to, const void *from, unsigned long n)
+_inline_copy_to_user(void __user *to, const void *from, unsigned long n)
{
might_fault();
if (should_fail_usercopy())
@@ -171,25 +174,32 @@ _copy_to_user(void __user *to, const void *from, unsigned long n)
}
return n;
}
-#else
extern __must_check unsigned long
_copy_to_user(void __user *, const void *, unsigned long);
-#endif
static __always_inline unsigned long __must_check
copy_from_user(void *to, const void __user *from, unsigned long n)
{
- if (check_copy_size(to, n, false))
- n = _copy_from_user(to, from, n);
- return n;
+ if (!check_copy_size(to, n, false))
+ return n;
+#ifdef INLINE_COPY_FROM_USER
+ return _inline_copy_from_user(to, from, n);
+#else
+ return _copy_from_user(to, from, n);
+#endif
}
static __always_inline unsigned long __must_check
copy_to_user(void __user *to, const void *from, unsigned long n)
{
- if (check_copy_size(from, n, true))
- n = _copy_to_user(to, from, n);
- return n;
+ if (!check_copy_size(from, n, true))
+ return n;
+
+#ifdef INLINE_COPY_TO_USER
+ return _inline_copy_to_user(to, from, n);
+#else
+ return _copy_to_user(to, from, n);
+#endif
}
#ifndef copy_mc_to_kernel
diff --git a/lib/usercopy.c b/lib/usercopy.c
index d29fe29c6849..de7f30618293 100644
--- a/lib/usercopy.c
+++ b/lib/usercopy.c
@@ -7,40 +7,18 @@
/* out-of-line parts */
-#ifndef INLINE_COPY_FROM_USER
+#if !defined(INLINE_COPY_FROM_USER) || defined(CONFIG_RUST)
unsigned long _copy_from_user(void *to, const void __user *from, unsigned long n)
{
- unsigned long res = n;
- might_fault();
- if (!should_fail_usercopy() && likely(access_ok(from, n))) {
- /*
- * Ensure that bad access_ok() speculation will not
- * lead to nasty side effects *after* the copy is
- * finished:
- */
- barrier_nospec();
- instrument_copy_from_user_before(to, from, n);
- res = raw_copy_from_user(to, from, n);
- instrument_copy_from_user_after(to, from, n, res);
- }
- if (unlikely(res))
- memset(to + (n - res), 0, res);
- return res;
+ return _inline_copy_from_user(to, from, n);
}
EXPORT_SYMBOL(_copy_from_user);
#endif
-#ifndef INLINE_COPY_TO_USER
+#if !defined(INLINE_COPY_TO_USER) || defined(CONFIG_RUST)
unsigned long _copy_to_user(void __user *to, const void *from, unsigned long n)
{
- might_fault();
- if (should_fail_usercopy())
- return n;
- if (likely(access_ok(to, n))) {
- instrument_copy_to_user(to, from, n);
- n = raw_copy_to_user(to, from, n);
- }
- return n;
+ return _inline_copy_to_user(to, from, n);
}
EXPORT_SYMBOL(_copy_to_user);
#endif
--
2.44.0.683.g7961c838ac-goog
Add safe methods for reading and writing Rust values to and from
userspace pointers.
The C methods for copying to/from userspace use a function called
`check_object_size` to verify that the kernel pointer is not dangling.
However, this check is skipped when the length is a compile-time
constant, with the assumption that such cases trivially have a correct
kernel pointer.
In this patch, we apply the same optimization to the typed accessors.
For both methods, the size of the operation is known at compile time to
be the size of the type being read or written. Since the C side doesn't
provide a variant that skips only this check, we create custom helpers
for this purpose.
The majority of reads and writes to userspace pointers in the Rust
Binder driver use these accessor methods. Benchmarking has found that
skipping the `check_object_size` check makes a big difference for the
cases being skipped here. (And that the check doesn't make a difference
for the cases that use the raw read/write methods.)
This code is based on something that was originally written by Wedson on
the old rust branch. It was modified by Alice to skip the
`check_object_size` check, and to update various comments, including the
notes about kernel pointers in `AsBytes`.
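As an illustration (not part of this patch), here is a minimal sketch of how a
driver could use the typed accessors. The `FooHeader` struct and `handle_write`
function are hypothetical; the `unsafe impl`s assert that the type accepts any
bit pattern and contains no uninitialized padding:

    use kernel::error::Result;
    use kernel::types::{AsBytes, FromBytes};
    use kernel::uaccess::UserSlice;

    #[repr(C)]
    struct FooHeader {
        cmd: u32,
        len: u32,
    }

    // SAFETY: `FooHeader` only contains integers, so all bit patterns are valid.
    unsafe impl FromBytes for FooHeader {}
    // SAFETY: Two `u32` fields in a `#[repr(C)]` struct leave no padding, so
    // there are no uninitialized bytes.
    unsafe impl AsBytes for FooHeader {}

    fn handle_write(user: UserSlice) -> Result {
        let (mut reader, mut writer) = user.reader_writer();
        // Typed read: copies `size_of::<FooHeader>()` bytes from userspace and
        // skips `check_object_size`, since the length is a compile-time constant.
        let mut hdr: FooHeader = reader.read()?;
        hdr.len = hdr.len.min(128);
        // Typed write: copies the adjusted header back to the start of the slice.
        writer.write(&hdr)
    }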
Co-developed-by: Wedson Almeida Filho <[email protected]>
Signed-off-by: Wedson Almeida Filho <[email protected]>
Reviewed-by: Benno Lossin <[email protected]>
Reviewed-by: Boqun Feng <[email protected]>
Signed-off-by: Alice Ryhl <[email protected]>
---
rust/kernel/types.rs | 63 ++++++++++++++++++++++++++++++++++++++++++++
rust/kernel/uaccess.rs | 71 ++++++++++++++++++++++++++++++++++++++++++++++++--
2 files changed, 132 insertions(+), 2 deletions(-)
diff --git a/rust/kernel/types.rs b/rust/kernel/types.rs
index aa77bad9bce4..414ba602fc5b 100644
--- a/rust/kernel/types.rs
+++ b/rust/kernel/types.rs
@@ -409,3 +409,66 @@ pub enum Either<L, R> {
/// Constructs an instance of [`Either`] containing a value of type `R`.
Right(R),
}
+
+/// Types for which any bit pattern is valid.
+///
+/// Not all types are valid for all values. For example, a `bool` must be either zero or one, so
+/// reading arbitrary bytes into something that contains a `bool` is not okay.
+///
+/// It's okay for the type to have padding, as initializing those bytes has no effect.
+///
+/// # Safety
+///
+/// All bit-patterns must be valid for this type.
+pub unsafe trait FromBytes {}
+
+// SAFETY: All bit patterns are acceptable values of the types below.
+unsafe impl FromBytes for u8 {}
+unsafe impl FromBytes for u16 {}
+unsafe impl FromBytes for u32 {}
+unsafe impl FromBytes for u64 {}
+unsafe impl FromBytes for usize {}
+unsafe impl FromBytes for i8 {}
+unsafe impl FromBytes for i16 {}
+unsafe impl FromBytes for i32 {}
+unsafe impl FromBytes for i64 {}
+unsafe impl FromBytes for isize {}
+// SAFETY: If all bit patterns are acceptable for individual values in an array, then all bit
+// patterns are also acceptable for arrays of that type.
+unsafe impl<T: FromBytes> FromBytes for [T] {}
+unsafe impl<T: FromBytes, const N: usize> FromBytes for [T; N] {}
+
+/// Types that can be viewed as an immutable slice of initialized bytes.
+///
+/// If a struct implements this trait, then it is okay to copy it byte-for-byte to userspace. This
+/// means that it should not have any padding, as padding bytes are uninitialized. Reading
+/// uninitialized memory is not just undefined behavior, it may even lead to leaking sensitive
+/// information on the stack to userspace.
+///
+/// The struct should also not hold kernel pointers, as kernel pointer addresses are also considered
+/// sensitive. However, leaking kernel pointers is not considered undefined behavior by Rust, so
+/// this is a correctness requirement, but not a safety requirement.
+///
+/// # Safety
+///
+/// Values of this type may not contain any uninitialized bytes.
+pub unsafe trait AsBytes {}
+
+// SAFETY: Instances of the following types have no uninitialized portions.
+unsafe impl AsBytes for u8 {}
+unsafe impl AsBytes for u16 {}
+unsafe impl AsBytes for u32 {}
+unsafe impl AsBytes for u64 {}
+unsafe impl AsBytes for usize {}
+unsafe impl AsBytes for i8 {}
+unsafe impl AsBytes for i16 {}
+unsafe impl AsBytes for i32 {}
+unsafe impl AsBytes for i64 {}
+unsafe impl AsBytes for isize {}
+unsafe impl AsBytes for bool {}
+unsafe impl AsBytes for char {}
+unsafe impl AsBytes for str {}
+// SAFETY: If individual values in an array have no uninitialized portions, then the array itself
+// does not have any uninitialized portions either.
+unsafe impl<T: AsBytes> AsBytes for [T] {}
+unsafe impl<T: AsBytes, const N: usize> AsBytes for [T; N] {}
diff --git a/rust/kernel/uaccess.rs b/rust/kernel/uaccess.rs
index c97029cdeba1..e3953eec61a3 100644
--- a/rust/kernel/uaccess.rs
+++ b/rust/kernel/uaccess.rs
@@ -4,10 +4,15 @@
//!
//! C header: [`include/linux/uaccess.h`](srctree/include/linux/uaccess.h)
-use crate::{bindings, error::code::*, error::Result};
+use crate::{
+ bindings,
+ error::code::*,
+ error::Result,
+ types::{AsBytes, FromBytes},
+};
use alloc::vec::Vec;
use core::ffi::{c_ulong, c_void};
-use core::mem::MaybeUninit;
+use core::mem::{size_of, MaybeUninit};
/// A pointer to an area in userspace memory, which can be either read-only or read-write.
///
@@ -238,6 +243,38 @@ pub fn read_slice(&mut self, out: &mut [u8]) -> Result {
self.read_raw(out)
}
+ /// Reads a value of the specified type.
+ ///
+ /// Fails with `EFAULT` if the read encounters a page fault.
+ pub fn read<T: FromBytes>(&mut self) -> Result<T> {
+ let len = size_of::<T>();
+ if len > self.length {
+ return Err(EFAULT);
+ }
+ let Ok(len_ulong) = c_ulong::try_from(len) else {
+ return Err(EFAULT);
+ };
+ let mut out: MaybeUninit<T> = MaybeUninit::uninit();
+ // SAFETY: The local variable `out` is valid for writing `size_of::<T>()` bytes.
+ //
+ // By using the _copy_from_user variant, we skip the check_object_size check that verifies
+ // the kernel pointer. This mirrors the logic on the C side that skips the check when the
+ // length is a compile-time constant.
+ let res = unsafe {
+ bindings::_copy_from_user(out.as_mut_ptr().cast::<c_void>(), self.ptr, len_ulong)
+ };
+ if res != 0 {
+ return Err(EFAULT);
+ }
+ // Since this is not a pointer to a valid object in our program, we cannot use `add`, which
+ // has C-style rules for defined behavior.
+ self.ptr = self.ptr.wrapping_byte_add(len);
+ self.length -= len;
+ // SAFETY: The read above has initialized all bytes in `out`, and since `T` implements
+ // `FromBytes`, any bit-pattern is a valid value for this type.
+ Ok(unsafe { out.assume_init() })
+ }
+
/// Reads the entirety of the user slice, appending it to the end of the provided buffer.
///
/// Fails with `EFAULT` if the read happens on a bad address.
@@ -301,4 +338,34 @@ pub fn write_slice(&mut self, data: &[u8]) -> Result {
self.length -= len;
Ok(())
}
+
+ /// Writes the provided Rust value to this userspace pointer.
+ ///
+ /// Fails with `EFAULT` if the write encounters a page fault.
+ pub fn write<T: AsBytes>(&mut self, value: &T) -> Result {
+ let len = size_of::<T>();
+ if len > self.length {
+ return Err(EFAULT);
+ }
+ let Ok(len_ulong) = c_ulong::try_from(len) else {
+ return Err(EFAULT);
+ };
+ // SAFETY: The reference points to a value of type `T`, so it is valid for reading
+ // `size_of::<T>()` bytes.
+ //
+ // By using the _copy_to_user variant, we skip the check_object_size check that verifies the
+ // kernel pointer. This mirrors the logic on the C side that skips the check when the length
+ // is a compile-time constant.
+ let res = unsafe {
+ bindings::_copy_to_user(self.ptr, (value as *const T).cast::<c_void>(), len_ulong)
+ };
+ if res != 0 {
+ return Err(EFAULT);
+ }
+ // Since this is not a pointer to a valid object in our program, we cannot use `add`, which
+ // has C-style rules for defined behavior.
+ self.ptr = self.ptr.wrapping_byte_add(len);
+ self.length -= len;
+ Ok(())
+ }
}
--
2.44.0.683.g7961c838ac-goog
Adds a new struct called `Page` that wraps a pointer to `struct page`.
This struct is assumed to hold ownership over the page, so that Rust
code can allocate and manage pages directly.
The page type has various methods for reading and writing into the page.
These methods will temporarily map the page to allow the operation. All
of these methods use a helper that takes an offset and length, performs
bounds checks, and returns a pointer to the given offset in the page.
This patch only adds support for pages of order zero, as that is all
Rust Binder needs. However, it is written to make it easy to add support
for higher-order pages in the future. To do that, you would add a const
generic parameter to `Page` that specifies the order. Most of the
methods do not need to be adjusted, as the logic for dealing with
mapping multiple pages at once can be isolated to just the
`with_pointer_into_page` method.
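For illustration only, that future extension might look roughly like this
(a sketch within rust/kernel/page.rs, not part of this patch):

    use core::ptr::NonNull;

    // Sketch: the allocation order becomes a const generic parameter that
    // defaults to zero, and the covered size scales with it.
    pub struct Page<const ORDER: u32 = 0> {
        page: NonNull<bindings::page>,
    }

    impl<const ORDER: u32> Page<ORDER> {
        /// The number of bytes covered by this allocation.
        pub const fn size() -> usize {
            PAGE_SIZE << ORDER
        }
    }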
Rust Binder needs to manage pages directly as that is how transactions
are delivered: Each process has an mmap'd region for incoming
transactions. When an incoming transaction arrives, the Binder driver
will choose a region in the mmap, allocate and map the relevant pages
manually, and copy the incoming transaction directly into the page. This
architecture allows the driver to copy transactions directly from the
address space of one process to another, without an intermediate copy
to a kernel buffer.
This code is based on Wedson's page abstractions from the old rust
branch, but it has been modified by Alice by removing the incomplete
support for higher-order pages, by introducing the `with_*` helpers
to consolidate the bounds checking logic into a single place, and by
introducing gfp flags.
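As a usage illustration (the `receive_into_page` function below is
hypothetical, not part of this patch), copying an incoming buffer from
userspace into a freshly allocated page might look like this:

    use kernel::error::Result;
    use kernel::page::{flags, Page};
    use kernel::uaccess::UserSliceReader;

    fn receive_into_page(reader: &mut UserSliceReader, len: usize) -> Result<Page> {
        // Allocate a zeroed order-zero page that may live in high memory.
        let page = Page::alloc_page(
            flags::GFP_KERNEL | flags::__GFP_HIGHMEM | flags::__GFP_ZERO,
        )?;
        // SAFETY: The page was just allocated and has not been shared with any
        // other thread, so this write cannot race with other accesses to the
        // page. The bounds check inside `copy_from_user_slice` rejects
        // `len > PAGE_SIZE` with EINVAL.
        unsafe { page.copy_from_user_slice(reader, 0, len)? };
        Ok(page)
    }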
Co-developed-by: Wedson Almeida Filho <[email protected]>
Signed-off-by: Wedson Almeida Filho <[email protected]>
Signed-off-by: Alice Ryhl <[email protected]>
---
rust/bindings/bindings_helper.h | 2 +
rust/helpers.c | 20 ++++
rust/kernel/lib.rs | 1 +
rust/kernel/page.rs | 240 ++++++++++++++++++++++++++++++++++++++++
4 files changed, 263 insertions(+)
diff --git a/rust/bindings/bindings_helper.h b/rust/bindings/bindings_helper.h
index 65b98831b975..da1e97871419 100644
--- a/rust/bindings/bindings_helper.h
+++ b/rust/bindings/bindings_helper.h
@@ -20,5 +20,7 @@
/* `bindgen` gets confused at certain things. */
const size_t RUST_CONST_HELPER_ARCH_SLAB_MINALIGN = ARCH_SLAB_MINALIGN;
+const size_t RUST_CONST_HELPER_PAGE_SIZE = PAGE_SIZE;
const gfp_t RUST_CONST_HELPER_GFP_KERNEL = GFP_KERNEL;
const gfp_t RUST_CONST_HELPER___GFP_ZERO = __GFP_ZERO;
+const gfp_t RUST_CONST_HELPER___GFP_HIGHMEM = ___GFP_HIGHMEM;
diff --git a/rust/helpers.c b/rust/helpers.c
index 312b6fcb49d5..72361003ba91 100644
--- a/rust/helpers.c
+++ b/rust/helpers.c
@@ -25,6 +25,8 @@
#include <linux/build_bug.h>
#include <linux/err.h>
#include <linux/errname.h>
+#include <linux/gfp.h>
+#include <linux/highmem.h>
#include <linux/mutex.h>
#include <linux/refcount.h>
#include <linux/sched/signal.h>
@@ -93,6 +95,24 @@ int rust_helper_signal_pending(struct task_struct *t)
}
EXPORT_SYMBOL_GPL(rust_helper_signal_pending);
+struct page *rust_helper_alloc_pages(gfp_t gfp_mask, unsigned int order)
+{
+ return alloc_pages(gfp_mask, order);
+}
+EXPORT_SYMBOL_GPL(rust_helper_alloc_pages);
+
+void *rust_helper_kmap_local_page(struct page *page)
+{
+ return kmap_local_page(page);
+}
+EXPORT_SYMBOL_GPL(rust_helper_kmap_local_page);
+
+void rust_helper_kunmap_local(const void *addr)
+{
+ kunmap_local(addr);
+}
+EXPORT_SYMBOL_GPL(rust_helper_kunmap_local);
+
refcount_t rust_helper_REFCOUNT_INIT(int n)
{
return (refcount_t)REFCOUNT_INIT(n);
diff --git a/rust/kernel/lib.rs b/rust/kernel/lib.rs
index 37f84223b83f..667fc67fa24f 100644
--- a/rust/kernel/lib.rs
+++ b/rust/kernel/lib.rs
@@ -39,6 +39,7 @@
pub mod kunit;
#[cfg(CONFIG_NET)]
pub mod net;
+pub mod page;
pub mod prelude;
pub mod print;
mod static_assert;
diff --git a/rust/kernel/page.rs b/rust/kernel/page.rs
new file mode 100644
index 000000000000..f7f8870ddb66
--- /dev/null
+++ b/rust/kernel/page.rs
@@ -0,0 +1,240 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! Kernel page allocation and management.
+
+use crate::{bindings, error::code::*, error::Result, uaccess::UserSliceReader};
+use core::{
+ alloc::AllocError,
+ ptr::{self, NonNull},
+};
+
+/// A bitwise shift for the page size.
+pub const PAGE_SHIFT: usize = bindings::PAGE_SHIFT as usize;
+
+/// The number of bytes in a page.
+pub const PAGE_SIZE: usize = bindings::PAGE_SIZE;
+
+/// A bitmask that gives the page containing a given address.
+pub const PAGE_MASK: usize = !(PAGE_SIZE - 1);
+
+/// Flags for the "get free page" function that underlies all memory allocations.
+pub mod flags {
+ /// gfp flags.
+ #[allow(non_camel_case_types)]
+ pub type gfp_t = bindings::gfp_t;
+
+ /// `GFP_KERNEL` is typical for kernel-internal allocations. The caller requires `ZONE_NORMAL`
+ /// or a lower zone for direct access but can direct reclaim.
+ pub const GFP_KERNEL: gfp_t = bindings::GFP_KERNEL;
+ /// `GFP_ZERO` returns a zeroed page on success.
+ pub const __GFP_ZERO: gfp_t = bindings::__GFP_ZERO;
+ /// `GFP_HIGHMEM` indicates that the allocated memory may be located in high memory.
+ pub const __GFP_HIGHMEM: gfp_t = bindings::__GFP_HIGHMEM;
+}
+
+/// A pointer to a page that owns the page allocation.
+///
+/// # Invariants
+///
+/// The pointer is valid, and has ownership over the page.
+pub struct Page {
+ page: NonNull<bindings::page>,
+}
+
+// SAFETY: Pages have no logic that relies on them staying on a given thread, so moving them across
+// threads is safe.
+unsafe impl Send for Page {}
+
+// SAFETY: Pages have no logic that relies on them not being accessed concurrently, so accessing
+// them concurrently is safe.
+unsafe impl Sync for Page {}
+
+impl Page {
+ /// Allocates a new page.
+ pub fn alloc_page(gfp_flags: flags::gfp_t) -> Result<Self, AllocError> {
+ // SAFETY: Depending on the value of `gfp_flags`, this call may sleep. Other than that, it
+ // is always safe to call this method.
+ let page = unsafe { bindings::alloc_pages(gfp_flags, 0) };
+ let page = NonNull::new(page).ok_or(AllocError)?;
+ // INVARIANT: We just successfully allocated a page, so we now have ownership of the newly
+ // allocated page. We transfer that ownership to the new `Page` object.
+ Ok(Self { page })
+ }
+
+ /// Returns a raw pointer to the page.
+ pub fn as_ptr(&self) -> *mut bindings::page {
+ self.page.as_ptr()
+ }
+
+ /// Runs a piece of code with this page mapped to an address.
+ ///
+ /// The page is unmapped when this call returns.
+ ///
+ /// # Using the raw pointer
+ ///
+ /// It is up to the caller to use the provided raw pointer correctly. The pointer is valid for
+ /// `PAGE_SIZE` bytes and for the duration in which the closure is called. The pointer might
+ /// only be mapped on the current thread, and when that is the case, dereferencing it on other
+ /// threads is UB. Other than that, the usual rules for dereferencing a raw pointer apply: don't
+ /// cause data races, the memory may be uninitialized, and so on.
+ ///
+ /// If multiple threads map the same page at the same time, then the mappings may be at
+ /// different addresses. However, even if the addresses are different, the underlying memory is
+ /// still the same for these purposes (e.g., it's still a data race if they both write to the
+ /// same underlying byte at the same time).
+ fn with_page_mapped<T>(&self, f: impl FnOnce(*mut u8) -> T) -> T {
+ // SAFETY: `page` is valid due to the type invariants on `Page`.
+ let mapped_addr = unsafe { bindings::kmap_local_page(self.as_ptr()) };
+
+ let res = f(mapped_addr.cast());
+
+ // This unmaps the page mapped above.
+ //
+ // SAFETY: Since this API takes the user code as a closure, it can only be used in a manner
+ // where the pages are unmapped in reverse order. This is as required by `kunmap_local`.
+ //
+ // In other words, if this call to `kunmap_local` happens when a different page should be
+ // unmapped first, then there must necessarily be a call to `kmap_local_page` other than the
+ // call just above in `with_page_mapped` that made that possible. In this case, it is the
+ // unsafe block that wraps that other call that is incorrect.
+ unsafe { bindings::kunmap_local(mapped_addr) };
+
+ res
+ }
+
+ /// Runs a piece of code with a raw pointer to a slice of this page, with bounds checking.
+ ///
+ /// If `f` is called, then it will be called with a pointer that points at `off` bytes into the
+ /// page, and the pointer will be valid for at least `len` bytes. The pointer is only valid on
+ /// this task, as this method uses a local mapping.
+ ///
+ /// If `off` and `len` refer to a region outside of this page, then this method returns
+ /// `EINVAL` and does not call `f`.
+ ///
+ /// # Using the raw pointer
+ ///
+ /// It is up to the caller to use the provided raw pointer correctly. The pointer is valid for
+ /// `len` bytes and for the duration in which the closure is called. The pointer might only be
+ /// mapped on the current thread, and when that is the case, dereferencing it on other threads
+ /// is UB. Other than that, the usual rules for dereferencing a raw pointer apply: don't cause
+ /// data races, the memory may be uninitialized, and so on.
+ ///
+ /// If multiple threads map the same page at the same time, then the mappings may be at
+ /// different addresses. However, even if the addresses are different, the underlying memory is
+ /// still the same for these purposes (e.g., it's still a data race if they both write to the
+ /// same underlying byte at the same time).
+ fn with_pointer_into_page<T>(
+ &self,
+ off: usize,
+ len: usize,
+ f: impl FnOnce(*mut u8) -> Result<T>,
+ ) -> Result<T> {
+ let bounds_ok = off <= PAGE_SIZE && len <= PAGE_SIZE && (off + len) <= PAGE_SIZE;
+
+ if bounds_ok {
+ self.with_page_mapped(move |page_addr| {
+ // SAFETY: The `off` integer is at most `PAGE_SIZE`, so this pointer offset will
+ // result in a pointer that is in bounds or one off the end of the page.
+ f(unsafe { page_addr.add(off) })
+ })
+ } else {
+ Err(EINVAL)
+ }
+ }
+
+ /// Maps the page and reads from it into the given buffer.
+ ///
+ /// This method will perform bounds checks on the page offset. If `offset .. offset+len` goes
+ /// outside of the page, then this call returns `EINVAL`.
+ ///
+ /// # Safety
+ ///
+ /// * Callers must ensure that `dst` is valid for writing `len` bytes.
+ /// * Callers must ensure that this call does not race with a write to the same page that
+ /// overlaps with this read.
+ pub unsafe fn read_raw(&self, dst: *mut u8, offset: usize, len: usize) -> Result {
+ self.with_pointer_into_page(offset, len, move |src| {
+ // SAFETY: If `with_pointer_into_page` calls into this closure, then
+ // it has performed a bounds check and guarantees that `src` is
+ // valid for `len` bytes.
+ //
+ // The caller guarantees that there is no data race.
+ unsafe { ptr::copy_nonoverlapping(src, dst, len) };
+ Ok(())
+ })
+ }
+
+ /// Maps the page and writes into it from the given buffer.
+ ///
+ /// This method will perform bounds checks on the page offset. If `offset .. offset+len` goes
+ /// outside of the page, then this call returns `EINVAL`.
+ ///
+ /// # Safety
+ ///
+ /// * Callers must ensure that `src` is valid for reading `len` bytes.
+ /// * Callers must ensure that this call does not race with a read or write to the same page
+ /// that overlaps with this write.
+ pub unsafe fn write_raw(&self, src: *const u8, offset: usize, len: usize) -> Result {
+ self.with_pointer_into_page(offset, len, move |dst| {
+ // SAFETY: If `with_pointer_into_page` calls into this closure, then it has performed a
+ // bounds check and guarantees that `dst` is valid for `len` bytes.
+ //
+ // The caller guarantees that there is no data race.
+ unsafe { ptr::copy_nonoverlapping(src, dst, len) };
+ Ok(())
+ })
+ }
+
+ /// Maps the page and zeroes the given slice.
+ ///
+ /// This method will perform bounds checks on the page offset. If `offset .. offset+len` goes
+ /// outside of the page, then this call returns `EINVAL`.
+ ///
+ /// # Safety
+ ///
+ /// Callers must ensure that this call does not race with a read or write to the same page that
+ /// overlaps with this write.
+ pub unsafe fn fill_zero(&self, offset: usize, len: usize) -> Result {
+ self.with_pointer_into_page(offset, len, move |dst| {
+ // SAFETY: If `with_pointer_into_page` calls into this closure, then it has performed a
+ // bounds check and guarantees that `dst` is valid for `len` bytes.
+ //
+ // The caller guarantees that there is no data race.
+ unsafe { ptr::write_bytes(dst, 0u8, len) };
+ Ok(())
+ })
+ }
+
+ /// Copies data from userspace into this page.
+ ///
+ /// This method will perform bounds checks on the page offset. If `offset .. offset+len` goes
+ /// outside of the page, then this call returns `EINVAL`.
+ ///
+ /// Like the other `UserSliceReader` methods, data races are allowed on the userspace address.
+ /// However, they are not allowed on the page you are copying into.
+ ///
+ /// # Safety
+ ///
+ /// Callers must ensure that this call does not race with a read or write to the same page that
+ /// overlaps with this write.
+ pub unsafe fn copy_from_user_slice(
+ &self,
+ reader: &mut UserSliceReader,
+ offset: usize,
+ len: usize,
+ ) -> Result {
+ self.with_pointer_into_page(offset, len, move |dst| {
+ // SAFETY: If `with_pointer_into_page` calls into this closure, then it has performed a
+ // bounds check and guarantees that `dst` is valid for `len` bytes. Furthermore, we have
+ // exclusive access to the slice since the caller guarantees that there are no races.
+ reader.read_raw(unsafe { core::slice::from_raw_parts_mut(dst.cast(), len) })
+ })
+ }
+}
+
+impl Drop for Page {
+ fn drop(&mut self) {
+ // SAFETY: By the type invariants, we have ownership of the page and can free it.
+ unsafe { bindings::__free_pages(self.page.as_ptr(), 0) };
+ }
+}
--
2.44.0.683.g7961c838ac-goog
From: Wedson Almeida Filho <[email protected]>
A pointer to an area in userspace memory, which can be either read-only
or read-write.
All methods on this struct are safe: attempting to read or write on bad
addresses (either out of the bound of the slice or unmapped addresses)
will return `EFAULT`. Concurrent access, *including data races to/from
userspace memory*, is permitted, because fundamentally another userspace
thread/process could always be modifying memory at the same time (in the
same way that userspace Rust's `std::io` permits data races with the
contents of files on disk). In the presence of a race, the exact byte
values read/written are unspecified but the operation is well-defined.
Kernelspace code should validate its copy of data after completing a
read, and not expect that multiple reads of the same address will return
the same value.
These APIs are designed to make it difficult to accidentally write
TOCTOU bugs. Every time you read from a memory location, the pointer is
advanced by the length so that you cannot use that reader to read the
same memory location twice. Preventing double-fetches avoids TOCTOU
bugs. This is accomplished by taking `self` by value to prevent
obtaining multiple readers on a given `UserSlicePtr`, and the readers
only permitting forward reads. If double-fetching a memory location is
necessary for some reason, then that is done by creating multiple
readers to the same memory location.
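For illustration (the `peek_then_read` function below is hypothetical, not
part of this patch), a deliberate double-fetch via `clone_reader` could look
like this:

    use kernel::error::Result;
    use kernel::uaccess::UserSlice;

    fn peek_then_read(user: UserSlice) -> Result<([u8; 4], [u8; 4])> {
        let mut reader = user.reader();
        // The clone starts at the same position but advances independently.
        let mut peek = reader.clone_reader();

        let mut first = [0u8; 4];
        peek.read_slice(&mut first)?;

        // Second fetch of the same four bytes. Userspace may have changed them
        // in the meantime, so the two copies must be treated as independent
        // snapshots and validated separately.
        let mut second = [0u8; 4];
        reader.read_slice(&mut second)?;

        Ok((first, second))
    }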
Constructing a `UserSlicePtr` performs no checks on the provided
address and length, it can safely be constructed inside a kernel thread
with no current userspace process. Reads and writes wrap the kernel APIs
`copy_from_user` and `copy_to_user`, which check the memory map of the
current process and enforce that the address range is within the user
range (no additional calls to `access_ok` are needed).
This code is based on something that was originally written by Wedson on
the old rust branch. It was modified by Alice by removing the
`IoBufferReader` and `IoBufferWriter` traits, and various other changes.
Signed-off-by: Wedson Almeida Filho <[email protected]>
Co-developed-by: Alice Ryhl <[email protected]>
Signed-off-by: Alice Ryhl <[email protected]>
---
rust/helpers.c | 14 +++
rust/kernel/lib.rs | 1 +
rust/kernel/uaccess.rs | 304 +++++++++++++++++++++++++++++++++++++++++++++++++
3 files changed, 319 insertions(+)
diff --git a/rust/helpers.c b/rust/helpers.c
index 70e59efd92bc..312b6fcb49d5 100644
--- a/rust/helpers.c
+++ b/rust/helpers.c
@@ -38,6 +38,20 @@ __noreturn void rust_helper_BUG(void)
}
EXPORT_SYMBOL_GPL(rust_helper_BUG);
+unsigned long rust_helper_copy_from_user(void *to, const void __user *from,
+ unsigned long n)
+{
+ return copy_from_user(to, from, n);
+}
+EXPORT_SYMBOL_GPL(rust_helper_copy_from_user);
+
+unsigned long rust_helper_copy_to_user(void __user *to, const void *from,
+ unsigned long n)
+{
+ return copy_to_user(to, from, n);
+}
+EXPORT_SYMBOL_GPL(rust_helper_copy_to_user);
+
void rust_helper_mutex_lock(struct mutex *lock)
{
mutex_lock(lock);
diff --git a/rust/kernel/lib.rs b/rust/kernel/lib.rs
index be68d5e567b1..37f84223b83f 100644
--- a/rust/kernel/lib.rs
+++ b/rust/kernel/lib.rs
@@ -49,6 +49,7 @@
pub mod task;
pub mod time;
pub mod types;
+pub mod uaccess;
pub mod workqueue;
#[doc(hidden)]
diff --git a/rust/kernel/uaccess.rs b/rust/kernel/uaccess.rs
new file mode 100644
index 000000000000..c97029cdeba1
--- /dev/null
+++ b/rust/kernel/uaccess.rs
@@ -0,0 +1,304 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! Slices to user space memory regions.
+//!
+//! C header: [`include/linux/uaccess.h`](srctree/include/linux/uaccess.h)
+
+use crate::{bindings, error::code::*, error::Result};
+use alloc::vec::Vec;
+use core::ffi::{c_ulong, c_void};
+use core::mem::MaybeUninit;
+
+/// A pointer to an area in userspace memory, which can be either read-only or read-write.
+///
+/// All methods on this struct are safe: attempting to read or write on bad addresses (either out of
+/// the bound of the slice or unmapped addresses) will return `EFAULT`. Concurrent access,
+/// *including data races to/from userspace memory*, is permitted, because fundamentally another
+/// userspace thread/process could always be modifying memory at the same time (in the same way that
+/// userspace Rust's [`std::io`] permits data races with the contents of files on disk). In the
+/// presence of a race, the exact byte values read/written are unspecified but the operation is
+/// well-defined. Kernelspace code should validate its copy of data after completing a read, and not
+/// expect that multiple reads of the same address will return the same value.
+///
+/// These APIs are designed to make it difficult to accidentally write TOCTOU (time-of-check to
+/// time-of-use) bugs. Every time a memory location is read, the reader's position is advanced by
+/// the read length and the next read will start from there. This helps prevent accidentally reading
+/// the same location twice and causing a TOCTOU bug.
+///
+/// Creating a [`UserSliceReader`] and/or [`UserSliceWriter`] consumes the `UserSlice`, helping
+/// ensure that there aren't multiple readers or writers to the same location.
+///
+/// If double-fetching a memory location is necessary for some reason, then that is done by creating
+/// multiple readers to the same memory location, e.g. using [`clone_reader`].
+///
+/// # Examples
+///
+/// Takes a region of userspace memory from the current process and modifies it by adding one to
+/// every byte in the region.
+///
+/// ```no_run
+/// use alloc::vec::Vec;
+/// use core::ffi::c_void;
+/// use kernel::error::Result;
+/// use kernel::uaccess::UserSlice;
+///
+/// fn bytes_add_one(uptr: *mut c_void, len: usize) -> Result<()> {
+/// let (read, mut write) = UserSlice::new(uptr, len).reader_writer();
+///
+/// let mut buf = Vec::new();
+/// read.read_all(&mut buf)?;
+///
+/// for b in &mut buf {
+/// *b = b.wrapping_add(1);
+/// }
+///
+/// write.write_slice(&buf)?;
+/// Ok(())
+/// }
+/// ```
+///
+/// Example illustrating a TOCTOU (time-of-check to time-of-use) bug.
+///
+/// ```no_run
+/// use alloc::vec::Vec;
+/// use core::ffi::c_void;
+/// use kernel::error::{code::EINVAL, Result};
+/// use kernel::uaccess::UserSlice;
+///
+/// /// Returns whether the data in this region is valid.
+/// fn is_valid(uptr: *mut c_void, len: usize) -> Result<bool> {
+/// let read = UserSlice::new(uptr, len).reader();
+///
+/// let mut buf = Vec::new();
+/// read.read_all(&mut buf)?;
+///
+/// todo!()
+/// }
+///
+/// /// Returns the bytes behind this user pointer if they are valid.
+/// fn get_bytes_if_valid(uptr: *mut c_void, len: usize) -> Result<Vec<u8>> {
+/// if !is_valid(uptr, len)? {
+/// return Err(EINVAL);
+/// }
+///
+/// let read = UserSlice::new(uptr, len).reader();
+///
+/// let mut buf = Vec::new();
+/// read.read_all(&mut buf)?;
+///
+/// // THIS IS A BUG! The bytes could have changed since we checked them.
+/// //
+/// // To avoid this kind of bug, don't call `UserSlice::new` multiple
+/// // times with the same address.
+/// Ok(buf)
+/// }
+/// ```
+///
+/// [`std::io`]: https://doc.rust-lang.org/std/io/index.html
+/// [`clone_reader`]: UserSliceReader::clone_reader
+pub struct UserSlice {
+ ptr: *mut c_void,
+ length: usize,
+}
+
+impl UserSlice {
+ /// Constructs a user slice from a raw pointer and a length in bytes.
+ ///
+ /// Constructing a [`UserSlice`] performs no checks on the provided address and length, it can
+ /// safely be constructed inside a kernel thread with no current userspace process. Reads and
+ /// writes wrap the kernel APIs `copy_from_user` and `copy_to_user`, which check the memory map
+ /// of the current process and enforce that the address range is within the user range (no
+ /// additional calls to `access_ok` are needed).
+ ///
+ /// Callers must be careful to avoid time-of-check-time-of-use (TOCTOU) issues. The simplest way
+ /// is to create a single instance of [`UserSlice`] per user memory block as it reads each byte
+ /// at most once.
+ pub fn new(ptr: *mut c_void, length: usize) -> Self {
+ UserSlice { ptr, length }
+ }
+
+ /// Reads the entirety of the user slice, appending it to the end of the provided buffer.
+ ///
+ /// Fails with `EFAULT` if the read happens on a bad address.
+ pub fn read_all(self, buf: &mut Vec<u8>) -> Result {
+ self.reader().read_all(buf)
+ }
+
+ /// Constructs a [`UserSliceReader`].
+ pub fn reader(self) -> UserSliceReader {
+ UserSliceReader {
+ ptr: self.ptr,
+ length: self.length,
+ }
+ }
+
+ /// Constructs a [`UserSliceWriter`].
+ pub fn writer(self) -> UserSliceWriter {
+ UserSliceWriter {
+ ptr: self.ptr,
+ length: self.length,
+ }
+ }
+
+ /// Constructs both a [`UserSliceReader`] and a [`UserSliceWriter`].
+ ///
+ /// Usually when this is used, you will first read the data and then overwrite it.
+ pub fn reader_writer(self) -> (UserSliceReader, UserSliceWriter) {
+ (
+ UserSliceReader {
+ ptr: self.ptr,
+ length: self.length,
+ },
+ UserSliceWriter {
+ ptr: self.ptr,
+ length: self.length,
+ },
+ )
+ }
+}
+
+/// A reader for [`UserSlice`].
+///
+/// Used to incrementally read from the user slice.
+pub struct UserSliceReader {
+ ptr: *mut c_void,
+ length: usize,
+}
+
+impl UserSliceReader {
+ /// Skip the provided number of bytes.
+ ///
+ /// Returns an error if skipping more than the length of the buffer.
+ pub fn skip(&mut self, num_skip: usize) -> Result {
+ // Update `self.length` first since that's the fallible part of this operation.
+ self.length = self.length.checked_sub(num_skip).ok_or(EFAULT)?;
+ self.ptr = self.ptr.wrapping_byte_add(num_skip);
+ Ok(())
+ }
+
+ /// Create a reader that can access the same range of data.
+ ///
+ /// Reading from the clone does not advance the current reader.
+ ///
+ /// The caller should take care to not introduce TOCTOU issues, as described in the
+ /// documentation for [`UserSlice`].
+ pub fn clone_reader(&self) -> UserSliceReader {
+ UserSliceReader {
+ ptr: self.ptr,
+ length: self.length,
+ }
+ }
+
+ /// Returns the number of bytes left to be read from this reader.
+ ///
+ /// Note that even reading less than this number of bytes may fail.
+ pub fn len(&self) -> usize {
+ self.length
+ }
+
+ /// Returns `true` if no data is available in the io buffer.
+ pub fn is_empty(&self) -> bool {
+ self.length == 0
+ }
+
+ /// Reads raw data from the user slice into a kernel buffer.
+ ///
+ /// After a successful call to this method, all bytes in `out` are initialized.
+ ///
+ /// Fails with `EFAULT` if the read happens on a bad address.
+ pub fn read_raw(&mut self, out: &mut [MaybeUninit<u8>]) -> Result {
+ let len = out.len();
+ let out_ptr = out.as_mut_ptr().cast::<c_void>();
+ if len > self.length {
+ return Err(EFAULT);
+ }
+ let Ok(len_ulong) = c_ulong::try_from(len) else {
+ return Err(EFAULT);
+ };
+ // SAFETY: `out_ptr` points into a mutable slice of length `len_ulong`, so we may write
+ // that many bytes to it.
+ let res = unsafe { bindings::copy_from_user(out_ptr, self.ptr, len_ulong) };
+ if res != 0 {
+ return Err(EFAULT);
+ }
+ // Userspace pointers are not directly dereferencable by the kernel, so we cannot use `add`,
+ // which has C-style rules for defined behavior.
+ self.ptr = self.ptr.wrapping_byte_add(len);
+ self.length -= len;
+ Ok(())
+ }
+
+ /// Reads raw data from the user slice into a kernel buffer.
+ ///
+ /// Fails with `EFAULT` if the read happens on a bad address.
+ pub fn read_slice(&mut self, out: &mut [u8]) -> Result {
+ // SAFETY: The types are compatible and `read_raw` doesn't write uninitialized bytes to
+ // `out`.
+ let out = unsafe { &mut *(out as *mut [u8] as *mut [MaybeUninit<u8>]) };
+ self.read_raw(out)
+ }
+
+ /// Reads the entirety of the user slice, appending it to the end of the provided buffer.
+ ///
+ /// Fails with `EFAULT` if the read happens on a bad address.
+ pub fn read_all(mut self, buf: &mut Vec<u8>) -> Result {
+ let len = self.length;
+ buf.try_reserve(len)?;
+
+ // The call to `try_reserve` was successful, so the spare capacity is at least `len` bytes
+ // long.
+ self.read_raw(&mut buf.spare_capacity_mut()[..len])?;
+
+ // SAFETY: Since the call to `read_raw` was successful, the next `len` bytes of the
+ // vector have been initialized.
+ unsafe { buf.set_len(buf.len() + len) };
+ Ok(())
+ }
+}
+
+/// A writer for [`UserSlice`].
+///
+/// Used to incrementally write into the user slice.
+pub struct UserSliceWriter {
+ ptr: *mut c_void,
+ length: usize,
+}
+
+impl UserSliceWriter {
+ /// Returns the amount of space remaining in this buffer.
+ ///
+ /// Note that even writing less than this number of bytes may fail.
+ pub fn len(&self) -> usize {
+ self.length
+ }
+
+ /// Returns `true` if no more data can be written to this buffer.
+ pub fn is_empty(&self) -> bool {
+ self.length == 0
+ }
+
+ /// Writes raw data to this user pointer from a kernel buffer.
+ ///
+ /// Fails with `EFAULT` if the write happens on a bad address.
+ pub fn write_slice(&mut self, data: &[u8]) -> Result {
+ let len = data.len();
+ let data_ptr = data.as_ptr().cast::<c_void>();
+ if len > self.length {
+ return Err(EFAULT);
+ }
+ let Ok(len_ulong) = c_ulong::try_from(len) else {
+ return Err(EFAULT);
+ };
+ // SAFETY: `data_ptr` points into an immutable slice of length `len_ulong`, so we may read
+ // that many bytes from it.
+ let res = unsafe { bindings::copy_to_user(self.ptr, data_ptr, len_ulong) };
+ if res != 0 {
+ return Err(EFAULT);
+ }
+ // Userspace pointers are not directly dereferencable by the kernel, so
+ // we cannot use `add`, which has C-style rules for defined behavior.
+ self.ptr = self.ptr.wrapping_byte_add(len);
+ self.length -= len;
+ Ok(())
+ }
+}
--
2.44.0.683.g7961c838ac-goog
Alice Ryhl <[email protected]> writes:
> Adds a new struct called `Page` that wraps a pointer to `struct page`.
> This struct is assumed to hold ownership over the page, so that Rust
> code can allocate and manage pages directly.
>
> The page type has various methods for reading and writing into the page.
> These methods will temporarily map the page to allow the operation. All
> of these methods use a helper that takes an offset and length, performs
> bounds checks, and returns a pointer to the given offset in the page.
>
> This patch only adds support for pages of order zero, as that is all
> Rust Binder needs. However, it is written to make it easy to add support
> for higher-order pages in the future. To do that, you would add a const
> generic parameter to `Page` that specifies the order. Most of the
> methods do not need to be adjusted, as the logic for dealing with
> mapping multiple pages at once can be isolated to just the
> `with_pointer_into_page` method.
>
> Rust Binder needs to manage pages directly as that is how transactions
> are delivered: Each process has an mmap'd region for incoming
> transactions. When an incoming transaction arrives, the Binder driver
> will choose a region in the mmap, allocate and map the relevant pages
> manually, and copy the incoming transaction directly into the page. This
> architecture allows the driver to copy transactions directly from the
> address space of one process to another, without an intermediate copy
> to a kernel buffer.
>
> This code is based on Wedson's page abstractions from the old rust
> branch, but it has been modified by Alice by removing the incomplete
> support for higher-order pages, by introducing the `with_*` helpers
> to consolidate the bounds checking logic into a single place, and by
> introducing gfp flags.
>
> Co-developed-by: Wedson Almeida Filho <[email protected]>
> Signed-off-by: Wedson Almeida Filho <[email protected]>
> Signed-off-by: Alice Ryhl <[email protected]>
> ---
Reviewed-by: Andreas Hindborg <[email protected]>
On 15.04.24 09:13, Alice Ryhl wrote:
> +impl UserSlice {
> + /// Constructs a user slice from a raw pointer and a length in bytes.
> + ///
> + /// Constructing a [`UserSlice`] performs no checks on the provided address and length, it can
> + /// safely be constructed inside a kernel thread with no current userspace process. Reads and
> + /// writes wrap the kernel APIs `copy_from_user` and `copy_to_user`, which check the memory map
> + /// of the current process and enforce that the address range is within the user range (no
> + /// additional calls to `access_ok` are needed).
> + ///
> + /// Callers must be careful to avoid time-of-check-time-of-use (TOCTOU) issues. The simplest way
> + /// is to create a single instance of [`UserSlice`] per user memory block as it reads each byte
> + /// at most once.
> + pub fn new(ptr: *mut c_void, length: usize) -> Self {
What would happen if I call this with a kernel pointer and then
read/write to it? For example
let mut arr = [MaybeUninit::uninit(); 64];
let ptr: *mut [MaybeUninit<u8>] = &mut arr;
let ptr = ptr.cast::<c_void>();
let slice = UserSlice::new(ptr, 64);
let (mut r, mut w) = slice.reader_writer();
r.read_raw(&mut arr)?;
// SAFETY: `arr` was initialized above.
w.write_slice(unsafe { MaybeUninit::slice_assume_init_ref(&arr) })?;
I think this would violate the exclusivity of `&mut` without any
`unsafe` code. (the `unsafe` block at the end cannot possibly be wrong)
> + UserSlice { ptr, length }
> + }
[...]
> + /// Returns `true` if no data is available in the io buffer.
> + pub fn is_empty(&self) -> bool {
> + self.length == 0
> + }
> +
> + /// Reads raw data from the user slice into a kernel buffer.
> + ///
> + /// After a successful call to this method, all bytes in `out` are initialized.
I think we should put things like this into a `# Guarantees` section.
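For illustration, that could look something like this (keeping the current
wording, just moved under a new section):

    /// Reads raw data from the user slice into a kernel buffer.
    ///
    /// Fails with `EFAULT` if the read happens on a bad address.
    ///
    /// # Guarantees
    ///
    /// After a successful call to this method, all bytes in `out` are initialized.
    pub fn read_raw(&mut self, out: &mut [MaybeUninit<u8>]) -> Result {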
--
Cheers,
Benno
> + ///
> + /// Fails with `EFAULT` if the read happens on a bad address.
> + pub fn read_raw(&mut self, out: &mut [MaybeUninit<u8>]) -> Result {
> + let len = out.len();
> + let out_ptr = out.as_mut_ptr().cast::<c_void>();
> + if len > self.length {
> + return Err(EFAULT);
> + }
> + let Ok(len_ulong) = c_ulong::try_from(len) else {
> + return Err(EFAULT);
> + };
> + // SAFETY: `out_ptr` points into a mutable slice of length `len_ulong`, so we may write
> + // that many bytes to it.
> + let res = unsafe { bindings::copy_from_user(out_ptr, self.ptr, len_ulong) };
> + if res != 0 {
> + return Err(EFAULT);
> + }
> + // Userspace pointers are not directly dereferencable by the kernel, so we cannot use `add`,
> + // which has C-style rules for defined behavior.
> + self.ptr = self.ptr.wrapping_byte_add(len);
> + self.length -= len;
> + Ok(())
> + }
On Mon, Apr 15, 2024 at 11:37 AM Benno Lossin <[email protected]> wrote:
>
> On 15.04.24 09:13, Alice Ryhl wrote:
> > +impl UserSlice {
> > + /// Constructs a user slice from a raw pointer and a length in bytes.
> > + ///
> > + /// Constructing a [`UserSlice`] performs no checks on the provided address and length, it can
> > + /// safely be constructed inside a kernel thread with no current userspace process. Reads and
> > + /// writes wrap the kernel APIs `copy_from_user` and `copy_to_user`, which check the memory map
> > + /// of the current process and enforce that the address range is within the user range (no
> > + /// additional calls to `access_ok` are needed).
> > + ///
> > + /// Callers must be careful to avoid time-of-check-time-of-use (TOCTOU) issues. The simplest way
> > + /// is to create a single instance of [`UserSlice`] per user memory block as it reads each byte
> > + /// at most once.
> > + pub fn new(ptr: *mut c_void, length: usize) -> Self {
>
> What would happen if I call this with a kernel pointer and then
> read/write to it? For example
>
> let mut arr = [MaybeUninit::uninit(); 64];
> let ptr: *mut [MaybeUninit<u8>] = &mut arr;
> let ptr = ptr.cast::<c_void>();
>
> let slice = UserSlice::new(ptr, 64);
> let (mut r, mut w) = slice.reader_writer();
>
> r.read_raw(&mut arr)?;
> // SAFETY: `arr` was initialized above.
> w.write_slice(unsafe { MaybeUninit::slice_assume_init_ref(&arr) })?;
>
> I think this would violate the exclusivity of `&mut` without any
> `unsafe` code. (the `unsafe` block at the end cannot possibly be wrong)
This will fail with an EFAULT error. There is a check on the C side
that verifies that the address is in userspace. (The access_ok call.)
Alice
On 15.04.24 09:13, Alice Ryhl wrote:
> Adds a new struct called `Page` that wraps a pointer to `struct page`.
> This struct is assumed to hold ownership over the page, so that Rust
> code can allocate and manage pages directly.
>
> The page type has various methods for reading and writing into the page.
> These methods will temporarily map the page to allow the operation. All
> of these methods use a helper that takes an offset and length, performs
> bounds checks, and returns a pointer to the given offset in the page.
>
> This patch only adds support for pages of order zero, as that is all
> Rust Binder needs. However, it is written to make it easy to add support
> for higher-order pages in the future. To do that, you would add a const
> generic parameter to `Page` that specifies the order. Most of the
> methods do not need to be adjusted, as the logic for dealing with
> mapping multiple pages at once can be isolated to just the
> `with_pointer_into_page` method.
>
> Rust Binder needs to manage pages directly as that is how transactions
> are delivered: Each process has an mmap'd region for incoming
> transactions. When an incoming transaction arrives, the Binder driver
> will choose a region in the mmap, allocate and map the relevant pages
> manually, and copy the incoming transaction directly into the page. This
> architecture allows the driver to copy transactions directly from the
> address space of one process to another, without an intermediate copy
> to a kernel buffer.
>
> This code is based on Wedson's page abstractions from the old rust
> branch, but it has been modified by Alice by removing the incomplete
> support for higher-order pages, by introducing the `with_*` helpers
> to consolidate the bounds checking logic into a single place, and by
> introducing gfp flags.
>
> Co-developed-by: Wedson Almeida Filho <[email protected]>
> Signed-off-by: Wedson Almeida Filho <[email protected]>
> Signed-off-by: Alice Ryhl <[email protected]>
> ---
> rust/bindings/bindings_helper.h | 2 +
> rust/helpers.c | 20 ++++
> rust/kernel/lib.rs | 1 +
> rust/kernel/page.rs | 240 ++++++++++++++++++++++++++++++++++++++++
> 4 files changed, 263 insertions(+)
Reviewed-by: Benno Lossin <[email protected]>
--
Cheers,
Benno
On 15.04.24 11:44, Alice Ryhl wrote:
> On Mon, Apr 15, 2024 at 11:37 AM Benno Lossin <[email protected]> wrote:
>>
>> On 15.04.24 09:13, Alice Ryhl wrote:
>>> +impl UserSlice {
>>> + /// Constructs a user slice from a raw pointer and a length in bytes.
>>> + ///
>>> + /// Constructing a [`UserSlice`] performs no checks on the provided address and length, it can
>>> + /// safely be constructed inside a kernel thread with no current userspace process. Reads and
>>> + /// writes wrap the kernel APIs `copy_from_user` and `copy_to_user`, which check the memory map
>>> + /// of the current process and enforce that the address range is within the user range (no
>>> + /// additional calls to `access_ok` are needed).
>>> + ///
>>> + /// Callers must be careful to avoid time-of-check-time-of-use (TOCTOU) issues. The simplest way
>>> + /// is to create a single instance of [`UserSlice`] per user memory block as it reads each byte
>>> + /// at most once.
>>> + pub fn new(ptr: *mut c_void, length: usize) -> Self {
>>
>> What would happen if I call this with a kernel pointer and then
>> read/write to it? For example
>>
>> let mut arr = [MaybeUninit::uninit(); 64];
>> let ptr: *mut [MaybeUninit<u8>] = &mut arr;
>> let ptr = ptr.cast::<c_void>();
>>
>> let slice = UserSlice::new(ptr, 64);
>> let (mut r, mut w) = slice.reader_writer();
>>
>> r.read_raw(&mut arr)?;
>> // SAFETY: `arr` was initialized above.
>> w.write_slice(unsafe { MaybeUninit::slice_assume_init_ref(&arr) })?;
>>
>> I think this would violate the exclusivity of `&mut` without any
>> `unsafe` code. (the `unsafe` block at the end cannot possibly be wrong)
>
> This will fail with an EFAULT error. There is a check on the C side
> that verifies that the address is in userspace. (The access_ok call.)
I see, that makes a lot of sense.
Regardless of whether you fix the nit about the guarantees section:
Reviewed-by: Benno Lossin <[email protected]>
--
Cheers,
Benno
On Mon, Apr 15, 2024 at 07:13:53AM +0000, Alice Ryhl wrote:
> From: Wedson Almeida Filho <[email protected]>
>
> A pointer to an area in userspace memory, which can be either read-only
> or read-write.
>
> All methods on this struct are safe: attempting to read or write on bad
> addresses (either out of the bound of the slice or unmapped addresses)
> will return `EFAULT`. Concurrent access, *including data races to/from
> userspace memory*, is permitted, because fundamentally another userspace
> thread/process could always be modifying memory at the same time (in the
> same way that userspace Rust's `std::io` permits data races with the
> contents of files on disk). In the presence of a race, the exact byte
> values read/written are unspecified but the operation is well-defined.
> Kernelspace code should validate its copy of data after completing a
> read, and not expect that multiple reads of the same address will return
> the same value.
>
> These APIs are designed to make it difficult to accidentally write
> TOCTOU bugs. Every time you read from a memory location, the pointer is
> advanced by the length so that you cannot use that reader to read the
> same memory location twice. Preventing double-fetches avoids TOCTOU
> bugs. This is accomplished by taking `self` by value to prevent
> obtaining multiple readers on a given `UserSlicePtr`, and the readers
> only permitting forward reads. If double-fetching a memory location is
> necessary for some reason, then that is done by creating multiple
> readers to the same memory location.
>
> Constructing a `UserSlicePtr` performs no checks on the provided
> address and length, it can safely be constructed inside a kernel thread
> with no current userspace process. Reads and writes wrap the kernel APIs
> `copy_from_user` and `copy_to_user`, which check the memory map of the
> current process and enforce that the address range is within the user
> range (no additional calls to `access_ok` are needed).
>
> This code is based on something that was originally written by Wedson on
> the old rust branch. It was modified by Alice by removing the
> `IoBufferReader` and `IoBufferWriter` traits, and various other changes.
>
> Signed-off-by: Wedson Almeida Filho <[email protected]>
> Co-developed-by: Alice Ryhl <[email protected]>
> Signed-off-by: Alice Ryhl <[email protected]>
Thanks!
Reviewed-by: Boqun Feng <[email protected]>
Two small nits below..
> ---
> rust/helpers.c | 14 +++
> rust/kernel/lib.rs | 1 +
> rust/kernel/uaccess.rs | 304 +++++++++++++++++++++++++++++++++++++++++++++++++
> 3 files changed, 319 insertions(+)
>
[...]
> + /// Reads raw data from the user slice into a kernel buffer.
> + ///
> + /// Fails with `EFAULT` if the read happens on a bad address.
... we probably want to mention that `out` may get modified even in
failure cases.
> + pub fn read_slice(&mut self, out: &mut [u8]) -> Result {
> + // SAFETY: The types are compatible and `read_raw` doesn't write uninitialized bytes to
> + // `out`.
> + let out = unsafe { &mut *(out as *mut [u8] as *mut [MaybeUninit<u8>]) };
> + self.read_raw(out)
> + }
> +
[...]
> +
> +impl UserSliceWriter {
[...]
> +
> + /// Writes raw data to this user pointer from a kernel buffer.
> + ///
> + /// Fails with `EFAULT` if the write happens on a bad address.
Same here, probably mention that: the userspace memory may be modified
even in failure cases.
Anyway, they are not correctness critical, so we can do these in later
patches.
Regards,
Boqun
> + pub fn write_slice(&mut self, data: &[u8]) -> Result {
> + let len = data.len();
> + let data_ptr = data.as_ptr().cast::<c_void>();
> + if len > self.length {
> + return Err(EFAULT);
> + }
> + let Ok(len_ulong) = c_ulong::try_from(len) else {
> + return Err(EFAULT);
> + };
> + // SAFETY: `data_ptr` points into an immutable slice of length `len_ulong`, so we may read
> + // that many bytes from it.
> + let res = unsafe { bindings::copy_to_user(self.ptr, data_ptr, len_ulong) };
> + if res != 0 {
> + return Err(EFAULT);
> + }
> + // Userspace pointers are not directly dereferenceable by the kernel, so
> + // we cannot use `add`, which has C-style rules for defined behavior.
> + self.ptr = self.ptr.wrapping_byte_add(len);
> + self.length -= len;
> + Ok(())
> + }
> +}
>
> --
> 2.44.0.683.g7961c838ac-goog
>
>
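As an aside, a minimal sketch of how the two raw byte paths reviewed above fit together; `bounce` is a hypothetical helper, and, as Boqun notes, both the kernel buffer and the userspace memory may have been partially modified when either call fails.

```rust
use kernel::error::Result;
use kernel::uaccess::UserSlice;

// Read up to 128 bytes from the user range and write them straight back,
// e.g. to probe that the range is mapped read-write.
fn bounce(user_slice: UserSlice, len: usize) -> Result {
    let mut buf = [0u8; 128];
    let len = len.min(buf.len());
    let (mut reader, mut writer) = user_slice.reader_writer();
    // Fails with EFAULT if `len` exceeds the slice or the pages are unreadable.
    reader.read_slice(&mut buf[..len])?;
    // Fails with EFAULT if the pages are not writable.
    writer.write_slice(&buf[..len])
}
```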
On Mon, Apr 15, 2024 at 07:13:54AM +0000, Alice Ryhl wrote:
> From: Arnd Bergmann <[email protected]>
>
> Rust code needs to be able to access _copy_from_user and _copy_to_user
> so that it can skip the check_copy_size check in cases where the length
> is known at compile-time, mirroring the logic for when C code will skip
> check_copy_size. To do this, we ensure that exported versions of these
> methods are available when CONFIG_RUST is enabled.
>
> Alice has verified that this patch passes the CONFIG_TEST_USER_COPY test
> on x86 using the Android cuttlefish emulator.
>
> Signed-off-by: Arnd Bergmann <[email protected]>
Thanks for the updates and the comment on testing. :)
Reviewed-by: Kees Cook <[email protected]>
--
Kees Cook
On Mon, Apr 15, 2024 at 3:14 AM Alice Ryhl <[email protected]> wrote:
>
> From: Wedson Almeida Filho <[email protected]>
>
> A pointer to an area in userspace memory, which can be either read-only
> or read-write.
>
> All methods on this struct are safe: attempting to read or write on bad
> addresses (either out of the bound of the slice or unmapped addresses)
> will return `EFAULT`. Concurrent access, *including data races to/from
> userspace memory*, is permitted, because fundamentally another userspace
> thread/process could always be modifying memory at the same time (in the
> same way that userspace Rust's `std::io` permits data races with the
> contents of files on disk). In the presence of a race, the exact byte
> values read/written are unspecified but the operation is well-defined.
> Kernelspace code should validate its copy of data after completing a
> read, and not expect that multiple reads of the same address will return
> the same value.
>
> These APIs are designed to make it difficult to accidentally write
> TOCTOU bugs. Every time you read from a memory location, the pointer is
> advanced by the length so that you cannot use that reader to read the
> same memory location twice. Preventing double-fetches avoids TOCTOU
> bugs. This is accomplished by taking `self` by value to prevent
> obtaining multiple readers on a given `UserSlicePtr`, and the readers
> only permitting forward reads. If double-fetching a memory location is
> necessary for some reason, then that is done by creating multiple
> readers to the same memory location.
>
> Constructing a `UserSlicePtr` performs no checks on the provided
> address and length, it can safely be constructed inside a kernel thread
> with no current userspace process. Reads and writes wrap the kernel APIs
> `copy_from_user` and `copy_to_user`, which check the memory map of the
> current process and enforce that the address range is within the user
> range (no additional calls to `access_ok` are needed).
>
> This code is based on something that was originally written by Wedson on
> the old rust branch. It was modified by Alice by removing the
> `IoBufferReader` and `IoBufferWriter` traits, and various other changes.
>
> Signed-off-by: Wedson Almeida Filho <[email protected]>
> Co-developed-by: Alice Ryhl <[email protected]>
> Signed-off-by: Alice Ryhl <[email protected]>
Reviewed-by: Trevor Gross <[email protected]>
I left some suggestions for documentation improvements and one
question, but mostly LGTM.
> ---
> rust/helpers.c | 14 +++
> rust/kernel/lib.rs | 1 +
> rust/kernel/uaccess.rs | 304 +++++++++++++++++++++++++++++++++++++++++++++++++
> 3 files changed, 319 insertions(+)
>
> diff --git a/rust/kernel/lib.rs b/rust/kernel/lib.rs
> index be68d5e567b1..37f84223b83f 100644
> --- a/rust/kernel/lib.rs
> +++ b/rust/kernel/lib.rs
> @@ -49,6 +49,7 @@
> pub mod task;
> pub mod time;
> pub mod types;
> +pub mod uaccess;
> pub mod workqueue;
>
> #[doc(hidden)]
> diff --git a/rust/kernel/uaccess.rs b/rust/kernel/uaccess.rs
> new file mode 100644
> index 000000000000..c97029cdeba1
> --- /dev/null
> +++ b/rust/kernel/uaccess.rs
> @@ -0,0 +1,304 @@
> [...]
> +impl UserSlice {
> + /// Constructs a user slice from a raw pointer and a length in bytes.
> + ///
> + /// Constructing a [`UserSlice`] performs no checks on the provided address and length, it can
> + /// safely be constructed inside a kernel thread with no current userspace process. Reads and
> + /// writes wrap the kernel APIs `copy_from_user` and `copy_to_user`, which check the memory map
> + /// of the current process and enforce that the address range is within the user range (no
> + /// additional calls to `access_ok` are needed).
I would just add a note that the pointer should be a valid userspace
pointer, but that gets checked at read/write time.
> + /// Callers must be careful to avoid time-of-check-time-of-use (TOCTOU) issues. The simplest way
> + /// is to create a single instance of [`UserSlice`] per user memory block as it reads each byte
> + /// at most once.
> + pub fn new(ptr: *mut c_void, length: usize) -> Self {
> + UserSlice { ptr, length }
> + }
> +impl UserSliceReader {
> [...]
> + /// Reads raw data from the user slice into a kernel buffer.
> + ///
> + /// After a successful call to this method, all bytes in `out` are initialized.
If this is guaranteed, could it return `Result<&mut [u8]>`? So the
caller doesn't need to unsafely `assume_init` anything.
> + /// Fails with `EFAULT` if the read happens on a bad address.
This should also mention that the slice cannot be bigger than the
reader's length.
> + pub fn read_raw(&mut self, out: &mut [MaybeUninit<u8>]) -> Result {
> + let len = out.len();
> + let out_ptr = out.as_mut_ptr().cast::<c_void>();
> + if len > self.length {
> + return Err(EFAULT);
> + }
> + let Ok(len_ulong) = c_ulong::try_from(len) else {
> + return Err(EFAULT);
> + };
> + // SAFETY: `out_ptr` points into a mutable slice of length `len_ulong`, so we may write
> + // that many bytes to it.
> + let res = unsafe { bindings::copy_from_user(out_ptr, self.ptr, len_ulong) };
> + if res != 0 {
> + return Err(EFAULT);
> + }
> + // Userspace pointers are not directly dereferenceable by the kernel, so we cannot use `add`,
> + // which has C-style rules for defined behavior.
> + self.ptr = self.ptr.wrapping_byte_add(len);
> + self.length -= len;
> + Ok(())
> + }
> +
> + /// Reads raw data from the user slice into a kernel buffer.
> + ///
> + /// Fails with `EFAULT` if the read happens on a bad address.
> + pub fn read_slice(&mut self, out: &mut [u8]) -> Result {
> + // SAFETY: The types are compatible and `read_raw` doesn't write uninitialized bytes to
> + // `out`.
> + let out = unsafe { &mut *(out as *mut [u8] as *mut [MaybeUninit<u8>]) };
> + self.read_raw(out)
> + }
If this is just a safe version of read_raw, could you crosslink the docs?
> +impl UserSliceWriter {
> +
> + /// Writes raw data to this user pointer from a kernel buffer.
> + ///
> + /// Fails with `EFAULT` if the write happens on a bad address.
> + pub fn write_slice(&mut self, data: &[u8]) -> Result {
> [...]
> + }
Could use a note about length like `read_raw`.
On Mon, Apr 15, 2024 at 3:15 AM Alice Ryhl <[email protected]> wrote:
>
> Adds a new struct called `Page` that wraps a pointer to `struct page`.
> This struct is assumed to hold ownership over the page, so that Rust
> code can allocate and manage pages directly.
>
> The page type has various methods for reading and writing into the page.
> These methods will temporarily map the page to allow the operation. All
> of these methods use a helper that takes an offset and length, performs
> bounds checks, and returns a pointer to the given offset in the page.
>
> This patch only adds support for pages of order zero, as that is all
> Rust Binder needs. However, it is written to make it easy to add support
> for higher-order pages in the future. To do that, you would add a const
> generic parameter to `Page` that specifies the order. Most of the
> methods do not need to be adjusted, as the logic for dealing with
> mapping multiple pages at once can be isolated to just the
> `with_pointer_into_page` method.
>
> Rust Binder needs to manage pages directly as that is how transactions
> are delivered: Each process has an mmap'd region for incoming
> transactions. When an incoming transaction arrives, the Binder driver
> will choose a region in the mmap, allocate and map the relevant pages
> manually, and copy the incoming transaction directly into the page. This
> architecture allows the driver to copy transactions directly from the
> address space of one process to another, without an intermediate copy
> to a kernel buffer.
>
> This code is based on Wedson's page abstractions from the old rust
> branch, but it has been modified by Alice by removing the incomplete
> support for higher-order pages, by introducing the `with_*` helpers
> to consolidate the bounds checking logic into a single place, and by
> introducing gfp flags.
>
> Co-developed-by: Wedson Almeida Filho <[email protected]>
> Signed-off-by: Wedson Almeida Filho <[email protected]>
> Signed-off-by: Alice Ryhl <[email protected]>
I have a couple questions about naming, and think an example would be
good for the functions that are trickier to use correctly. But I
wouldn't block on this, implementation looks good to me.
Reviewed-by: Trevor Gross <[email protected]>
> +++ b/rust/kernel/page.rs
> @@ -0,0 +1,240 @@
> +// SPDX-License-Identifier: GPL-2.0
> +
> +//! Kernel page allocation and management.
> +
> +use crate::{bindings, error::code::*, error::Result, uaccess::UserSliceReader};
> +use core::{
> + alloc::AllocError,
> + ptr::{self, NonNull},
> +};
> +
> +/// A bitwise shift for the page size.
> +pub const PAGE_SHIFT: usize = bindings::PAGE_SHIFT as usize;
> +
> +/// The number of bytes in a page.
> +pub const PAGE_SIZE: usize = bindings::PAGE_SIZE;
> +
> +/// A bitmask that gives the page containing a given address.
> +pub const PAGE_MASK: usize = !(PAGE_SIZE - 1);
> +
> +/// Flags for the "get free page" function that underlies all memory allocations.
> +pub mod flags {
> + /// gfp flags.
Uppercase acronym, maybe with a description:
GFP (Get Free Page) flags.
> + #[allow(non_camel_case_types)]
> + pub type gfp_t = bindings::gfp_t;
Why not GfpFlags, do we do this elsewhere?
> + /// `GFP_KERNEL` is typical for kernel-internal allocations. The caller requires `ZONE_NORMAL`
> + /// or a lower zone for direct access but can direct reclaim.
> + pub const GFP_KERNEL: gfp_t = bindings::GFP_KERNEL;
> + /// `GFP_ZERO` returns a zeroed page on success.
> + pub const __GFP_ZERO: gfp_t = bindings::__GFP_ZERO;
> + /// `GFP_HIGHMEM` indicates that the allocated memory may be located in high memory.
> + pub const __GFP_HIGHMEM: gfp_t = bindings::__GFP_HIGHMEM;
It feels a bit weird to have dunder constants on the rust side that
aren't also `#[doc(hidden)]` or just nonpublic. Makes me think they
are an implementation detail or not really meant to be used - could
you update the docs if this is the case?
> +
> +impl Page {
> + /// Allocates a new page.
Could you add a small example here?
> + pub fn alloc_page(gfp_flags: flags::gfp_t) -> Result<Self, AllocError> {
> [...]
> + }
> +
> + /// Returns a raw pointer to the page.
Could you add a note about how the pointer needs to be used correctly,
if it is for anything more than interfacing with kernel APIs?
> + pub fn as_ptr(&self) -> *mut bindings::page {
> + self.page.as_ptr()
> + }
> +
> + /// Runs a piece of code with this page mapped to an address.
> + ///
> + /// The page is unmapped when this call returns.
> + ///
> + /// # Using the raw pointer
> + ///
> + /// It is up to the caller to use the provided raw pointer correctly. The pointer is valid for
> + /// `PAGE_SIZE` bytes and for the duration in which the closure is called. The pointer might
> + /// only be mapped on the current thread, and when that is the case, dereferencing it on other
> + /// threads is UB. Other than that, the usual rules for dereferencing a raw pointer apply: don't
> + /// cause data races, the memory may be uninitialized, and so on.
> + ///
> + /// If multiple threads map the same page at the same time, then they may reference with
> + /// different addresses. However, even if the addresses are different, the underlying memory is
> + /// still the same for these purposes (e.g., it's still a data race if they both write to the
> + /// same underlying byte at the same time).
> + fn with_page_mapped<T>(&self, f: impl FnOnce(*mut u8) -> T) -> T {
> [...]
> + }
Could you add an example of how to use this correctly?
> + /// Runs a piece of code with a raw pointer to a slice of this page, with bounds checking.
> + ///
> + /// If `f` is called, then it will be called with a pointer that points at `off` bytes into the
> + /// page, and the pointer will be valid for at least `len` bytes. The pointer is only valid on
> + /// this task, as this method uses a local mapping.
> + ///
> + /// If `off` and `len` refers to a region outside of this page, then this method returns
> + /// `EINVAL` and does not call `f`.
> + ///
> + /// # Using the raw pointer
> + ///
> + /// It is up to the caller to use the provided raw pointer correctly. The pointer is valid for
> + /// `len` bytes and for the duration in which the closure is called. The pointer might only be
> + /// mapped on the current thread, and when that is the case, dereferencing it on other threads
> + /// is UB. Other than that, the usual rules for dereferencing a raw pointer apply: don't cause
> + /// data races, the memory may be uninitialized, and so on.
> + ///
> + /// If multiple threads map the same page at the same time, then they may reference with
> + /// different addresses. However, even if the addresses are different, the underlying memory is
> + /// still the same for these purposes (e.g., it's still a data race if they both write to the
> + /// same underlying byte at the same time).
This could probably also use an example. A note about how to select
between with_pointer_into_page and with_page_mapped would also be nice
to guide usage, e.g. "prefer with_pointer_into_page for all cases
except when..."
> + fn with_pointer_into_page<T>(
> + &self,
> + off: usize,
> + len: usize,
> + f: impl FnOnce(*mut u8) -> Result<T>,
> + ) -> Result<T> {
> [...]
> + /// Maps the page and zeroes the given slice.
> + ///
> + /// This method will perform bounds checks on the page offset. If `offset .. offset+len` goes
> + /// outside of the page, then this call returns `EINVAL`.
> + ///
> + /// # Safety
> + ///
> + /// Callers must ensure that this call does not race with a read or write to the same page that
> + /// overlaps with this write.
> + pub unsafe fn fill_zero(&self, offset: usize, len: usize) -> Result {
> + self.with_pointer_into_page(offset, len, move |dst| {
> + // SAFETY: If `with_pointer_into_page` calls into this closure, then it has performed a
> + // bounds check and guarantees that `dst` is valid for `len` bytes.
> + //
> + // The caller guarantees that there is no data race.
> + unsafe { ptr::write_bytes(dst, 0u8, len) };
> + Ok(())
> + })
> + }
Could this be named `fill_zero_raw` to leave room for a safe
`fill_zero(&mut self, ...)`?
> + /// Copies data from userspace into this page.
> + ///
> + /// This method will perform bounds checks on the page offset. If `offset .. offset+len` goes
> + /// outside of the page, then this call returns `EINVAL`.
> + ///
> + /// Like the other `UserSliceReader` methods, data races are allowed on the userspace address.
> + /// However, they are not allowed on the page you are copying into.
> + ///
> + /// # Safety
> + ///
> + /// Callers must ensure that this call does not race with a read or write to the same page that
> + /// overlaps with this write.
> + pub unsafe fn copy_from_user_slice(
> + &self,
> + reader: &mut UserSliceReader,
> + offset: usize,
> + len: usize,
> + ) -> Result {
> + self.with_pointer_into_page(offset, len, move |dst| {
> + // SAFETY: If `with_pointer_into_page` calls into this closure, then it has performed a
> + // bounds check and guarantees that `dst` is valid for `len` bytes. Furthermore, we have
> + // exclusive access to the slice since the caller guarantees that there are no races.
> + reader.read_raw(unsafe { core::slice::from_raw_parts_mut(dst.cast(), len) })
> + })
> + }
> +}
Same as above, `copy_from_user_slice_raw` would leave room for a safe API.
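Since several of the comments above ask for usage examples, here is a minimal sketch built only from the methods quoted in this review (`alloc_page`, `copy_from_user_slice`, `fill_zero`); the `kernel::page` module path and the `flags::GFP_KERNEL` constant are taken from the quoted patch and are expected to change once the separate gfp series lands.

```rust
use kernel::error::code::*;
use kernel::error::Result;
use kernel::page::{flags, Page, PAGE_SIZE};
use kernel::uaccess::UserSliceReader;

// Allocate a page, copy `len` bytes from userspace into its start, and zero
// the remainder of the page.
fn page_from_user(reader: &mut UserSliceReader, len: usize) -> Result<Page> {
    if len > PAGE_SIZE {
        return Err(EINVAL);
    }
    let page = Page::alloc_page(flags::GFP_KERNEL).map_err(|_| ENOMEM)?;
    // SAFETY: the page was just allocated and has not been shared with any
    // other thread, so these writes cannot race with other accesses to it.
    unsafe {
        page.copy_from_user_slice(reader, 0, len)?;
        if len < PAGE_SIZE {
            page.fill_zero(len, PAGE_SIZE - len)?;
        }
    }
    Ok(page)
}
```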
On Mon, Apr 15, 2024 at 3:15 AM Alice Ryhl <[email protected]> wrote:
>
> Add safe methods for reading and writing Rust values to and from
> userspace pointers.
>
> The C methods for copying to/from userspace use a function called
> `check_object_size` to verify that the kernel pointer is not dangling.
> However, this check is skipped when the length is a compile-time
> constant, with the assumption that such cases trivially have a correct
> kernel pointer.
>
> In this patch, we apply the same optimization to the typed accessors.
> For both methods, the size of the operation is known at compile time to
> be size_of of the type being read or written. Since the C side doesn't
> provide a variant that skips only this check, we create custom helpers
> for this purpose.
>
> The majority of reads and writes to userspace pointers in the Rust
> Binder driver uses these accessor methods. Benchmarking has found that
> skipping the `check_object_size` check makes a big difference for the
> cases being skipped here. (And that the check doesn't make a difference
> for the cases that use the raw read/write methods.)
>
> This code is based on something that was originally written by Wedson on
> the old rust branch. It was modified by Alice to skip the
> `check_object_size` check, and to update various comments, including the
> notes about kernel pointers in `WritableToBytes`.
>
> Co-developed-by: Wedson Almeida Filho <[email protected]>
> Signed-off-by: Wedson Almeida Filho <[email protected]>
> Reviewed-by: Benno Lossin <[email protected]>
> Reviewed-by: Boqun Feng <[email protected]>
> Signed-off-by: Alice Ryhl <[email protected]>
Couple of docs nits but this looks good to me.
Reviewed-by: Trevor Gross <[email protected]>
> +/// Types for which any bit pattern is valid.
> +///
> +/// Not all types are valid for all values. For example, a `bool` must be either zero or one, so
> +/// reading arbitrary bytes into something that contains a `bool` is not okay.
> +///
> +/// It's okay for the type to have padding, as initializing those bytes has no effect.
> +///
> +/// # Safety
> +///
> +/// All bit-patterns must be valid for this type.
> +pub unsafe trait FromBytes {}
No `UnsafeCell` is also a requirement in zerocopy/bytemuck
> +/// Types that can be viewed as an immutable slice of initialized bytes.
> +///
> +/// If a struct implements this trait, then it is okay to copy it byte-for-byte to userspace. This
> +/// means that it should not have any padding, as padding bytes are uninitialized. Reading
> +/// uninitialized memory is not just undefined behavior, it may even lead to leaking sensitive
> +/// information on the stack to userspace.
> +///
> +/// The struct should also not hold kernel pointers, as kernel pointer addresses are also considered
> +/// sensitive. However, leaking kernel pointers is not considered undefined behavior by Rust, so
> +/// this is a correctness requirement, but not a safety requirement.
I don't think mentions of userspace are relevant here since the trait
is more general. Maybe a `# Interfacing with userspace` section if
there is enough relevant information.
> +/// # Safety
> +///
> +/// Values of this type may not contain any uninitialized bytes.
No UnsafeCell
> +pub unsafe trait AsBytes {}
> diff --git a/rust/kernel/uaccess.rs b/rust/kernel/uaccess.rs
> index c97029cdeba1..e3953eec61a3 100644
> --- a/rust/kernel/uaccess.rs
> +++ b/rust/kernel/uaccess.rs
> @@ -4,10 +4,15 @@
> //!
> //! C header: [`include/linux/uaccess.h`](srctree/include/linux/uaccess.h)
>
> -use crate::{bindings, error::code::*, error::Result};
> +use crate::{
> + bindings,
> + error::code::*,
> + error::Result,
> + types::{AsBytes, FromBytes},
> +};
> use alloc::vec::Vec;
> use core::ffi::{c_ulong, c_void};
> -use core::mem::MaybeUninit;
> +use core::mem::{size_of, MaybeUninit};
>
> /// A pointer to an area in userspace memory, which can be either read-only or read-write.
> ///
> @@ -238,6 +243,38 @@ pub fn read_slice(&mut self, out: &mut [u8]) -> Result {
> self.read_raw(out)
> }
>
> + /// Reads a value of the specified type.
> + ///
> + /// Fails with `EFAULT` if the read encounters a page fault.
> + pub fn read<T: FromBytes>(&mut self) -> Result<T> {
> [...]
> + /// Writes the provided Rust value to this userspace pointer.
> + ///
> + /// Fails with `EFAULT` if the write encounters a page fault.
> + pub fn write<T: AsBytes>(&mut self, value: &T) -> Result {
Read & write could use an example if you are up for it
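As a sketch of the typed accessors under review: a hypothetical ioctl argument struct (`BumpArg` is illustrative only, not part of the patch) opts into both traits and is then read and written with the new methods; because `size_of::<BumpArg>()` is a compile-time constant, both copies skip the `check_object_size` check as described in the commit message.

```rust
use kernel::error::Result;
use kernel::types::{AsBytes, FromBytes};
use kernel::uaccess::UserSlice;

/// Hypothetical ioctl argument, for illustration only.
#[repr(C)]
struct BumpArg {
    value: u64,
    delta: u64,
}

// SAFETY: `BumpArg` contains only integers (no `bool`, no `UnsafeCell`), so
// every bit pattern is a valid value.
unsafe impl FromBytes for BumpArg {}
// SAFETY: `BumpArg` has no padding and no uninitialized bytes.
unsafe impl AsBytes for BumpArg {}

fn bump(user_slice: UserSlice) -> Result {
    let (mut reader, mut writer) = user_slice.reader_writer();
    // Typed read of exactly `size_of::<BumpArg>()` bytes.
    let mut arg = reader.read::<BumpArg>()?;
    arg.value = arg.value.wrapping_add(arg.delta);
    // Typed write of the updated value back to the start of the user range.
    writer.write(&arg)
}
```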
Boqun Feng <[email protected]> writes:
> On Mon, Apr 15, 2024 at 07:13:53AM +0000, Alice Ryhl wrote:
>> From: Wedson Almeida Filho <[email protected]>
>>
>> A pointer to an area in userspace memory, which can be either read-only
>> or read-write.
>>
>> All methods on this struct are safe: attempting to read or write on bad
>> addresses (either out of the bound of the slice or unmapped addresses)
>> will return `EFAULT`. Concurrent access, *including data races to/from
>> userspace memory*, is permitted, because fundamentally another userspace
>> thread/process could always be modifying memory at the same time (in the
>> same way that userspace Rust's `std::io` permits data races with the
>> contents of files on disk). In the presence of a race, the exact byte
>> values read/written are unspecified but the operation is well-defined.
>> Kernelspace code should validate its copy of data after completing a
>> read, and not expect that multiple reads of the same address will return
>> the same value.
>>
>> These APIs are designed to make it difficult to accidentally write
>> TOCTOU bugs. Every time you read from a memory location, the pointer is
>> advanced by the length so that you cannot use that reader to read the
>> same memory location twice. Preventing double-fetches avoids TOCTOU
>> bugs. This is accomplished by taking `self` by value to prevent
>> obtaining multiple readers on a given `UserSlicePtr`, and the readers
>> only permitting forward reads. If double-fetching a memory location is
>> necessary for some reason, then that is done by creating multiple
>> readers to the same memory location.
>>
>> Constructing a `UserSlicePtr` performs no checks on the provided
>> address and length, it can safely be constructed inside a kernel thread
>> with no current userspace process. Reads and writes wrap the kernel APIs
>> `copy_from_user` and `copy_to_user`, which check the memory map of the
>> current process and enforce that the address range is within the user
>> range (no additional calls to `access_ok` are needed).
>>
>> This code is based on something that was originally written by Wedson on
>> the old rust branch. It was modified by Alice by removing the
>> `IoBufferReader` and `IoBufferWriter` traits, and various other changes.
>>
>> Signed-off-by: Wedson Almeida Filho <[email protected]>
>> Co-developed-by: Alice Ryhl <[email protected]>
>> Signed-off-by: Alice Ryhl <[email protected]>
>
> Thanks!
>
> Reviewed-by: Boqun Feng <[email protected]>
Thanks for taking a look!
>> ---
>> rust/helpers.c | 14 +++
>> rust/kernel/lib.rs | 1 +
>> rust/kernel/uaccess.rs | 304 +++++++++++++++++++++++++++++++++++++++++++++++++
>> 3 files changed, 319 insertions(+)
>>
> [...]
>> + /// Reads raw data from the user slice into a kernel buffer.
>> + ///
>> + /// Fails with `EFAULT` if the read happens on a bad address.
>
> ... we probably want to mention that `out` may get modified even in
> failure cases.
Will do.
>> + pub fn read_slice(&mut self, out: &mut [u8]) -> Result {
>> + // SAFETY: The types are compatible and `read_raw` doesn't write uninitialized bytes to
>> + // `out`.
>> + let out = unsafe { &mut *(out as *mut [u8] as *mut [MaybeUninit<u8>]) };
>> + self.read_raw(out)
>> + }
>> +
> [...]
>> +
>> +impl UserSliceWriter {
> [...]
>> +
>> + /// Writes raw data to this user pointer from a kernel buffer.
>> + ///
>> + /// Fails with `EFAULT` if the write happens on a bad address.
>
> Same here, probably mention that: the userspace memory may be modified
> even in failure cases.
Will do.
> Anyway, they are not correctness critical, so we can do these in later
> patches.
It looks like I'll have to send another version anyway due to the
conflict with [1], so I can take care of it.
Alice
[1]: https://lore.kernel.org/rust-for-linux/[email protected]/
Trevor Gross <[email protected]> writes:
> On Mon, Apr 15, 2024 at 3:14 AM Alice Ryhl <[email protected]> wrote:
>>
>> From: Wedson Almeida Filho <[email protected]>
>>
>> A pointer to an area in userspace memory, which can be either read-only
>> or read-write.
>>
>> All methods on this struct are safe: attempting to read or write on bad
>> addresses (either out of the bound of the slice or unmapped addresses)
>> will return `EFAULT`. Concurrent access, *including data races to/from
>> userspace memory*, is permitted, because fundamentally another userspace
>> thread/process could always be modifying memory at the same time (in the
>> same way that userspace Rust's `std::io` permits data races with the
>> contents of files on disk). In the presence of a race, the exact byte
>> values read/written are unspecified but the operation is well-defined.
>> Kernelspace code should validate its copy of data after completing a
>> read, and not expect that multiple reads of the same address will return
>> the same value.
>>
>> These APIs are designed to make it difficult to accidentally write
>> TOCTOU bugs. Every time you read from a memory location, the pointer is
>> advanced by the length so that you cannot use that reader to read the
>> same memory location twice. Preventing double-fetches avoids TOCTOU
>> bugs. This is accomplished by taking `self` by value to prevent
>> obtaining multiple readers on a given `UserSlicePtr`, and the readers
>> only permitting forward reads. If double-fetching a memory location is
>> necessary for some reason, then that is done by creating multiple
>> readers to the same memory location.
>>
>> Constructing a `UserSlicePtr` performs no checks on the provided
>> address and length, it can safely be constructed inside a kernel thread
>> with no current userspace process. Reads and writes wrap the kernel APIs
>> `copy_from_user` and `copy_to_user`, which check the memory map of the
>> current process and enforce that the address range is within the user
>> range (no additional calls to `access_ok` are needed).
>>
>> This code is based on something that was originally written by Wedson on
>> the old rust branch. It was modified by Alice by removing the
>> `IoBufferReader` and `IoBufferWriter` traits, and various other changes.
>>
>> Signed-off-by: Wedson Almeida Filho <[email protected]>
>> Co-developed-by: Alice Ryhl <[email protected]>
>> Signed-off-by: Alice Ryhl <[email protected]>
>
> Reviewed-by: Trevor Gross <[email protected]>
>
> I left some suggestions for documentation improvements and one
> question, but mostly LGTM.
Thanks for taking a look!
>> +impl UserSlice {
>> + /// Constructs a user slice from a raw pointer and a length in bytes.
>> + ///
>> + /// Constructing a [`UserSlice`] performs no checks on the provided address and length, it can
>> + /// safely be constructed inside a kernel thread with no current userspace process. Reads and
>> + /// writes wrap the kernel APIs `copy_from_user` and `copy_to_user`, which check the memory map
>> + /// of the current process and enforce that the address range is within the user range (no
>> + /// additional calls to `access_ok` are needed).
>
> I would just add a note that the pointer should be a valid userspace
> pointer, but that gets checked at read/write time.
Will do.
>> + /// Callers must be careful to avoid time-of-check-time-of-use (TOCTOU) issues. The simplest way
>> + /// is to create a single instance of [`UserSlice`] per user memory block as it reads each byte
>> + /// at most once.
>> + pub fn new(ptr: *mut c_void, length: usize) -> Self {
>> + UserSlice { ptr, length }
>> + }
>
>> +impl UserSliceReader {
>> [...]
>> + /// Reads raw data from the user slice into a kernel buffer.
>> + ///
>> + /// After a successful call to this method, all bytes in `out` are initialized.
>
> If this is guaranteed, could it return `Result<&mut [u8]>`? So the
> caller doesn't need to unsafely `assume_init` anything.
It could, but I don't think it's that useful. All existing callers will
want to record it somewhere with something like `Vec::set_len`, which
this doesn't help with. There are ways to do something like that, but it
complicates the API further, which I am not interested in.
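For illustration, a minimal sketch of the caller pattern Alice describes; `append_from_user` is a hypothetical helper, not part of the patch, and the caller supplies the byte count explicitly.

```rust
use alloc::vec::Vec;
use kernel::error::code::ENOMEM;
use kernel::error::Result;
use kernel::uaccess::UserSliceReader;

// Read `len` bytes into the spare capacity of `buf`, then record them with
// `Vec::set_len`.
fn append_from_user(reader: &mut UserSliceReader, buf: &mut Vec<u8>, len: usize) -> Result {
    buf.try_reserve(len).map_err(|_| ENOMEM)?;
    // `spare_capacity_mut()` hands out `&mut [MaybeUninit<u8>]`, which is
    // exactly what `read_raw` takes.
    reader.read_raw(&mut buf.spare_capacity_mut()[..len])?;
    // SAFETY: on success, `read_raw` initialized the first `len` spare bytes.
    unsafe { buf.set_len(buf.len() + len) };
    Ok(())
}
```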
>> + /// Fails with `EFAULT` if the read happens on a bad address.
>
> This should also mention that the slice cannot be bigger than the
> reader's length.
I can add a note.
>> + pub fn read_raw(&mut self, out: &mut [MaybeUninit<u8>]) -> Result {
>> + let len = out.len();
>> + let out_ptr = out.as_mut_ptr().cast::<c_void>();
>> + if len > self.length {
>> + return Err(EFAULT);
>> + }
>> + let Ok(len_ulong) = c_ulong::try_from(len) else {
>> + return Err(EFAULT);
>> + };
>> + // SAFETY: `out_ptr` points into a mutable slice of length `len_ulong`, so we may write
>> + // that many bytes to it.
>> + let res = unsafe { bindings::copy_from_user(out_ptr, self.ptr, len_ulong) };
>> + if res != 0 {
>> + return Err(EFAULT);
>> + }
>> + // Userspace pointers are not directly dereferenceable by the kernel, so we cannot use `add`,
>> + // which has C-style rules for defined behavior.
>> + self.ptr = self.ptr.wrapping_byte_add(len);
>> + self.length -= len;
>> + Ok(())
>> + }
>> +
>> + /// Reads raw data from the user slice into a kernel buffer.
>> + ///
>> + /// Fails with `EFAULT` if the read happens on a bad address.
>> + pub fn read_slice(&mut self, out: &mut [u8]) -> Result {
>> + // SAFETY: The types are compatible and `read_raw` doesn't write uninitialized bytes to
>> + // `out`.
>> + let out = unsafe { &mut *(out as *mut [u8] as *mut [MaybeUninit<u8>]) };
>> + self.read_raw(out)
>> + }
>
> If this is just a safe version of read_raw, could you crosslink the docs?
Okay.
>> +impl UserSliceWriter {
>> +
>> + /// Writes raw data to this user pointer from a kernel buffer.
>> + ///
>> + /// Fails with `EFAULT` if the write happens on a bad address.
>> + pub fn write_slice(&mut self, data: &[u8]) -> Result {
>> [...]
>> + }
>
> Could use a note about length like `read_raw`.
Okay.
Alice
On Tue, Apr 16, 2024 at 1:05 AM Kees Cook <[email protected]> wrote:
>
> On Mon, Apr 15, 2024 at 07:13:54AM +0000, Alice Ryhl wrote:
> > From: Arnd Bergmann <[email protected]>
> >
> > Rust code needs to be able to access _copy_from_user and _copy_to_user
> > so that it can skip the check_copy_size check in cases where the length
> > is known at compile-time, mirroring the logic for when C code will skip
> > check_copy_size. To do this, we ensure that exported versions of these
> > methods are available when CONFIG_RUST is enabled.
> >
> > Alice has verified that this patch passes the CONFIG_TEST_USER_COPY test
> > on x86 using the Android cuttlefish emulator.
> >
> > Signed-off-by: Arnd Bergmann <[email protected]>
>
> Thanks for the updates and the comment on testing. :)
>
> Reviewed-by: Kees Cook <[email protected]>
Thanks for taking a look :)
Alice
On Tue, Apr 16, 2024 at 5:53 AM Alice Ryhl <[email protected]> wrote:
>
> >> +/// Flags for the "get free page" function that underlies all memory allocations.
> >> +pub mod flags {
> >> + /// gfp flags.
> >
> > Uppercase acronym, maybe with a description:
> >
> > GFP (Get Free Page) flags.
> >
> >> + #[allow(non_camel_case_types)]
> >> + pub type gfp_t = bindings::gfp_t;
> >
> > Why not GfpFlags, do we do this elsewhere?
> >
> >> + /// `GFP_KERNEL` is typical for kernel-internal allocations. The caller requires `ZONE_NORMAL`
> >> + /// or a lower zone for direct access but can direct reclaim.
> >> + pub const GFP_KERNEL: gfp_t = bindings::GFP_KERNEL;
> >> + /// `GFP_ZERO` returns a zeroed page on success.
> >> + pub const __GFP_ZERO: gfp_t = bindings::__GFP_ZERO;
> >> + /// `GFP_HIGHMEM` indicates that the allocated memory may be located in high memory.
> >> + pub const __GFP_HIGHMEM: gfp_t = bindings::__GFP_HIGHMEM;
> >
> > It feels a bit weird to have dunder constants on the rust side that
> > aren't also `#[doc(hidden)]` or just nonpublic. Makes me think they
> > are an implementation detail or not really meant to be used - could
> > you update the docs if this is the case?
>
> All of this is going away in the next version because it will be based
> on [1], which defines the gfp flags type for us.
>
> [1]: https://lore.kernel.org/rust-for-linux/[email protected]/
Great, thanks for the link.
> > Could you add an example of how to use this correctly?
>
> This is a private function, you're not supposed to use it directly.
> Anyone who is modifying this file directly can look at the existing
> users for examples.
Ah you're right, missed this bit.
Thanks for the followup.
Trevor
On Wed, Apr 17, 2024 at 4:28 PM Gary Guo <[email protected]> wrote:
>
> On Mon, 15 Apr 2024 07:13:53 +0000
> Alice Ryhl <[email protected]> wrote:
>
> > From: Wedson Almeida Filho <[email protected]>
> >
> > A pointer to an area in userspace memory, which can be either read-only
> > or read-write.
> >
> > All methods on this struct are safe: attempting to read or write on bad
> > addresses (either out of the bound of the slice or unmapped addresses)
> > will return `EFAULT`. Concurrent access, *including data races to/from
> > userspace memory*, is permitted, because fundamentally another userspace
> > thread/process could always be modifying memory at the same time (in the
> > same way that userspace Rust's `std::io` permits data races with the
> > contents of files on disk). In the presence of a race, the exact byte
> > values read/written are unspecified but the operation is well-defined.
> > Kernelspace code should validate its copy of data after completing a
> > read, and not expect that multiple reads of the same address will return
> > the same value.
> >
> > These APIs are designed to make it difficult to accidentally write
> > TOCTOU bugs. Every time you read from a memory location, the pointer is
> > advanced by the length so that you cannot use that reader to read the
> > same memory location twice. Preventing double-fetches avoids TOCTOU
> > bugs. This is accomplished by taking `self` by value to prevent
> > obtaining multiple readers on a given `UserSlicePtr`, and the readers
> > only permitting forward reads. If double-fetching a memory location is
> > necessary for some reason, then that is done by creating multiple
> > readers to the same memory location.
> >
> > Constructing a `UserSlicePtr` performs no checks on the provided
> > address and length, it can safely be constructed inside a kernel thread
> > with no current userspace process. Reads and writes wrap the kernel APIs
> > `copy_from_user` and `copy_to_user`, which check the memory map of the
> > current process and enforce that the address range is within the user
> > range (no additional calls to `access_ok` are needed).
> >
> > This code is based on something that was originally written by Wedson on
> > the old rust branch. It was modified by Alice by removing the
> > `IoBufferReader` and `IoBufferWriter` traits, and various other changes.
> >
> > Signed-off-by: Wedson Almeida Filho <[email protected]>
> > Co-developed-by: Alice Ryhl <[email protected]>
> > Signed-off-by: Alice Ryhl <[email protected]>
> > ---
> > rust/helpers.c | 14 +++
> > rust/kernel/lib.rs | 1 +
> > rust/kernel/uaccess.rs | 304 +++++++++++++++++++++++++++++++++++++++++++++++++
> > 3 files changed, 319 insertions(+)
> >
> > diff --git a/rust/kernel/uaccess.rs b/rust/kernel/uaccess.rs
>
> > +/// [`std::io`]: https://doc.rust-lang.org/std/io/index.html
> > +/// [`clone_reader`]: UserSliceReader::clone_reader
> > +pub struct UserSlice {
> > + ptr: *mut c_void,
> > + length: usize,
> > +}
>
> How useful is the `c_void` in the struct and new signature? They tend
> to not be very useful in Rust. Given that provenance doesn't matter
> for userspace pointers, could this be `usize` simply?
>
> I think `*mut u8` or `*mut ()` makes more sense than `*mut c_void` for
> Rust code even if we don't want to use `usize`.
I don't have a strong opinion here. I suppose a usize could make
sense. But I also think c_void is fine, and I lean towards not
changing it. :)
> Some thinking aloud and brainstorming bits about the API.
>
> I wonder if it make sense to have `User<[u8]>` instead of `UserSlice`?
> The `User` type can be defined like this:
>
> ```rust
> struct User<T: ?Sized> {
> ptr: *mut T,
> }
> ```
>
> and this allows arbitrary T as long as it's POD. So we could have
> `User<[u8]>`, `User<u32>`, `User<PodStruct>`. I imagine the
> `User<[u8]>` would be the general usage and the latter ones can be
> especially helpful if you are trying to implement ioctl and need to
> copy fixed size data structs from userspace.
Hmm, we have to be careful here. Generally, when you get a user slice
via an ioctl, you should make sure to use the length you get from
userspace. In binder, it looks like this:
let user_slice = UserSlice::new(arg, _IOC_SIZE(cmd));
so whichever API we use, we must make sure to get the length as an
argument in bytes. What should we do if the length is not a multiple
of size_of(T)?
Another issue is that there's no stable way to get the length from a
`*mut [T]` without creating a reference, which is not okay for a user
slice.
Alice
On Mon, 15 Apr 2024 07:13:53 +0000
Alice Ryhl <[email protected]> wrote:
> From: Wedson Almeida Filho <[email protected]>
>
> A pointer to an area in userspace memory, which can be either read-only
> or read-write.
>
> All methods on this struct are safe: attempting to read or write on bad
> addresses (either out of the bound of the slice or unmapped addresses)
> will return `EFAULT`. Concurrent access, *including data races to/from
> userspace memory*, is permitted, because fundamentally another userspace
> thread/process could always be modifying memory at the same time (in the
> same way that userspace Rust's `std::io` permits data races with the
> contents of files on disk). In the presence of a race, the exact byte
> values read/written are unspecified but the operation is well-defined.
> Kernelspace code should validate its copy of data after completing a
> read, and not expect that multiple reads of the same address will return
> the same value.
>
> These APIs are designed to make it difficult to accidentally write
> TOCTOU bugs. Every time you read from a memory location, the pointer is
> advanced by the length so that you cannot use that reader to read the
> same memory location twice. Preventing double-fetches avoids TOCTOU
> bugs. This is accomplished by taking `self` by value to prevent
> obtaining multiple readers on a given `UserSlicePtr`, and the readers
> only permitting forward reads. If double-fetching a memory location is
> necessary for some reason, then that is done by creating multiple
> readers to the same memory location.
>
> Constructing a `UserSlicePtr` performs no checks on the provided
> address and length, it can safely be constructed inside a kernel thread
> with no current userspace process. Reads and writes wrap the kernel APIs
> `copy_from_user` and `copy_to_user`, which check the memory map of the
> current process and enforce that the address range is within the user
> range (no additional calls to `access_ok` are needed).
>
> This code is based on something that was originally written by Wedson on
> the old rust branch. It was modified by Alice by removing the
> `IoBufferReader` and `IoBufferWriter` traits, and various other changes.
>
> Signed-off-by: Wedson Almeida Filho <[email protected]>
> Co-developed-by: Alice Ryhl <[email protected]>
> Signed-off-by: Alice Ryhl <[email protected]>
> ---
> rust/helpers.c | 14 +++
> rust/kernel/lib.rs | 1 +
> rust/kernel/uaccess.rs | 304 +++++++++++++++++++++++++++++++++++++++++++++++++
> 3 files changed, 319 insertions(+)
>
> diff --git a/rust/kernel/uaccess.rs b/rust/kernel/uaccess.rs
> +/// [`std::io`]: https://doc.rust-lang.org/std/io/index.html
> +/// [`clone_reader`]: UserSliceReader::clone_reader
> +pub struct UserSlice {
> + ptr: *mut c_void,
> + length: usize,
> +}
How useful is the `c_void` in the struct and new signature? They tend
to not be very useful in Rust. Given that provenance doesn't matter
for userspace pointers, could this be `usize` simply?
I think `*mut u8` or `*mut ()` makes more sense than `*mut c_void` for
Rust code even if we don't want to use `usize`.
---
Some thinking aloud and brainstorming bits about the API.
I wonder if it makes sense to have `User<[u8]>` instead of `UserSlice`?
The `User` type can be defined like this:
```rust
struct User<T: ?Sized> {
ptr: *mut T,
}
```
and this allows arbitrary T as long as it's POD. So we could have
`User<[u8]>`, `User<u32>`, `User<PodStruct>`. I imagine the
`User<[u8]>` would be the general usage and the latter ones can be
especially helpful if you are trying to implement ioctl and need to
copy fixed size data structs from userspace.
Best,
Gary
On 17.04.24 16:40, Alice Ryhl wrote:
> On Wed, Apr 17, 2024 at 4:28 PM Gary Guo <[email protected]> wrote:
>>
>> On Mon, 15 Apr 2024 07:13:53 +0000
>> Alice Ryhl <[email protected]> wrote:
>>
>>> From: Wedson Almeida Filho <[email protected]>
>>>
>>> A pointer to an area in userspace memory, which can be either read-only
>>> or read-write.
>>>
>>> All methods on this struct are safe: attempting to read or write on bad
>>> addresses (either out of the bound of the slice or unmapped addresses)
>>> will return `EFAULT`. Concurrent access, *including data races to/from
>>> userspace memory*, is permitted, because fundamentally another userspace
>>> thread/process could always be modifying memory at the same time (in the
>>> same way that userspace Rust's `std::io` permits data races with the
>>> contents of files on disk). In the presence of a race, the exact byte
>>> values read/written are unspecified but the operation is well-defined.
>>> Kernelspace code should validate its copy of data after completing a
>>> read, and not expect that multiple reads of the same address will return
>>> the same value.
>>>
>>> These APIs are designed to make it difficult to accidentally write
>>> TOCTOU bugs. Every time you read from a memory location, the pointer is
>>> advanced by the length so that you cannot use that reader to read the
>>> same memory location twice. Preventing double-fetches avoids TOCTOU
>>> bugs. This is accomplished by taking `self` by value to prevent
>>> obtaining multiple readers on a given `UserSlicePtr`, and the readers
>>> only permitting forward reads. If double-fetching a memory location is
>>> necessary for some reason, then that is done by creating multiple
>>> readers to the same memory location.
>>>
>>> Constructing a `UserSlicePtr` performs no checks on the provided
>>> address and length, it can safely be constructed inside a kernel thread
>>> with no current userspace process. Reads and writes wrap the kernel APIs
>>> `copy_from_user` and `copy_to_user`, which check the memory map of the
>>> current process and enforce that the address range is within the user
>>> range (no additional calls to `access_ok` are needed).
>>>
>>> This code is based on something that was originally written by Wedson on
>>> the old rust branch. It was modified by Alice by removing the
>>> `IoBufferReader` and `IoBufferWriter` traits, and various other changes.
>>>
>>> Signed-off-by: Wedson Almeida Filho <[email protected]>
>>> Co-developed-by: Alice Ryhl <[email protected]>
>>> Signed-off-by: Alice Ryhl <[email protected]>
>>> ---
>>> rust/helpers.c | 14 +++
>>> rust/kernel/lib.rs | 1 +
>>> rust/kernel/uaccess.rs | 304 +++++++++++++++++++++++++++++++++++++++++++++++++
>>> 3 files changed, 319 insertions(+)
>>>
>>> diff --git a/rust/kernel/uaccess.rs b/rust/kernel/uaccess.rs
>>
>>> +/// [`std::io`]: https://doc.rust-lang.org/std/io/index.html
>>> +/// [`clone_reader`]: UserSliceReader::clone_reader
>>> +pub struct UserSlice {
>>> + ptr: *mut c_void,
>>> + length: usize,
>>> +}
>>
>> How useful is the `c_void` in the struct and new signature? They tend
>> to not be very useful in Rust. Given that provenance doesn't matter
>> for userspace pointers, could this be `usize` simply?
>>
>> I think `*mut u8` or `*mut ()` makes more sense than `*mut c_void` for
>> Rust code even if we don't want to use `usize`.
>
> I don't have a strong opinion here. I suppose a usize could make
> sense. But I also think c_void is fine, and I lean towards not
> changing it. :)
>
>> Some thinking aloud and brainstorming bits about the API.
>>
>> I wonder if it make sense to have `User<[u8]>` instead of `UserSlice`?
>> The `User` type can be defined like this:
>>
>> ```rust
>> struct User<T: ?Sized> {
>> ptr: *mut T,
>> }
>> ```
>>
>> and this allows arbitrary T as long as it's POD. So we could have
>> `User<[u8]>`, `User<u32>`, `User<PodStruct>`. I imagine the
>> `User<[u8]>` would be the general usage and the latter ones can be
>> especially helpful if you are trying to implement ioctl and need to
>> copy fixed size data structs from userspace.
>
> Hmm, we have to be careful here. Generally, when you get a user slice
> via an ioctl, you should make sure to use the length you get from
> userspace. In binder, it looks like this:
>
> let user_slice = UserSlice::new(arg, _IOC_SIZE(cmd));
>
> so whichever API we use, we must make sure to get the length as an
> argument in bytes. What should we do if the length is not a multiple
> of size_of(T)?
We could print a warning and then just floor to the next multiple of
`size_of::<T>()`. I agree that is not perfect, but if one uses the
current API, one also needs to do the length check eventually.
> Another issue is that there's no stable way to get the length from a
> `*mut [T]` without creating a reference, which is not okay for a user
> slice.
Seems like `<* const [T]>::len` (feature `slice_ptr_len`) [1] was just
stabilized 5 days ago [2].
[1]: https://doc.rust-lang.org/std/primitive.pointer.html#method.len-1
[2]: https://github.com/rust-lang/rust/pull/123868
--
Cheers,
Benno
On Wed, Apr 17, 2024 at 5:27 PM Benno Lossin <[email protected]> wrote:
>
> On 17.04.24 16:40, Alice Ryhl wrote:
> > On Wed, Apr 17, 2024 at 4:28 PM Gary Guo <[email protected]> wrote:
> >>
> >> On Mon, 15 Apr 2024 07:13:53 +0000
> >> Alice Ryhl <[email protected]> wrote:
> >>
> >>> From: Wedson Almeida Filho <[email protected]>
> >>>
> >>> A pointer to an area in userspace memory, which can be either read-only
> >>> or read-write.
> >>>
> >>> All methods on this struct are safe: attempting to read or write on bad
> >>> addresses (either out of the bound of the slice or unmapped addresses)
> >>> will return `EFAULT`. Concurrent access, *including data races to/from
> >>> userspace memory*, is permitted, because fundamentally another userspace
> >>> thread/process could always be modifying memory at the same time (in the
> >>> same way that userspace Rust's `std::io` permits data races with the
> >>> contents of files on disk). In the presence of a race, the exact byte
> >>> values read/written are unspecified but the operation is well-defined.
> >>> Kernelspace code should validate its copy of data after completing a
> >>> read, and not expect that multiple reads of the same address will return
> >>> the same value.
> >>>
> >>> These APIs are designed to make it difficult to accidentally write
> >>> TOCTOU bugs. Every time you read from a memory location, the pointer is
> >>> advanced by the length so that you cannot use that reader to read the
> >>> same memory location twice. Preventing double-fetches avoids TOCTOU
> >>> bugs. This is accomplished by taking `self` by value to prevent
> >>> obtaining multiple readers on a given `UserSlicePtr`, and the readers
> >>> only permitting forward reads. If double-fetching a memory location is
> >>> necessary for some reason, then that is done by creating multiple
> >>> readers to the same memory location.
> >>>
> >>> Constructing a `UserSlicePtr` performs no checks on the provided
> >>> address and length, it can safely be constructed inside a kernel thread
> >>> with no current userspace process. Reads and writes wrap the kernel APIs
> >>> `copy_from_user` and `copy_to_user`, which check the memory map of the
> >>> current process and enforce that the address range is within the user
> >>> range (no additional calls to `access_ok` are needed).
> >>>
> >>> This code is based on something that was originally written by Wedson on
> >>> the old rust branch. It was modified by Alice by removing the
> >>> `IoBufferReader` and `IoBufferWriter` traits, and various other changes.
> >>>
> >>> Signed-off-by: Wedson Almeida Filho <[email protected]>
> >>> Co-developed-by: Alice Ryhl <[email protected]>
> >>> Signed-off-by: Alice Ryhl <[email protected]>
> >>> ---
> >>> rust/helpers.c | 14 +++
> >>> rust/kernel/lib.rs | 1 +
> >>> rust/kernel/uaccess.rs | 304 +++++++++++++++++++++++++++++++++++++++++++++++++
> >>> 3 files changed, 319 insertions(+)
> >>>
> >>> diff --git a/rust/kernel/uaccess.rs b/rust/kernel/uaccess.rs
> >>
> >>> +/// [`std::io`]: https://doc.rust-lang.org/std/io/index.html
> >>> +/// [`clone_reader`]: UserSliceReader::clone_reader
> >>> +pub struct UserSlice {
> >>> + ptr: *mut c_void,
> >>> + length: usize,
> >>> +}
> >>
> >> How useful is the `c_void` in the struct and new signature? They tend
> >> to not be very useful in Rust. Given that provenance doesn't matter
> >> for userspace pointers, could this be `usize` simply?
> >>
> >> I think `*mut u8` or `*mut ()` makes more sense than `*mut c_void` for
> >> Rust code even if we don't want to use `usize`.
> >
> > I don't have a strong opinion here. I suppose a usize could make
> > sense. But I also think c_void is fine, and I lean towards not
> > changing it. :)
> >
> >> Some thinking aloud and brainstorming bits about the API.
> >>
> >> I wonder if it make sense to have `User<[u8]>` instead of `UserSlice`?
> >> The `User` type can be defined like this:
> >>
> >> ```rust
> >> struct User<T: ?Sized> {
> >> ptr: *mut T,
> >> }
> >> ```
> >>
> >> and this allows arbitrary T as long as it's POD. So we could have
> >> `User<[u8]>`, `User<u32>`, `User<PodStruct>`. I imagine the
> >> `User<[u8]>` would be the general usage and the latter ones can be
> >> especially helpful if you are trying to implement ioctl and need to
> >> copy fixed size data structs from userspace.
> >
> > Hmm, we have to be careful here. Generally, when you get a user slice
> > via an ioctl, you should make sure to use the length you get from
> > userspace. In binder, it looks like this:
> >
> > let user_slice = UserSlice::new(arg, _IOC_SIZE(cmd));
> >
> > so whichever API we use, we must make sure to get the length as an
> > argument in bytes. What should we do if the length is not a multiple
> > of size_of(T)?
>
> We could print a warning and then just floor to the next multiple of
> `size_of::<T>()`. I agree that is not perfect, but if one uses the
> current API, one also needs to do the length check eventually.
Right now, the length check happens when you call `read::<T>` and get
EFAULT if the size of T is greater than the length of the user slice.
That works pretty well. And there are real-world use-cases for
userspace passing in a length longer than what the kernel expects -
often adding fields to the end of the struct is how the kernel makes
ioctls extensible. So I don't think printing a warning in that case
would be good.
In Binder, I also have use-cases where I alternate between reading
bytes and various different structs. Basically, I read two user slices
in lockstep, where the next value in one userslice determines whether
I should read some amount of bytes or a specific struct from the other
user slice. That's much easier with the current API than this
proposal.
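For illustration, a minimal sketch of the lockstep pattern described here, assuming (as in the full patch) that `FromBytes` is implemented for `u32`; `drain`, `handle_payload`, and the 64-byte scratch buffer are hypothetical.

```rust
use kernel::error::code::EINVAL;
use kernel::error::Result;
use kernel::uaccess::UserSliceReader;

// One reader supplies fixed-size values that decide how many raw bytes to
// pull from a second reader covering a different user range.
fn drain(
    mut sizes: UserSliceReader,
    mut data: UserSliceReader,
    n_records: usize,
    mut handle_payload: impl FnMut(&[u8]) -> Result,
) -> Result {
    let mut buf = [0u8; 64];
    for _ in 0..n_records {
        let len = sizes.read::<u32>()? as usize;
        if len > buf.len() {
            return Err(EINVAL);
        }
        // Fails with EFAULT if fewer than `len` bytes remain in `data`.
        data.read_slice(&mut buf[..len])?;
        handle_payload(&buf[..len])?;
    }
    Ok(())
}
```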
> > Another issue is that there's no stable way to get the length from a
> > `*mut [T]` without creating a reference, which is not okay for a user
> > slice.
>
> Seems like `<* const [T]>::len` (feature `slice_ptr_len`) [1] was just
> stabilized 5 days ago [2].
>
> [1]: https://doc.rust-lang.org/std/primitive.pointer.html#method.len-1
> [2]: https://github.com/rust-lang/rust/pull/123868
Okay.
Alice
Should you be implementing 'struct iov_iter' ?
Even if it means creating an IO_UBUF for ioctls?
(Although that might take some 'fettling' for read+write for ioctls.)
David
On Sun, Apr 21, 2024 at 8:08 PM David Laight <[email protected]> wrote:
>
> Should you be implementing 'struct iov_iter' ?
>
> Even if it means creating an IO_UBUF for ioctls?
> (Although that might take some 'fettling' for read+write for ioctls.)
That seems to be intended for when you have several chunks of memory
in userspace that you want to treat as one contiguous chunk. That's
not something I need in the Android Binder driver.
Alice
From: Alice Ryhl
> Sent: 21 April 2024 19:38
>
> On Sun, Apr 21, 2024 at 8:08 PM David Laight <[email protected]> wrote:
> >
> > Should you be implementing 'struct iov_iter' ?
> >
> > Even if it means creating an IO_UBUF for ioctls?
> > (Although that might take some 'fettling' for read+write for ioctls.)
>
> That seems to be intended for when you have several chunks of memory
> in userspace that you want to treat as one contiguous chunk. That's
> not something I need in the Android Binder driver.
It also transparently supports in-kernel users and some other cases.
I think there is a patch intended for 6.10 that removes the 'read'
and 'write' driver 'ops' and requires that drivers support 'read_iter'
and 'write_iter'.
David
On Sun, Apr 21, 2024 at 9:49 PM David Laight <[email protected]> wrote:
>
> From: Alice Ryhl
> > Sent: 21 April 2024 19:38
> >
> > On Sun, Apr 21, 2024 at 8:08 PM David Laight <[email protected]> wrote:
> > >
> > > Should you be implementing 'struct iov_iter' ?
> > >
> > > Even if it means creating an IO_UBUF for ioctls?
> > > (Although that might take some 'fettling' for read+write for ioctls.)
> >
> > That seems to be intended for when you have several chunks of memory
> > in userspace that you want to treat as one contiguous chunk. That's
> > not something I need in the Android Binder driver.
>
> It also transparently supports in-kernel users and some other cases.
>
> I think there is a patch intended for 6.10 that removes the 'read'
> and 'write' driver 'ops' and requires that drivers support 'read_iter'
> and 'write_iter'.
Binder uses an ioctl, not read/write, so even if Binder could use it,
there's no real advantage for Binder to do so. But it does sound like
something we want to support eventually in the kernel. However, I've
spent a long time on polishing this API, and it fits my needs well. I
don't want to start over now that I am almost finished.
So, I think that this could be a good follow-up to this patch that
someone else is welcome to submit.
Alice