This patchset contains some abstractions needed by the Rust
implementation of the Binder driver for passing data between userspace,
kernelspace, and directly into other processes.
These abstractions do not exactly match what was included in the Rust
Binder RFC - I have made various improvements and simplifications since
then. Nonetheless, please see the Rust Binder RFC [1] to get an
understanding of how this will be used:
Users of "rust: add userspace pointers"
and "rust: add typed accessors for userspace pointers":
rust_binder: add binderfs support to Rust binder
rust_binder: add threading support
rust_binder: add nodes and context managers
rust_binder: add oneway transactions
rust_binder: add death notifications
rust_binder: send nodes in transactions
rust_binder: add BINDER_TYPE_PTR support
rust_binder: add BINDER_TYPE_FDA support
rust_binder: add process freezing
Users of "rust: add abstraction for `struct page`":
rust_binder: add oneway transactions
rust_binder: add vma shrinker
Links: https://lore.kernel.org/rust-for-linux/[email protected]/ [1]
Signed-off-by: Alice Ryhl <[email protected]>
---
Changes in v3:
- Fix bug in read_all.
- Add missing `#include <linux/nospec.h>`.
- Mention that the second patch passes CONFIG_TEST_USER_COPY.
- Add gfp flags for Page.
- Minor documentation adjustments.
- Link to v2: https://lore.kernel.org/rust-for-linux/[email protected]/
Changes in v2:
- Rename user_ptr module to uaccess.
- Use srctree-relative links.
- Improve documentation.
- Rename UserSlicePtr to UserSlice.
- Make read_to_end append to the buffer.
- Use named fields for uaccess types.
- Add examples.
- Use _copy_from/to_user to skip check_object_size.
- Rename traits and move to kernel::types.
- Remove PAGE_MASK constant.
- Rename page methods to say _raw.
- Link to v1: https://lore.kernel.org/rust-for-linux/[email protected]/
---
Alice Ryhl (2):
rust: uaccess: add typed accessors for userspace pointers
rust: add abstraction for `struct page`
Arnd Bergmann (1):
uaccess: always export _copy_[from|to]_user with CONFIG_RUST
Wedson Almeida Filho (1):
rust: uaccess: add userspace pointers
include/linux/uaccess.h | 38 ++--
lib/usercopy.c | 30 +---
rust/bindings/bindings_helper.h | 3 +
rust/helpers.c | 34 ++++
rust/kernel/lib.rs | 2 +
rust/kernel/page.rs | 223 +++++++++++++++++++++++
rust/kernel/types.rs | 67 +++++++
rust/kernel/uaccess.rs | 388 ++++++++++++++++++++++++++++++++++++++++
8 files changed, 745 insertions(+), 40 deletions(-)
---
base-commit: 768409cff6cc89fe1194da880537a09857b6e4db
change-id: 20231128-alice-mm-bc533456cee8
Best regards,
--
Alice Ryhl <[email protected]>
From: Wedson Almeida Filho <[email protected]>
A pointer to an area in userspace memory, which can be either read-only
or read-write.
All methods on this struct are safe: invalid pointers return `EFAULT`.
Concurrent access, *including data races to/from userspace memory*, is
permitted, because fundamentally another userspace thread/process could
always be modifying memory at the same time (in the same way that
userspace Rust's `std::io` permits data races with the contents of
files on disk). In the presence of a race, the exact byte values
read/written are unspecified but the operation is well-defined.
Kernelspace code should validate its copy of data after completing a
read, and not expect that multiple reads of the same address will return
the same value.
These APIs are designed to make it difficult to accidentally write
TOCTOU bugs. Every time you read from a memory location, the pointer is
advanced by the length so that you cannot use that reader to read the
same memory location twice. Preventing double-fetches avoids TOCTOU
bugs. This is accomplished by taking `self` by value to prevent
obtaining multiple readers on a given `UserSlicePtr`, and the readers
only permitting forward reads. If double-fetching a memory location is
necessary for some reason, then that is done by creating multiple
readers to the same memory location.
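For illustration only (not part of the patch), a minimal sketch of that
intentional double-fetch pattern built on `clone_reader`; the function and
the reason for reading twice are hypothetical:

```
use alloc::vec::Vec;
use core::ffi::c_void;
use kernel::error::Result;
use kernel::uaccess::UserSlice;

/// Hypothetical example of a deliberate double fetch.
fn read_twice(uptr: *mut c_void, len: usize) -> Result<(Vec<u8>, Vec<u8>)> {
    let reader = UserSlice::new(uptr, len).reader();
    // The clone covers the same byte range; reading through it does not
    // advance the original reader.
    let reader_again = reader.clone_reader();

    let mut first = Vec::new();
    reader.read_all(&mut first)?;

    // Userspace may have modified the memory in the meantime, so the two
    // buffers are not guaranteed to be equal.
    let mut second = Vec::new();
    reader_again.read_all(&mut second)?;

    Ok((first, second))
}
```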
Constructing a `UserSlicePtr` performs no checks on the provided
address and length; it can safely be constructed inside a kernel thread
with no current userspace process. Reads and writes wrap the kernel APIs
`copy_from_user` and `copy_to_user`, which check the memory map of the
current process and enforce that the address range is within the user
range (no additional calls to `access_ok` are needed).
This code is based on something that was originally written by Wedson on
the old rust branch. It was modified by Alice by removing the
`IoBufferReader` and `IoBufferWriter` traits, and various other changes.
Signed-off-by: Wedson Almeida Filho <[email protected]>
Co-developed-by: Alice Ryhl <[email protected]>
Signed-off-by: Alice Ryhl <[email protected]>
---
rust/helpers.c | 14 +++
rust/kernel/lib.rs | 1 +
rust/kernel/uaccess.rs | 315 +++++++++++++++++++++++++++++++++++++++++++++++++
3 files changed, 330 insertions(+)
diff --git a/rust/helpers.c b/rust/helpers.c
index 70e59efd92bc..312b6fcb49d5 100644
--- a/rust/helpers.c
+++ b/rust/helpers.c
@@ -38,6 +38,20 @@ __noreturn void rust_helper_BUG(void)
}
EXPORT_SYMBOL_GPL(rust_helper_BUG);
+unsigned long rust_helper_copy_from_user(void *to, const void __user *from,
+ unsigned long n)
+{
+ return copy_from_user(to, from, n);
+}
+EXPORT_SYMBOL_GPL(rust_helper_copy_from_user);
+
+unsigned long rust_helper_copy_to_user(void __user *to, const void *from,
+ unsigned long n)
+{
+ return copy_to_user(to, from, n);
+}
+EXPORT_SYMBOL_GPL(rust_helper_copy_to_user);
+
void rust_helper_mutex_lock(struct mutex *lock)
{
mutex_lock(lock);
diff --git a/rust/kernel/lib.rs b/rust/kernel/lib.rs
index be68d5e567b1..37f84223b83f 100644
--- a/rust/kernel/lib.rs
+++ b/rust/kernel/lib.rs
@@ -49,6 +49,7 @@
pub mod task;
pub mod time;
pub mod types;
+pub mod uaccess;
pub mod workqueue;
#[doc(hidden)]
diff --git a/rust/kernel/uaccess.rs b/rust/kernel/uaccess.rs
new file mode 100644
index 000000000000..020f3847683f
--- /dev/null
+++ b/rust/kernel/uaccess.rs
@@ -0,0 +1,315 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! User pointers.
+//!
+//! C header: [`include/linux/uaccess.h`](srctree/include/linux/uaccess.h)
+
+use crate::{bindings, error::code::*, error::Result};
+use alloc::vec::Vec;
+use core::ffi::{c_ulong, c_void};
+
+/// A pointer to an area in userspace memory, which can be either read-only or
+/// read-write.
+///
+/// All methods on this struct are safe: attempting to read or write invalid
+/// pointers will return `EFAULT`. Concurrent access, *including data races
+/// to/from userspace memory*, is permitted, because fundamentally another
+/// userspace thread/process could always be modifying memory at the same time
+/// (in the same way that userspace Rust's [`std::io`] permits data races with
+/// the contents of files on disk). In the presence of a race, the exact byte
+/// values read/written are unspecified but the operation is well-defined.
+/// Kernelspace code should validate its copy of data after completing a read,
+/// and not expect that multiple reads of the same address will return the same
+/// value.
+///
+/// These APIs are designed to make it difficult to accidentally write TOCTOU
+/// (time-of-check to time-of-use) bugs. Every time a memory location is read,
+/// the reader's position is advanced by the read length and the next read will
+/// start from there. This helps prevent accidentally reading the same location
+/// twice and causing a TOCTOU bug.
+///
+/// Creating a [`UserSliceReader`] and/or [`UserSliceWriter`] consumes the
+/// `UserSlice`, helping ensure that there aren't multiple readers or writers to
+/// the same location.
+///
+/// If double-fetching a memory location is necessary for some reason, then that
+/// is done by creating multiple readers to the same memory location, e.g. using
+/// [`clone_reader`].
+///
+/// # Examples
+///
+/// Takes a region of userspace memory from the current process and modifies it
+/// by adding one to every byte in the region.
+///
+/// ```no_run
+/// use alloc::vec::Vec;
+/// use core::ffi::c_void;
+/// use kernel::error::Result;
+/// use kernel::uaccess::UserSlice;
+///
+/// pub fn bytes_add_one(uptr: *mut c_void, len: usize) -> Result<()> {
+/// let (read, mut write) = UserSlice::new(uptr, len).reader_writer();
+///
+/// let mut buf = Vec::new();
+/// read.read_all(&mut buf)?;
+///
+/// for b in &mut buf {
+/// *b = b.wrapping_add(1);
+/// }
+///
+/// write.write_slice(&buf)?;
+/// Ok(())
+/// }
+/// ```
+///
+/// Example illustrating a TOCTOU (time-of-check to time-of-use) bug.
+///
+/// ```no_run
+/// use alloc::vec::Vec;
+/// use core::ffi::c_void;
+/// use kernel::error::{code::EINVAL, Result};
+/// use kernel::uaccess::UserSlice;
+///
+/// /// Returns whether the data in this region is valid.
+/// fn is_valid(uptr: *mut c_void, len: usize) -> Result<bool> {
+/// let read = UserSlice::new(uptr, len).reader();
+///
+/// let mut buf = Vec::new();
+/// read.read_all(&mut buf)?;
+///
+/// todo!()
+/// }
+///
+/// /// Returns the bytes behind this user pointer if they are valid.
+/// pub fn get_bytes_if_valid(uptr: *mut c_void, len: usize) -> Result<Vec<u8>> {
+/// if !is_valid(uptr, len)? {
+/// return Err(EINVAL);
+/// }
+///
+/// let read = UserSlice::new(uptr, len).reader();
+///
+/// let mut buf = Vec::new();
+/// read.read_all(&mut buf)?;
+///
+/// // THIS IS A BUG! The bytes could have changed since we checked them.
+/// //
+/// // To avoid this kind of bug, don't call `UserSlice::new` multiple
+/// // times with the same address.
+/// Ok(buf)
+/// }
+/// ```
+///
+/// [`std::io`]: https://doc.rust-lang.org/std/io/index.html
+/// [`clone_reader`]: UserSliceReader::clone_reader
+pub struct UserSlice {
+ ptr: *mut c_void,
+ length: usize,
+}
+
+impl UserSlice {
+ /// Constructs a user slice from a raw pointer and a length in bytes.
+ ///
+ /// Constructing a [`UserSlice`] performs no checks on the provided address
+ /// and length; it can safely be constructed inside a kernel thread with no
+ /// current userspace process. Reads and writes wrap the kernel APIs
+ /// `copy_from_user` and `copy_to_user`, which check the memory map of the
+ /// current process and enforce that the address range is within the user
+ /// range (no additional calls to `access_ok` are needed).
+ ///
+ /// Callers must be careful to avoid time-of-check-time-of-use
+ /// (TOCTOU) issues. The simplest way is to create a single instance of
+ /// [`UserSlice`] per user memory block as it reads each byte at
+ /// most once.
+ pub fn new(ptr: *mut c_void, length: usize) -> Self {
+ UserSlice { ptr, length }
+ }
+
+ /// Reads the entirety of the user slice, appending it to the end of the
+ /// provided buffer.
+ ///
+ /// Fails with `EFAULT` if the read encounters a page fault.
+ pub fn read_all(self, buf: &mut Vec<u8>) -> Result {
+ self.reader().read_all(buf)
+ }
+
+ /// Constructs a [`UserSliceReader`].
+ pub fn reader(self) -> UserSliceReader {
+ UserSliceReader {
+ ptr: self.ptr,
+ length: self.length,
+ }
+ }
+
+ /// Constructs a [`UserSliceWriter`].
+ pub fn writer(self) -> UserSliceWriter {
+ UserSliceWriter {
+ ptr: self.ptr,
+ length: self.length,
+ }
+ }
+
+ /// Constructs both a [`UserSliceReader`] and a [`UserSliceWriter`].
+ ///
+ /// Usually when this is used, you will first read the data, and then
+ /// overwrite it afterwards.
+ pub fn reader_writer(self) -> (UserSliceReader, UserSliceWriter) {
+ (
+ UserSliceReader {
+ ptr: self.ptr,
+ length: self.length,
+ },
+ UserSliceWriter {
+ ptr: self.ptr,
+ length: self.length,
+ },
+ )
+ }
+}
+
+/// A reader for [`UserSlice`].
+///
+/// Used to incrementally read from the user slice.
+pub struct UserSliceReader {
+ ptr: *mut c_void,
+ length: usize,
+}
+
+impl UserSliceReader {
+ /// Skips the provided number of bytes.
+ ///
+ /// Fails with `EFAULT` if trying to skip more than the length of the buffer.
+ pub fn skip(&mut self, num_skip: usize) -> Result {
+ // Update `self.length` first since that's the fallible part of this
+ // operation.
+ self.length = self.length.checked_sub(num_skip).ok_or(EFAULT)?;
+ self.ptr = self.ptr.wrapping_byte_add(num_skip);
+ Ok(())
+ }
+
+ /// Creates a reader that can access the same range of data.
+ ///
+ /// Reading from the clone does not advance the current reader.
+ ///
+ /// The caller should take care to not introduce TOCTOU issues, as described
+ /// in the documentation for [`UserSlice`].
+ pub fn clone_reader(&self) -> UserSliceReader {
+ UserSliceReader {
+ ptr: self.ptr,
+ length: self.length,
+ }
+ }
+
+ /// Returns the number of bytes left to be read from this reader.
+ ///
+ /// Note that even reading less than this number of bytes may fail.
+ pub fn len(&self) -> usize {
+ self.length
+ }
+
+ /// Returns `true` if no data is available in the user slice.
+ pub fn is_empty(&self) -> bool {
+ self.length == 0
+ }
+
+ /// Reads raw data from the user slice into a raw kernel buffer.
+ ///
+ /// Fails with `EFAULT` if the read encounters a page fault.
+ ///
+ /// # Safety
+ ///
+ /// The `out` pointer must be valid for writing `len` bytes.
+ pub unsafe fn read_raw(&mut self, out: *mut u8, len: usize) -> Result {
+ if len > self.length {
+ return Err(EFAULT);
+ }
+ let Ok(len_ulong) = c_ulong::try_from(len) else {
+ return Err(EFAULT);
+ };
+ // SAFETY: The caller promises that `out` is valid for writing `len` bytes.
+ let res = unsafe { bindings::copy_from_user(out.cast::<c_void>(), self.ptr, len_ulong) };
+ if res != 0 {
+ return Err(EFAULT);
+ }
+ // Userspace pointers are not directly dereferencable by the kernel, so
+ // we cannot use `add`, which has C-style rules for defined behavior.
+ self.ptr = self.ptr.wrapping_byte_add(len);
+ self.length -= len;
+ Ok(())
+ }
+
+ /// Reads the entirety of the user slice, appending it to the end of the
+ /// provided buffer.
+ ///
+ /// Fails with `EFAULT` if the read encounters a page fault.
+ pub fn read_all(mut self, buf: &mut Vec<u8>) -> Result {
+ let len = self.length;
+ buf.try_reserve(len)?;
+
+ // SAFETY: The call to `try_reserve` was successful, so the spare
+ // capacity is at least `len` bytes long.
+ unsafe { self.read_raw(buf.spare_capacity_mut().as_mut_ptr().cast(), len)? };
+
+ // SAFETY: Since the call to `read_raw` was successful, the next
+ // `len` bytes of the vector have been initialized.
+ unsafe { buf.set_len(buf.len() + len) };
+ Ok(())
+ }
+}
+
+/// A writer for [`UserSlice`].
+///
+/// Used to incrementally write into the user slice.
+pub struct UserSliceWriter {
+ ptr: *mut c_void,
+ length: usize,
+}
+
+impl UserSliceWriter {
+ /// Returns the amount of space remaining in this buffer.
+ ///
+ /// Note that even writing less than this number of bytes may fail.
+ pub fn len(&self) -> usize {
+ self.length
+ }
+
+ /// Returns `true` if no more data can be written to this buffer.
+ pub fn is_empty(&self) -> bool {
+ self.length == 0
+ }
+
+ /// Writes raw data to this user pointer from a raw kernel buffer.
+ ///
+ /// Fails with `EFAULT` if the write encounters a page fault.
+ ///
+ /// # Safety
+ ///
+ /// The `data` pointer must be valid for reading `len` bytes.
+ pub unsafe fn write_raw(&mut self, data: *const u8, len: usize) -> Result {
+ if len > self.length {
+ return Err(EFAULT);
+ }
+ let Ok(len_ulong) = c_ulong::try_from(len) else {
+ return Err(EFAULT);
+ };
+ let res = unsafe { bindings::copy_to_user(self.ptr, data.cast::<c_void>(), len_ulong) };
+ if res != 0 {
+ return Err(EFAULT);
+ }
+ // Userspace pointers are not directly dereferencable by the kernel, so
+ // we cannot use `add`, which has C-style rules for defined behavior.
+ self.ptr = self.ptr.wrapping_byte_add(len);
+ self.length -= len;
+ Ok(())
+ }
+
+ /// Writes the provided slice to this user pointer.
+ ///
+ /// Fails with `EFAULT` if the write encounters a page fault.
+ pub fn write_slice(&mut self, data: &[u8]) -> Result {
+ let len = data.len();
+ let ptr = data.as_ptr();
+ // SAFETY: The pointer originates from a reference to a slice of length
+ // `len`, so the pointer is valid for reading `len` bytes.
+ unsafe { self.write_raw(ptr, len) }
+ }
+}
--
2.44.0.278.ge034bb2e1d-goog
From: Arnd Bergmann <[email protected]>
Rust code needs to be able to access _copy_from_user and _copy_to_user
so that it can skip the check_copy_size check in cases where the length
is known at compile-time, mirroring the logic for when C code will skip
check_copy_size. To do this, we ensure that exported versions of these
methods are available when CONFIG_RUST is enabled.
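As an illustration of the Rust-side consumer (not part of this patch; the
typed accessors added later in this series do essentially this), a hedged
sketch of a fixed-size copy that relies on the exported symbol:

```
use core::ffi::{c_ulong, c_void};
use core::mem::{size_of, MaybeUninit};
use kernel::bindings;
use kernel::error::{code::EFAULT, Result};

/// Hypothetical helper: copies a `u64` from a userspace address.
fn copy_u64_from_user(from: *const c_void) -> Result<u64> {
    let mut out = MaybeUninit::<u64>::uninit();
    // The length is a compile-time constant, so `check_copy_size` adds
    // nothing here; call the exported `_copy_from_user` directly.
    //
    // SAFETY: `out` is valid for writing `size_of::<u64>()` bytes, and
    // `_copy_from_user` rejects invalid userspace addresses by returning
    // a non-zero number of uncopied bytes.
    let res = unsafe {
        bindings::_copy_from_user(
            out.as_mut_ptr().cast::<c_void>(),
            from,
            size_of::<u64>() as c_ulong,
        )
    };
    if res != 0 {
        return Err(EFAULT);
    }
    // SAFETY: `res == 0`, so every byte of `out` was initialized, and any
    // bit pattern is a valid `u64`.
    Ok(unsafe { out.assume_init() })
}
```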
Alice has verified that this patch passes the CONFIG_TEST_USER_COPY test
on x86 using the Android cuttlefish emulator.
Signed-off-by: Arnd Bergmann <[email protected]>
Tested-by: Alice Ryhl <[email protected]>
Signed-off-by: Alice Ryhl <[email protected]>
---
include/linux/uaccess.h | 38 ++++++++++++++++++++++++--------------
lib/usercopy.c | 30 ++++--------------------------
2 files changed, 28 insertions(+), 40 deletions(-)
diff --git a/include/linux/uaccess.h b/include/linux/uaccess.h
index 3064314f4832..2ebfce98b5cc 100644
--- a/include/linux/uaccess.h
+++ b/include/linux/uaccess.h
@@ -5,6 +5,7 @@
#include <linux/fault-inject-usercopy.h>
#include <linux/instrumented.h>
#include <linux/minmax.h>
+#include <linux/nospec.h>
#include <linux/sched.h>
#include <linux/thread_info.h>
@@ -138,13 +139,18 @@ __copy_to_user(void __user *to, const void *from, unsigned long n)
return raw_copy_to_user(to, from, n);
}
-#ifdef INLINE_COPY_FROM_USER
static inline __must_check unsigned long
-_copy_from_user(void *to, const void __user *from, unsigned long n)
+_inline_copy_from_user(void *to, const void __user *from, unsigned long n)
{
unsigned long res = n;
might_fault();
if (!should_fail_usercopy() && likely(access_ok(from, n))) {
+ /*
+ * Ensure that bad access_ok() speculation will not
+ * lead to nasty side effects *after* the copy is
+ * finished:
+ */
+ barrier_nospec();
instrument_copy_from_user_before(to, from, n);
res = raw_copy_from_user(to, from, n);
instrument_copy_from_user_after(to, from, n, res);
@@ -153,14 +159,11 @@ _copy_from_user(void *to, const void __user *from, unsigned long n)
memset(to + (n - res), 0, res);
return res;
}
-#else
extern __must_check unsigned long
_copy_from_user(void *, const void __user *, unsigned long);
-#endif
-#ifdef INLINE_COPY_TO_USER
static inline __must_check unsigned long
-_copy_to_user(void __user *to, const void *from, unsigned long n)
+_inline_copy_to_user(void __user *to, const void *from, unsigned long n)
{
might_fault();
if (should_fail_usercopy())
@@ -171,25 +174,32 @@ _copy_to_user(void __user *to, const void *from, unsigned long n)
}
return n;
}
-#else
extern __must_check unsigned long
_copy_to_user(void __user *, const void *, unsigned long);
-#endif
static __always_inline unsigned long __must_check
copy_from_user(void *to, const void __user *from, unsigned long n)
{
- if (check_copy_size(to, n, false))
- n = _copy_from_user(to, from, n);
- return n;
+ if (!check_copy_size(to, n, false))
+ return n;
+#ifdef INLINE_COPY_FROM_USER
+ return _inline_copy_from_user(to, from, n);
+#else
+ return _copy_from_user(to, from, n);
+#endif
}
static __always_inline unsigned long __must_check
copy_to_user(void __user *to, const void *from, unsigned long n)
{
- if (check_copy_size(from, n, true))
- n = _copy_to_user(to, from, n);
- return n;
+ if (!check_copy_size(from, n, true))
+ return n;
+
+#ifdef INLINE_COPY_TO_USER
+ return _inline_copy_to_user(to, from, n);
+#else
+ return _copy_to_user(to, from, n);
+#endif
}
#ifndef copy_mc_to_kernel
diff --git a/lib/usercopy.c b/lib/usercopy.c
index d29fe29c6849..de7f30618293 100644
--- a/lib/usercopy.c
+++ b/lib/usercopy.c
@@ -7,40 +7,18 @@
/* out-of-line parts */
-#ifndef INLINE_COPY_FROM_USER
+#if !defined(INLINE_COPY_FROM_USER) || defined(CONFIG_RUST)
unsigned long _copy_from_user(void *to, const void __user *from, unsigned long n)
{
- unsigned long res = n;
- might_fault();
- if (!should_fail_usercopy() && likely(access_ok(from, n))) {
- /*
- * Ensure that bad access_ok() speculation will not
- * lead to nasty side effects *after* the copy is
- * finished:
- */
- barrier_nospec();
- instrument_copy_from_user_before(to, from, n);
- res = raw_copy_from_user(to, from, n);
- instrument_copy_from_user_after(to, from, n, res);
- }
- if (unlikely(res))
- memset(to + (n - res), 0, res);
- return res;
+ return _inline_copy_from_user(to, from, n);
}
EXPORT_SYMBOL(_copy_from_user);
#endif
-#ifndef INLINE_COPY_TO_USER
+#if !defined(INLINE_COPY_TO_USER) || defined(CONFIG_RUST)
unsigned long _copy_to_user(void __user *to, const void *from, unsigned long n)
{
- might_fault();
- if (should_fail_usercopy())
- return n;
- if (likely(access_ok(to, n))) {
- instrument_copy_to_user(to, from, n);
- n = raw_copy_to_user(to, from, n);
- }
- return n;
+ return _inline_copy_to_user(to, from, n);
}
EXPORT_SYMBOL(_copy_to_user);
#endif
--
2.44.0.278.ge034bb2e1d-goog
Add safe methods for reading and writing Rust values to and from
userspace pointers.
The C methods for copying to/from userspace use a function called
`check_object_size` to verify that the kernel pointer is not dangling.
However, this check is skipped when the length is a compile-time
constant, with the assumption that such cases trivially have a correct
kernel pointer.
In this patch, we apply the same optimization to the typed accessors.
For both methods, the size of the operation is known at compile time to
be the size of the type being read or written. Since the C side doesn't
provide a variant that skips only this check, we create custom helpers
for this purpose.
The majority of reads and writes to userspace pointers in the Rust
Binder driver use these accessor methods. Benchmarking has found that
skipping the `check_object_size` check makes a big difference for the
cases being skipped here. (And that the check doesn't make a difference
for the cases that use the raw read/write methods.)
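As an illustration (not part of the patch), a sketch of how a driver might
use the typed accessors; the struct and handler are hypothetical:

```
use core::ffi::c_void;
use kernel::error::Result;
use kernel::types::{AsBytes, FromBytes};
use kernel::uaccess::UserSlice;

/// Hypothetical fixed-layout structure exchanged with userspace.
#[repr(C)]
struct Request {
    handle: u32,
    flags: u32,
    cookie: u64,
}

// SAFETY: `Request` only contains integers, so any bit pattern is valid.
unsafe impl FromBytes for Request {}
// SAFETY: `Request` has no padding (4 + 4 + 8 bytes, naturally aligned),
// so it contains no uninitialized bytes and holds no kernel pointers.
unsafe impl AsBytes for Request {}

/// Hypothetical handler: reads a request and writes it back unchanged.
fn handle_request(uptr: *mut c_void, len: usize) -> Result<()> {
    let (mut reader, mut writer) = UserSlice::new(uptr, len).reader_writer();
    let req = reader.read::<Request>()?;
    writer.write(&req)?;
    Ok(())
}
```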
This code is based on something that was originally written by Wedson on
the old rust branch. It was modified by Alice to skip the
`check_object_size` check, and to update various comments, including the
notes about kernel pointers in `WritableToBytes`.
Co-developed-by: Wedson Almeida Filho <[email protected]>
Signed-off-by: Wedson Almeida Filho <[email protected]>
Signed-off-by: Alice Ryhl <[email protected]>
---
rust/kernel/types.rs | 67 ++++++++++++++++++++++++++++++++++++++++++++
rust/kernel/uaccess.rs | 75 +++++++++++++++++++++++++++++++++++++++++++++++++-
2 files changed, 141 insertions(+), 1 deletion(-)
diff --git a/rust/kernel/types.rs b/rust/kernel/types.rs
index aa77bad9bce4..f72b82efdbfa 100644
--- a/rust/kernel/types.rs
+++ b/rust/kernel/types.rs
@@ -409,3 +409,70 @@ pub enum Either<L, R> {
/// Constructs an instance of [`Either`] containing a value of type `R`.
Right(R),
}
+
+/// Types for which any bit pattern is valid.
+///
+/// Not all types are valid for all values. For example, a `bool` must be either
+/// zero or one, so reading arbitrary bytes into something that contains a
+/// `bool` is not okay.
+///
+/// It's okay for the type to have padding, as initializing those bytes has no
+/// effect.
+///
+/// # Safety
+///
+/// All bit-patterns must be valid for this type.
+pub unsafe trait FromBytes {}
+
+// SAFETY: All bit patterns are acceptable values of the types below.
+unsafe impl FromBytes for u8 {}
+unsafe impl FromBytes for u16 {}
+unsafe impl FromBytes for u32 {}
+unsafe impl FromBytes for u64 {}
+unsafe impl FromBytes for usize {}
+unsafe impl FromBytes for i8 {}
+unsafe impl FromBytes for i16 {}
+unsafe impl FromBytes for i32 {}
+unsafe impl FromBytes for i64 {}
+unsafe impl FromBytes for isize {}
+// SAFETY: If all bit patterns are acceptable for individual values in an array,
+// then all bit patterns are also acceptable for arrays of that type.
+unsafe impl<T: FromBytes> FromBytes for [T] {}
+unsafe impl<T: FromBytes, const N: usize> FromBytes for [T; N] {}
+
+/// Types that can be viewed as an immutable slice of initialized bytes.
+///
+/// If a struct implements this trait, then it is okay to copy it byte-for-byte
+/// to userspace. This means that it should not have any padding, as padding
+/// bytes are uninitialized. Reading uninitialized memory is not just undefined
+/// behavior, it may even lead to leaking sensitive information on the stack to
+/// userspace.
+///
+/// The struct should also not hold kernel pointers, as kernel pointer addresses
+/// are also considered sensitive. However, leaking kernel pointers is not
+/// considered undefined behavior by Rust, so this is a correctness requirement,
+/// but not a safety requirement.
+///
+/// # Safety
+///
+/// Values of this type may not contain any uninitialized bytes.
+pub unsafe trait AsBytes {}
+
+// SAFETY: Instances of the following types have no uninitialized portions.
+unsafe impl AsBytes for u8 {}
+unsafe impl AsBytes for u16 {}
+unsafe impl AsBytes for u32 {}
+unsafe impl AsBytes for u64 {}
+unsafe impl AsBytes for usize {}
+unsafe impl AsBytes for i8 {}
+unsafe impl AsBytes for i16 {}
+unsafe impl AsBytes for i32 {}
+unsafe impl AsBytes for i64 {}
+unsafe impl AsBytes for isize {}
+unsafe impl AsBytes for bool {}
+unsafe impl AsBytes for char {}
+unsafe impl AsBytes for str {}
+// SAFETY: If individual values in an array have no uninitialized portions, then
+// the array itself does not have any uninitialized portions either.
+unsafe impl<T: AsBytes> AsBytes for [T] {}
+unsafe impl<T: AsBytes, const N: usize> AsBytes for [T; N] {}
diff --git a/rust/kernel/uaccess.rs b/rust/kernel/uaccess.rs
index 020f3847683f..72d55b2b33c9 100644
--- a/rust/kernel/uaccess.rs
+++ b/rust/kernel/uaccess.rs
@@ -4,9 +4,15 @@
//!
//! C header: [`include/linux/uaccess.h`](srctree/include/linux/uaccess.h)
-use crate::{bindings, error::code::*, error::Result};
+use crate::{
+ bindings,
+ error::code::*,
+ error::Result,
+ types::{AsBytes, FromBytes},
+};
use alloc::vec::Vec;
use core::ffi::{c_ulong, c_void};
+use core::mem::{size_of, MaybeUninit};
/// A pointer to an area in userspace memory, which can be either read-only or
/// read-write.
@@ -237,6 +243,41 @@ pub unsafe fn read_raw(&mut self, out: *mut u8, len: usize) -> Result {
Ok(())
}
+ /// Reads a value of the specified type.
+ ///
+ /// Fails with `EFAULT` if the read encounters a page fault.
+ pub fn read<T: FromBytes>(&mut self) -> Result<T> {
+ let len = size_of::<T>();
+ if len > self.length {
+ return Err(EFAULT);
+ }
+ let Ok(len_ulong) = c_ulong::try_from(len) else {
+ return Err(EFAULT);
+ };
+ let mut out: MaybeUninit<T> = MaybeUninit::uninit();
+ // SAFETY: The local variable `out` is valid for writing `size_of::<T>()` bytes.
+ //
+ // By using the _copy_from_user variant, we skip the check_object_size
+ // check that verifies the kernel pointer. This mirrors the logic on the
+ // C side that skips the check when the length is a compile-time
+ // constant.
+ let res = unsafe {
+ bindings::_copy_from_user(out.as_mut_ptr().cast::<c_void>(), self.ptr, len_ulong)
+ };
+ if res != 0 {
+ return Err(EFAULT);
+ }
+ // Since this is not a pointer to a valid object in our program,
+ // we cannot use `add`, which has C-style rules for defined
+ // behavior.
+ self.ptr = self.ptr.wrapping_byte_add(len);
+ self.length -= len;
+ // SAFETY: The read above has initialized all bytes in `out`, and since
+ // `T` implements `FromBytes`, any bit-pattern is a valid value for this
+ // type.
+ Ok(unsafe { out.assume_init() })
+ }
+
/// Reads the entirety of the user slice, appending it to the end of the
/// provided buffer.
///
@@ -312,4 +353,36 @@ pub fn write_slice(&mut self, data: &[u8]) -> Result {
// `len`, so the pointer is valid for reading `len` bytes.
unsafe { self.write_raw(ptr, len) }
}
+
+ /// Writes the provided Rust value to this userspace pointer.
+ ///
+ /// Fails with `EFAULT` if the write encounters a page fault.
+ pub fn write<T: AsBytes>(&mut self, value: &T) -> Result {
+ let len = size_of::<T>();
+ if len > self.length {
+ return Err(EFAULT);
+ }
+ let Ok(len_ulong) = c_ulong::try_from(len) else {
+ return Err(EFAULT);
+ };
+ // SAFETY: The reference points to a value of type `T`, so it is valid
+ // for reading `size_of::<T>()` bytes.
+ //
+ // By using the _copy_to_user variant, we skip the check_object_size
+ // check that verifies the kernel pointer. This mirrors the logic on the
+ // C side that skips the check when the length is a compile-time
+ // constant.
+ let res = unsafe {
+ bindings::_copy_to_user(self.ptr, (value as *const T).cast::<c_void>(), len_ulong)
+ };
+ if res != 0 {
+ return Err(EFAULT);
+ }
+ // Since this is not a pointer to a valid object in our program,
+ // we cannot use `add`, which has C-style rules for defined
+ // behavior.
+ self.ptr = self.ptr.wrapping_byte_add(len);
+ self.length -= len;
+ Ok(())
+ }
}
--
2.44.0.278.ge034bb2e1d-goog
Adds a new struct called `Page` that wraps a pointer to `struct page`.
This struct is assumed to hold ownership over the page, so that Rust
code can allocate and manage pages directly.
The page type has various methods for reading and writing into the page.
These methods will temporarily map the page to allow the operation. All
of these methods use a helper that takes an offset and length, performs
bounds checks, and returns a pointer to the given offset in the page.
This patch only adds support for pages of order zero, as that is all
Rust Binder needs. However, it is written to make it easy to add support
for higher-order pages in the future. To do that, you would add a const
generic parameter to `Page` that specifies the order. Most of the
methods do not need to be adjusted, as the logic for dealing with
mapping multiple pages at once can be isolated to just the
`with_pointer_into_page` method. Finally, the struct can be renamed to
`Pages<ORDER>`, and the type alias `Page = Pages<0>` can be introduced.
Rust Binder needs to manage pages directly as that is how transactions
are delivered: Each process has an mmap'd region for incoming
transactions. When an incoming transaction arrives, the Binder driver
will choose a region in the mmap, allocate and map the relevant pages
manually, and copy the incoming transaction directly into the page. This
architecture allows the driver to copy transactions directly from the
address space of one process to another, without an intermediate copy
to a kernel buffer.
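For illustration only (not part of the patch), a sketch of how a caller
might combine these methods to fill a freshly allocated page from a
userspace reader; the helper is hypothetical:

```
use kernel::error::Result;
use kernel::page::{self, Page};
use kernel::uaccess::UserSliceReader;

/// Hypothetical helper: allocates a page, fills its first `len` bytes
/// from the given userspace reader, and zeroes the remainder.
fn page_from_user(reader: &mut UserSliceReader, len: usize) -> Result<Page> {
    let page = Page::alloc_page(page::flags::GFP_KERNEL | page::flags::__GFP_HIGHMEM)?;
    // SAFETY: The page was just allocated and has not been shared with
    // any other thread, so nothing can race with these writes. Both calls
    // perform their own bounds checks against PAGE_SIZE.
    unsafe {
        page.copy_from_user_slice(reader, 0, len)?;
        page.fill_zero(len, page::PAGE_SIZE - len)?;
    }
    Ok(page)
}
```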
This code is based on Wedson's page abstractions from the old rust
branch, but it has been modified by Alice by removing the incomplete
support for higher-order pages, by introducing the `with_*` helpers
to consolidate the bounds checking logic into a single place, and by
introducing gfp flags.
Co-developed-by: Wedson Almeida Filho <[email protected]>
Signed-off-by: Wedson Almeida Filho <[email protected]>
Signed-off-by: Alice Ryhl <[email protected]>
---
rust/bindings/bindings_helper.h | 3 +
rust/helpers.c | 20 ++++
rust/kernel/lib.rs | 1 +
rust/kernel/page.rs | 223 ++++++++++++++++++++++++++++++++++++++++
4 files changed, 247 insertions(+)
diff --git a/rust/bindings/bindings_helper.h b/rust/bindings/bindings_helper.h
index 65b98831b975..1073005ca449 100644
--- a/rust/bindings/bindings_helper.h
+++ b/rust/bindings/bindings_helper.h
@@ -20,5 +20,8 @@
/* `bindgen` gets confused at certain things. */
const size_t RUST_CONST_HELPER_ARCH_SLAB_MINALIGN = ARCH_SLAB_MINALIGN;
+const size_t RUST_CONST_HELPER_PAGE_SIZE = PAGE_SIZE;
+const size_t RUST_CONST_HELPER_PAGE_MASK = PAGE_MASK;
const gfp_t RUST_CONST_HELPER_GFP_KERNEL = GFP_KERNEL;
const gfp_t RUST_CONST_HELPER___GFP_ZERO = __GFP_ZERO;
+const gfp_t RUST_CONST_HELPER___GFP_HIGHMEM = ___GFP_HIGHMEM;
diff --git a/rust/helpers.c b/rust/helpers.c
index 312b6fcb49d5..298d2ee16e61 100644
--- a/rust/helpers.c
+++ b/rust/helpers.c
@@ -25,6 +25,8 @@
#include <linux/build_bug.h>
#include <linux/err.h>
#include <linux/errname.h>
+#include <linux/gfp.h>
+#include <linux/highmem.h>
#include <linux/mutex.h>
#include <linux/refcount.h>
#include <linux/sched/signal.h>
@@ -93,6 +95,24 @@ int rust_helper_signal_pending(struct task_struct *t)
}
EXPORT_SYMBOL_GPL(rust_helper_signal_pending);
+struct page *rust_helper_alloc_pages(gfp_t gfp_mask, unsigned int order)
+{
+ return alloc_pages(gfp_mask, order);
+}
+EXPORT_SYMBOL_GPL(rust_helper_alloc_pages);
+
+void *rust_helper_kmap_local_page(struct page *page)
+{
+ return kmap_local_page(page);
+}
+EXPORT_SYMBOL_GPL(rust_helper_kmap_local_page);
+
+void rust_helper_kunmap_local(const void *addr)
+{
+ kunmap_local(addr);
+}
+EXPORT_SYMBOL_GPL(rust_helper_kunmap_local);
+
refcount_t rust_helper_REFCOUNT_INIT(int n)
{
return (refcount_t)REFCOUNT_INIT(n);
diff --git a/rust/kernel/lib.rs b/rust/kernel/lib.rs
index 37f84223b83f..667fc67fa24f 100644
--- a/rust/kernel/lib.rs
+++ b/rust/kernel/lib.rs
@@ -39,6 +39,7 @@
pub mod kunit;
#[cfg(CONFIG_NET)]
pub mod net;
+pub mod page;
pub mod prelude;
pub mod print;
mod static_assert;
diff --git a/rust/kernel/page.rs b/rust/kernel/page.rs
new file mode 100644
index 000000000000..02d25b142fc8
--- /dev/null
+++ b/rust/kernel/page.rs
@@ -0,0 +1,223 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! Kernel page allocation and management.
+
+use crate::{bindings, error::code::*, error::Result, uaccess::UserSliceReader};
+use core::{
+ alloc::AllocError,
+ ptr::{self, NonNull},
+};
+
+/// A bitwise shift for the page size.
+pub const PAGE_SHIFT: usize = bindings::PAGE_SHIFT as usize;
+/// The number of bytes in a page.
+pub const PAGE_SIZE: usize = bindings::PAGE_SIZE as usize;
+/// A bitmask that can be used to get the page containing a given address by masking away the lower
+/// bits.
+pub const PAGE_MASK: usize = bindings::PAGE_MASK as usize;
+
+/// Flags for the "get free page" function that underlies all memory allocations.
+pub mod flags {
+ pub type gfp_t = bindings::gfp_t;
+
+ /// `GFP_KERNEL` is typical for kernel-internal allocations. The caller requires `ZONE_NORMAL`
+ /// or a lower zone for direct access but can direct reclaim.
+ pub const GFP_KERNEL: gfp_t = bindings::GFP_KERNEL;
+ /// `__GFP_ZERO` returns a zeroed page on success.
+ pub const __GFP_ZERO: gfp_t = bindings::__GFP_ZERO;
+ /// `__GFP_HIGHMEM` indicates that the allocated memory may be located in high memory.
+ pub const __GFP_HIGHMEM: gfp_t = bindings::__GFP_HIGHMEM;
+}
+
+/// A pointer to a page that owns the page allocation.
+///
+/// # Invariants
+///
+/// The pointer points at a page, and has ownership over the page.
+pub struct Page {
+ page: NonNull<bindings::page>,
+}
+
+// SAFETY: It is safe to transfer page allocations between threads.
+unsafe impl Send for Page {}
+
+// SAFETY: As long as the safety requirements for `&self` methods on this type
+// are followed, there is no problem with calling them in parallel.
+unsafe impl Sync for Page {}
+
+impl Page {
+ /// Allocates a new page.
+ pub fn alloc_page(gfp_flags: flags::gfp_t) -> Result<Self, AllocError> {
+ // SAFETY: The specified order is zero and we want one page.
+ let page = unsafe { bindings::alloc_pages(gfp_flags, 0) };
+ let page = NonNull::new(page).ok_or(AllocError)?;
+ // INVARIANT: We checked that the allocation succeeded.
+ Ok(Self { page })
+ }
+
+ /// Returns a raw pointer to the page.
+ pub fn as_ptr(&self) -> *mut bindings::page {
+ self.page.as_ptr()
+ }
+
+ /// Runs a piece of code with this page mapped to an address.
+ ///
+ /// The page is unmapped when this call returns.
+ ///
+ /// It is up to the caller to use the provided raw pointer correctly.
+ pub fn with_page_mapped<T>(&self, f: impl FnOnce(*mut u8) -> T) -> T {
+ // SAFETY: `page` is valid due to the type invariants on `Page`.
+ let mapped_addr = unsafe { bindings::kmap_local_page(self.as_ptr()) };
+
+ let res = f(mapped_addr.cast());
+
+ // SAFETY: This unmaps the page mapped above.
+ //
+ // Since this API takes the user code as a closure, it can only be used
+ // in a manner where the pages are unmapped in reverse order. This is as
+ // required by `kunmap_local`.
+ //
+ // In other words, if this call to `kunmap_local` happens when a
+ // different page should be unmapped first, then there must necessarily
+ // be a call to `kmap_local_page` other than the call just above in
+ // `with_page_mapped` that made that possible. In this case, it is the
+ // unsafe block that wraps that other call that is incorrect.
+ unsafe { bindings::kunmap_local(mapped_addr) };
+
+ res
+ }
+
+ /// Runs a piece of code with a raw pointer to a slice of this page, with
+ /// bounds checking.
+ ///
+ /// If `f` is called, then it will be called with a pointer that points at
+ /// `off` bytes into the page, and the pointer will be valid for at least
+ /// `len` bytes. The pointer is only valid on this task, as this method uses
+ /// a local mapping.
+ ///
+ /// If `off` and `len` refer to a region outside of this page, then this
+ /// method returns `EINVAL` and does not call `f`.
+ ///
+ /// It is up to the caller to use the provided raw pointer correctly.
+ pub fn with_pointer_into_page<T>(
+ &self,
+ off: usize,
+ len: usize,
+ f: impl FnOnce(*mut u8) -> Result<T>,
+ ) -> Result<T> {
+ let bounds_ok = off <= PAGE_SIZE && len <= PAGE_SIZE && (off + len) <= PAGE_SIZE;
+
+ if bounds_ok {
+ self.with_page_mapped(move |page_addr| {
+ // SAFETY: The `off` integer is at most `PAGE_SIZE`, so this pointer offset will
+ // result in a pointer that is in bounds or one off the end of the page.
+ f(unsafe { page_addr.add(off) })
+ })
+ } else {
+ Err(EINVAL)
+ }
+ }
+
+ /// Maps the page and reads from it into the given buffer.
+ ///
+ /// This method will perform bounds checks on the page offset. If `offset ..
+ /// offset+len` goes outside of the page, then this call returns `EINVAL`.
+ ///
+ /// # Safety
+ ///
+ /// * Callers must ensure that `dst` is valid for writing `len` bytes.
+ /// * Callers must ensure that this call does not race with a write to the
+ /// same page that overlaps with this read.
+ pub unsafe fn read_raw(&self, dst: *mut u8, offset: usize, len: usize) -> Result {
+ self.with_pointer_into_page(offset, len, move |src| {
+ // SAFETY: If `with_pointer_into_page` calls into this closure, then
+ // it has performed a bounds check and guarantees that `src` is
+ // valid for `len` bytes.
+ //
+ // The caller guarantees that there is no data race.
+ unsafe { ptr::copy_nonoverlapping(src, dst, len) };
+ Ok(())
+ })
+ }
+
+ /// Maps the page and writes into it from the given buffer.
+ ///
+ /// This method will perform bounds checks on the page offset. If `offset ..
+ /// offset+len` goes outside of the page, then this call returns `EINVAL`.
+ ///
+ /// # Safety
+ ///
+ /// * Callers must ensure that `src` is valid for reading `len` bytes.
+ /// * Callers must ensure that this call does not race with a read or write
+ /// to the same page that overlaps with this write.
+ pub unsafe fn write_raw(&self, src: *const u8, offset: usize, len: usize) -> Result {
+ self.with_pointer_into_page(offset, len, move |dst| {
+ // SAFETY: If `with_pointer_into_page` calls into this closure, then
+ // it has performed a bounds check and guarantees that `dst` is
+ // valid for `len` bytes.
+ //
+ // The caller guarantees that there is no data race.
+ unsafe { ptr::copy_nonoverlapping(src, dst, len) };
+ Ok(())
+ })
+ }
+
+ /// Maps the page and zeroes the given slice.
+ ///
+ /// This method will perform bounds checks on the page offset. If `offset ..
+ /// offset+len` goes outside of the page, then this call returns `EINVAL`.
+ ///
+ /// # Safety
+ ///
+ /// Callers must ensure that this call does not race with a read or write to
+ /// the same page that overlaps with this write.
+ pub unsafe fn fill_zero(&self, offset: usize, len: usize) -> Result {
+ self.with_pointer_into_page(offset, len, move |dst| {
+ // SAFETY: If `with_pointer_into_page` calls into this closure, then
+ // it has performed a bounds check and guarantees that `dst` is
+ // valid for `len` bytes.
+ //
+ // The caller guarantees that there is no data race.
+ unsafe { ptr::write_bytes(dst, 0u8, len) };
+ Ok(())
+ })
+ }
+
+ /// Copies data from userspace into this page.
+ ///
+ /// This method will perform bounds checks on the page offset. If `offset ..
+ /// offset+len` goes outside of the page, then this call returns `EINVAL`.
+ ///
+ /// Like the other `UserSliceReader` methods, data races are allowed on the
+ /// userspace address. However, they are not allowed on the page you are
+ /// copying into.
+ ///
+ /// # Safety
+ ///
+ /// Callers must ensure that this call does not race with a read or write to
+ /// the same page that overlaps with this write.
+ pub unsafe fn copy_from_user_slice(
+ &self,
+ reader: &mut UserSliceReader,
+ offset: usize,
+ len: usize,
+ ) -> Result {
+ self.with_pointer_into_page(offset, len, move |dst| {
+ // SAFETY: If `with_pointer_into_page` calls into this closure, then
+ // it has performed a bounds check and guarantees that `dst` is
+ // valid for `len` bytes.
+ //
+ // The caller guarantees that there is no data race when writing
+ // to `dst`.
+ unsafe { reader.read_raw(dst, len) }
+ })
+ }
+}
+
+impl Drop for Page {
+ fn drop(&mut self) {
+ // SAFETY: By the type invariants, we have ownership of the page and can
+ // free it.
+ unsafe { bindings::__free_pages(self.page.as_ptr(), 0) };
+ }
+}
--
2.44.0.278.ge034bb2e1d-goog
Alice Ryhl <[email protected]> writes:
> +/// Flags for the "get free page" function that underlies all memory allocations.
> +pub mod flags {
> + pub type gfp_t = bindings::gfp_t;
> +
> + /// `GFP_KERNEL` is typical for kernel-internal allocations. The caller requires `ZONE_NORMAL`
> + /// or a lower zone for direct access but can direct reclaim.
> + pub const GFP_KERNEL: gfp_t = bindings::GFP_KERNEL;
> + /// `GFP_ZERO` returns a zeroed page on success.
> + pub const __GFP_ZERO: gfp_t = bindings::__GFP_ZERO;
> + /// `GFP_HIGHMEM` indicates that the allocated memory may be located in high memory.
> + pub const __GFP_HIGHMEM: gfp_t = bindings::__GFP_HIGHMEM;
> +}
>
> [...]
>
> +impl Page {
> + /// Allocates a new page.
> + pub fn alloc_page(gfp_flags: flags::gfp_t) -> Result<Self, AllocError> {
> + // SAFETY: The specified order is zero and we want one page.
> + let page = unsafe { bindings::alloc_pages(gfp_flags, 0) };
> + let page = NonNull::new(page).ok_or(AllocError)?;
> + // INVARIANT: We checked that the allocation succeeded.
> + Ok(Self { page })
> + }
Matthew Wilcox: You suggested on a previous version that I use gfp flags
here, or that I rename it to e.g. BinderPage to make it clear that this
is specific to the kind of pages that Binder needs.
In this version I added some gfp flags, but I'm not actually sure that
the Page abstraction works for all combinations of gfp flags. For
example, I use kmap_local_page when accessing the page, but is that
correct if there's a user that doesn't pass GFP_HIGHMEM?
So perhaps it should be called HighmemPage since the methods on it
hardcode that. Or maybe it really doesn't make sense to generalize it
beyond what Binder needs.
What do you think? How broadly do these implementations generalize? I
would be happy to hear your advice on this.
Andreas Hindborg: I recall you mentioning that you also needed an
abstraction for pages. To what extent do these abstractions fit your
needs? Which gfp flags do you need?
Also, sorry for taking so long to submit this version. I spent a long
time debugging the crash that led to the submission of [1].
Alice
[1]: https://lore.kernel.org/rust-for-linux/[email protected]/
Alice Ryhl <[email protected]> writes:
> Alice Ryhl <[email protected]> writes:
>
> Andreas Hindborg: I recall you mentioning that you also needed an
> abstraction for pages. To what extent do these abstractions fit your
> needs? Which gfp flags do you need?
>
I based the block device driver API and null block driver series on v1
of this patch and v3 should still be good for that. The null block
driver uses `Page` indirectly through `UniqueFolio` with `GFP_KERNEL`
alloc flags. I do not need to customize the flags outside of that.
As an aside, I added methods to safely operate on the page contents [1].
`kernel::block::vec::Segment` indirectly uses this to move data to and
from pages [2].
Best regards,
Andreas
[1] https://github.com/metaspace/linux/commit/e88f4dc928233fcedcb0afec40be9bc2f8f74e3b
[2] https://lore.kernel.org/rust-for-linux/[email protected]/T/#me6497ec69544efd21908f1acc6b3a1ab8b148ba0
On 3/11/24 11:47, Alice Ryhl wrote:
> From: Wedson Almeida Filho <[email protected]>
>
> A pointer to an area in userspace memory, which can be either read-only
> or read-write.
>
> All methods on this struct are safe: invalid pointers return `EFAULT`.
> Concurrent access, *including data races to/from userspace memory*, is
> permitted, because fundamentally another userspace thread/process could
> always be modifying memory at the same time (in the same way that
> userspace Rust's `std::io` permits data races with the contents of
> files on disk). In the presence of a race, the exact byte values
> read/written are unspecified but the operation is well-defined.
> Kernelspace code should validate its copy of data after completing a
> read, and not expect that multiple reads of the same address will return
> the same value.
>
> These APIs are designed to make it difficult to accidentally write
> TOCTOU bugs. Every time you read from a memory location, the pointer is
> advanced by the length so that you cannot use that reader to read the
> same memory location twice. Preventing double-fetches avoids TOCTOU
> bugs. This is accomplished by taking `self` by value to prevent
> obtaining multiple readers on a given `UserSlicePtr`, and the readers
> only permitting forward reads. If double-fetching a memory location is
> necessary for some reason, then that is done by creating multiple
> readers to the same memory location.
>
> Constructing a `UserSlicePtr` performs no checks on the provided
> address and length, it can safely be constructed inside a kernel thread
> with no current userspace process. Reads and writes wrap the kernel APIs
> `copy_from_user` and `copy_to_user`, which check the memory map of the
> current process and enforce that the address range is within the user
> range (no additional calls to `access_ok` are needed).
>
> This code is based on something that was originally written by Wedson on
> the old rust branch. It was modified by Alice by removing the
> `IoBufferReader` and `IoBufferWriter` traits, and various other changes.
>
> Signed-off-by: Wedson Almeida Filho <[email protected]>
> Co-developed-by: Alice Ryhl <[email protected]>
> Signed-off-by: Alice Ryhl <[email protected]>
Reviewed-by: Benno Lossin <[email protected]>
On 3/11/24 11:47, Alice Ryhl wrote:
> Add safe methods for reading and writing Rust values to and from
> userspace pointers.
>
> The C methods for copying to/from userspace use a function called
> `check_object_size` to verify that the kernel pointer is not dangling.
> However, this check is skipped when the length is a compile-time
> constant, with the assumption that such cases trivially have a correct
> kernel pointer.
>
> In this patch, we apply the same optimization to the typed accessors.
> For both methods, the size of the operation is known at compile time to
> be size_of of the type being read or written. Since the C side doesn't
> provide a variant that skips only this check, we create custom helpers
> for this purpose.
>
> The majority of reads and writes to userspace pointers in the Rust
> Binder driver uses these accessor methods. Benchmarking has found that
> skipping the `check_object_size` check makes a big difference for the
> cases being skipped here. (And that the check doesn't make a difference
> for the cases that use the raw read/write methods.)
>
> This code is based on something that was originally written by Wedson on
> the old rust branch. It was modified by Alice to skip the
> `check_object_size` check, and to update various comments, including the
> notes about kernel pointers in `WritableToBytes`.
>
> Co-developed-by: Wedson Almeida Filho <[email protected]>
> Signed-off-by: Wedson Almeida Filho <[email protected]>
> Signed-off-by: Alice Ryhl <[email protected]>
Reviewed-by: Benno Lossin <[email protected]>
On Mon, Mar 11, 2024 at 10:50:56AM +0000, Alice Ryhl wrote:
> Alice Ryhl <[email protected]> writes:
> > +/// Flags for the "get free page" function that underlies all memory allocations.
> > +pub mod flags {
> > + pub type gfp_t = bindings::gfp_t;
> > +
> > + /// `GFP_KERNEL` is typical for kernel-internal allocations. The caller requires `ZONE_NORMAL`
> > + /// or a lower zone for direct access but can direct reclaim.
> > + pub const GFP_KERNEL: gfp_t = bindings::GFP_KERNEL;
> > + /// `GFP_ZERO` returns a zeroed page on success.
> > + pub const __GFP_ZERO: gfp_t = bindings::__GFP_ZERO;
> > + /// `GFP_HIGHMEM` indicates that the allocated memory may be located in high memory.
> > + pub const __GFP_HIGHMEM: gfp_t = bindings::__GFP_HIGHMEM;
> > +}
> >
> > [...]
> >
> > +impl Page {
> > + /// Allocates a new page.
> > + pub fn alloc_page(gfp_flags: flags::gfp_t) -> Result<Self, AllocError> {
> > + // SAFETY: The specified order is zero and we want one page.
> > + let page = unsafe { bindings::alloc_pages(gfp_flags, 0) };
> > + let page = NonNull::new(page).ok_or(AllocError)?;
> > + // INVARIANT: We checked that the allocation succeeded.
> > + Ok(Self { page })
> > + }
>
> Matthew Wilcox: You suggested on a previous version that I use gfp flags
> here, or that I rename it to e.g. BinderPage to make it clear that this
> is specific to the kind of pages that Binder needs.
I think what you have here is good.
> In this version I added some gfp flags, but I'm not actually sure that
> the Page abstraction works for all combinations of gfp flags. For
> example, I use kmap_local_page when accessing the page, but is that
> correct if there's a user that doesn't pass GFP_HIGHMEM?
Yes, kmap_local_page() works for non-highmem pages (it's essentially a
no-op)
On 3/11/24 11:47, Alice Ryhl wrote:
> +/// A pointer to a page that owns the page allocation.
> +///
> +/// # Invariants
> +///
> +/// The pointer points at a page, and has ownership over the page.
Why not "`page` is valid"?
Do you mean by ownership of the page that `page` has ownership of the
allocation, or does that entail any other property/privilege?
> +pub struct Page {
> + page: NonNull<bindings::page>,
> +}
> +
> +// SAFETY: It is safe to transfer page allocations between threads.
Why?
> +unsafe impl Send for Page {}
> +
> +// SAFETY: As long as the safety requirements for `&self` methods on this type
> +// are followed, there is no problem with calling them in parallel.
Why?
> +unsafe impl Sync for Page {}
> +
> +impl Page {
> + /// Allocates a new page.
> + pub fn alloc_page(gfp_flags: flags::gfp_t) -> Result<Self, AllocError> {
> + // SAFETY: The specified order is zero and we want one page.
This doesn't explain why it is sound to call the function. I expect that
it is always sound to call this function with valid arguments.
> + let page = unsafe { bindings::alloc_pages(gfp_flags, 0) };
> + let page = NonNull::new(page).ok_or(AllocError)?;
> + // INVARIANT: We checked that the allocation succeeded.
Doesn't mention ownership.
> + Ok(Self { page })
> + }
> +
> + /// Returns a raw pointer to the page.
> + pub fn as_ptr(&self) -> *mut bindings::page {
> + self.page.as_ptr()
> + }
> +
> + /// Runs a piece of code with this page mapped to an address.
> + ///
> + /// The page is unmapped when this call returns.
> + ///
> + /// It is up to the caller to use the provided raw pointer correctly.
This says nothing about what 'correctly' means. What I gathered from the
implementation is that the supplied pointer is valid for the execution
of `f` for `PAGE_SIZE` bytes.
What other things are you allowed to rely upon?
Is it really OK for this function to be called from multiple threads?
Could that not result in the same page being mapped multiple times? If
that is fine, what about potential data races when two threads write to
the pointer given to `f`?
> + pub fn with_page_mapped<T>(&self, f: impl FnOnce(*mut u8) -> T) -> T {
> + // SAFETY: `page` is valid due to the type invariants on `Page`.
> + let mapped_addr = unsafe { bindings::kmap_local_page(self.as_ptr()) };
> +
> + let res = f(mapped_addr.cast());
> +
> + // SAFETY: This unmaps the page mapped above.
This doesn't explain why it is sound.
> + //
> + // Since this API takes the user code as a closure, it can only be used
> + // in a manner where the pages are unmapped in reverse order. This is as
> + // required by `kunmap_local`.
> + //
> + // In other words, if this call to `kunmap_local` happens when a
> + // different page should be unmapped first, then there must necessarily
> + // be a call to `kmap_local_page` other than the call just above in
> + // `with_page_mapped` that made that possible. In this case, it is the
> + // unsafe block that wraps that other call that is incorrect.
> + unsafe { bindings::kunmap_local(mapped_addr) };
> +
> + res
> + }
> +
> + /// Runs a piece of code with a raw pointer to a slice of this page, with
> + /// bounds checking.
> + ///
> + /// If `f` is called, then it will be called with a pointer that points at
> + /// `off` bytes into the page, and the pointer will be valid for at least
> + /// `len` bytes. The pointer is only valid on this task, as this method uses
> + /// a local mapping.
This information about the pointer only being valid on this task should
also apply to `with_page_mapped`, right?
> + ///
> + /// If `off` and `len` refers to a region outside of this page, then this
> + /// method returns `EINVAL` and does not call `f`.
> + ///
> + /// It is up to the caller to use the provided raw pointer correctly.
Again, please specify what 'correctly' means.
--
Cheers,
Benno
> + pub fn with_pointer_into_page<T>(
> + &self,
> + off: usize,
> + len: usize,
> + f: impl FnOnce(*mut u8) -> Result<T>,
> + ) -> Result<T> {
> + let bounds_ok = off <= PAGE_SIZE && len <= PAGE_SIZE && (off + len) <= PAGE_SIZE;
> +
> + if bounds_ok {
> + self.with_page_mapped(move |page_addr| {
> + // SAFETY: The `off` integer is at most `PAGE_SIZE`, so this pointer offset will
> + // result in a pointer that is in bounds or one off the end of the page.
> + f(unsafe { page_addr.add(off) })
> + })
> + } else {
> + Err(EINVAL)
> + }
> + }
On Mon, Mar 11, 2024 at 10:47:13AM +0000, Alice Ryhl wrote:
> From: Wedson Almeida Filho <[email protected]>
>
[...]
> +
> +/// A reader for [`UserSlice`].
> +///
> +/// Used to incrementally read from the user slice.
> +pub struct UserSliceReader {
> + ptr: *mut c_void,
> + length: usize,
> +}
> +
> +impl UserSliceReader {
[...]
> +
> + /// Reads raw data from the user slice into a raw kernel buffer.
> + ///
> + /// Fails with `EFAULT` if the read encounters a page fault.
> + ///
> + /// # Safety
> + ///
> + /// The `out` pointer must be valid for writing `len` bytes.
> + pub unsafe fn read_raw(&mut self, out: *mut u8, len: usize) -> Result {
I don't think we want to promote the pub usage of this unsafe function,
right? We can provide a safe version:
pub fn read_slice(&mut self, to: &[u8]) -> Result
and all users can just use the safe version (with the help of
slice::from_raw_parts_mut() if necessary).
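For illustration only (not part of the patch or of the review), such a
wrapper might look roughly like the following, with the destination taken
as `&mut [u8]` since it is written to:

```
/// Illustrative only: reads from the user slice into the given buffer.
pub fn read_slice(&mut self, out: &mut [u8]) -> Result {
    // SAFETY: The slice is valid for writing `out.len()` bytes.
    unsafe { self.read_raw(out.as_mut_ptr(), out.len()) }
}
```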
> + if len > self.length {
> + return Err(EFAULT);
> + }
> + let Ok(len_ulong) = c_ulong::try_from(len) else {
> + return Err(EFAULT);
> + };
> + // SAFETY: The caller promises that `out` is valid for writing `len` bytes.
> + let res = unsafe { bindings::copy_from_user(out.cast::<c_void>(), self.ptr, len_ulong) };
> + if res != 0 {
> + return Err(EFAULT);
> + }
> + // Userspace pointers are not directly dereferencable by the kernel, so
> + // we cannot use `add`, which has C-style rules for defined behavior.
> + self.ptr = self.ptr.wrapping_byte_add(len);
> + self.length -= len;
> + Ok(())
> + }
> +
> + /// Reads the entirety of the user slice, appending it to the end of the
> + /// provided buffer.
> + ///
> + /// Fails with `EFAULT` if the read encounters a page fault.
> + pub fn read_all(mut self, buf: &mut Vec<u8>) -> Result {
> + let len = self.length;
> + buf.try_reserve(len)?;
> +
> + // SAFETY: The call to `try_reserve` was successful, so the spare
> + // capacity is at least `len` bytes long.
> + unsafe { self.read_raw(buf.spare_capacity_mut().as_mut_ptr().cast(), len)? };
> +
> + // SAFETY: Since the call to `read_raw` was successful, so the next
> + // `len` bytes of the vector have been initialized.
> + unsafe { buf.set_len(buf.len() + len) };
> + Ok(())
> + }
> +}
> +
> +/// A writer for [`UserSlice`].
> +///
> +/// Used to incrementally write into the user slice.
> +pub struct UserSliceWriter {
> + ptr: *mut c_void,
> + length: usize,
> +}
> +
> +impl UserSliceWriter {
> + /// Returns the amount of space remaining in this buffer.
> + ///
> + /// Note that even writing less than this number of bytes may fail.
> + pub fn len(&self) -> usize {
> + self.length
> + }
> +
> + /// Returns `true` if no more data can be written to this buffer.
> + pub fn is_empty(&self) -> bool {
> + self.length == 0
> + }
> +
> + /// Writes raw data to this user pointer from a raw kernel buffer.
> + ///
> + /// Fails with `EFAULT` if the write encounters a page fault.
> + ///
> + /// # Safety
> + ///
> + /// The `data` pointer must be valid for reading `len` bytes.
> + pub unsafe fn write_raw(&mut self, data: *const u8, len: usize) -> Result {
Same here, just remove the `pub`, and users should use write_slice()
(with the help of slice::from_raw_parts() if necessary).
Regards,
Boqun
> + if len > self.length {
> + return Err(EFAULT);
> + }
> + let Ok(len_ulong) = c_ulong::try_from(len) else {
> + return Err(EFAULT);
> + };
> + let res = unsafe { bindings::copy_to_user(self.ptr, data.cast::<c_void>(), len_ulong) };
> + if res != 0 {
> + return Err(EFAULT);
> + }
> +        // Userspace pointers are not directly dereferenceable by the kernel, so
> + // we cannot use `add`, which has C-style rules for defined behavior.
> + self.ptr = self.ptr.wrapping_byte_add(len);
> + self.length -= len;
> + Ok(())
> + }
> +
> + /// Writes the provided slice to this user pointer.
> + ///
> + /// Fails with `EFAULT` if the write encounters a page fault.
> + pub fn write_slice(&mut self, data: &[u8]) -> Result {
> + let len = data.len();
> + let ptr = data.as_ptr();
> + // SAFETY: The pointer originates from a reference to a slice of length
> + // `len`, so the pointer is valid for reading `len` bytes.
> + unsafe { self.write_raw(ptr, len) }
> + }
> +}
>
> --
> 2.44.0.278.ge034bb2e1d-goog
>
On Mon, Mar 18, 2024 at 7:59 PM Boqun Feng <[email protected]> wrote:
>
> On Mon, Mar 11, 2024 at 10:47:13AM +0000, Alice Ryhl wrote:
> > +
> > + /// Reads raw data from the user slice into a raw kernel buffer.
> > + ///
> > + /// Fails with `EFAULT` if the read encounters a page fault.
> > + ///
> > + /// # Safety
> > + ///
> > + /// The `out` pointer must be valid for writing `len` bytes.
> > + pub unsafe fn read_raw(&mut self, out: *mut u8, len: usize) -> Result {
>
> I don't think we want to promote the pub usage of this unsafe function,
> right? We can provide a safe version:
>
> pub fn read_slice(&mut self, to: &[u8]) -> Result
>
> and all users can just use the safe version (with the help of
> slice::from_raw_parts_mut() if necessary).
Personally, I think having the function be unsafe is plenty discouragement.
Also, this method would need an &mut [u8], which opens the can of
worms related to uninitialized memory. The _raw version of this method
is strictly more powerful.
I don't think I actually use it directly in Binder, so I can make it
private if you think that's important. It needs to be pub(crate),
though, since it is used in `Page`.
Alice
On Mon, Mar 18, 2024 at 08:12:27PM +0100, Alice Ryhl wrote:
> On Mon, Mar 18, 2024 at 7:59 PM Boqun Feng <[email protected]> wrote:
> >
> > On Mon, Mar 11, 2024 at 10:47:13AM +0000, Alice Ryhl wrote:
> > > +
> > > + /// Reads raw data from the user slice into a raw kernel buffer.
> > > + ///
> > > + /// Fails with `EFAULT` if the read encounters a page fault.
> > > + ///
> > > + /// # Safety
> > > + ///
> > > + /// The `out` pointer must be valid for writing `len` bytes.
> > > + pub unsafe fn read_raw(&mut self, out: *mut u8, len: usize) -> Result {
> >
> > I don't think we want to promote the pub usage of this unsafe function,
> > right? We can provide a safe version:
> >
> > pub fn read_slice(&mut self, to: &[u8]) -> Result
> >
> > and all users can just use the safe version (with the help of
> > slice::from_raw_parts_mut() if necessary).
>
> Personally, I think having the function be unsafe is plenty discouragement.
>
> Also, this method would need an &mut [u8], which opens the can of
> worms related to uninitialized memory. The _raw version of this method
make it a `&mut [MaybeUninit<u8>]` then? If that works, then the _raw version
is no more powerful, so there is no need to make it pub.
> is strictly more powerful.
>
> I don't think I actually use it directly in Binder, so I can make it
> private if you think that's important. It needs to be pub(crate),
I might be too picky, but avoiding pub unsafe functions if not necessary
could help us reduce unnecessary unsafe code ;-)
Regards,
Boqun
> though, since it is used in `Page`.
>
> Alice
On Mon, Mar 18, 2024 at 8:33 PM Boqun Feng <[email protected]> wrote:
>
> On Mon, Mar 18, 2024 at 08:12:27PM +0100, Alice Ryhl wrote:
> > On Mon, Mar 18, 2024 at 7:59 PM Boqun Feng <[email protected]> wrote:
> > >
> > > On Mon, Mar 11, 2024 at 10:47:13AM +0000, Alice Ryhl wrote:
> > > > +
> > > > + /// Reads raw data from the user slice into a raw kernel buffer.
> > > > + ///
> > > > + /// Fails with `EFAULT` if the read encounters a page fault.
> > > > + ///
> > > > + /// # Safety
> > > > + ///
> > > > + /// The `out` pointer must be valid for writing `len` bytes.
> > > > + pub unsafe fn read_raw(&mut self, out: *mut u8, len: usize) -> Result {
> > >
> > > I don't think we want to promote the pub usage of this unsafe function,
> > > right? We can provide a safe version:
> > >
> > > pub fn read_slice(&mut self, to: &[u8]) -> Result
> > >
> > > and all users can just use the safe version (with the help of
> > > slice::from_raw_parts_mut() if necessary).
> >
> > Personally, I think having the function be unsafe is plenty discouragement.
> >
> > Also, this method would need an &mut [u8], which opens the can of
> > worms related to uninitialized memory. The _raw version of this method
>
> make it a `&mut [MaybeUninit<u8>]` then? If that works, then the _raw version
> is no more powerful, so there is no need to make it pub.
Nobody actually has a need for that. Also, it doesn't even remove the
need for unsafe code in the caller, since the caller still needs to
assert that the call has initialized the memory.
> > is strictly more powerful.
> >
> > I don't think I actually use it directly in Binder, so I can make it
> > private if you think that's important. It needs to be pub(crate),
>
> I might be too picky, but avoiding pub unsafe functions if not necessary
> could help us reduce unnecessary unsafe code ;-)
>
> Regards,
> Boqun
>
> > though, since it is used in `Page`.
> >
> > Alice
On Mon, Mar 18, 2024 at 09:10:07PM +0100, Alice Ryhl wrote:
> On Mon, Mar 18, 2024 at 8:33 PM Boqun Feng <[email protected]> wrote:
> >
> > On Mon, Mar 18, 2024 at 08:12:27PM +0100, Alice Ryhl wrote:
> > > On Mon, Mar 18, 2024 at 7:59 PM Boqun Feng <[email protected]> wrote:
> > > >
> > > > On Mon, Mar 11, 2024 at 10:47:13AM +0000, Alice Ryhl wrote:
> > > > > +
> > > > > + /// Reads raw data from the user slice into a raw kernel buffer.
> > > > > + ///
> > > > > + /// Fails with `EFAULT` if the read encounters a page fault.
> > > > > + ///
> > > > > + /// # Safety
> > > > > + ///
> > > > > + /// The `out` pointer must be valid for writing `len` bytes.
> > > > > + pub unsafe fn read_raw(&mut self, out: *mut u8, len: usize) -> Result {
> > > >
> > > > I don't think we want to promote the pub usage of this unsafe function,
> > > > right? We can provide a safe version:
> > > >
> > > > pub fn read_slice(&mut self, to: &[u8]) -> Result
> > > >
> > > > and all users can just use the safe version (with the help of
> > > > slice::from_raw_parts_mut() if necessary).
> > >
> > > Personally, I think having the function be unsafe is plenty discouragement.
> > >
> > > Also, this method would need an &mut [u8], which opens the can of
> > > worms related to uninitialized memory. The _raw version of this method
> >
> > make it a `&mut [MaybeUninit<u8>]` then? If that works, then the _raw version
> > is no more powerful, so there is no need to make it pub.
>
> Nobody actually has a need for that. Also, it doesn't even remove the
I want to use read_slice() to replace read_raw(), and avoid even
pub(crate) for read_raw().
> need for unsafe code in the caller, since the caller still needs to
> assert that the call has initialized the memory.
>
If we have the read_slice():
pub fn read_slice(&mut self, to: &mut [MaybeUninit<u8>]) -> Result
then the read_all() function can be implemented as:
pub fn read_all(mut self, buf: &mut Vec<u8>) -> Result {
    let len = self.length;
    buf.try_reserve(len)?;
    // Append `len` bytes to `buf`.
    self.read_slice(&mut buf.spare_capacity_mut()[0..len])?;
    // SAFETY: Since the call to `read_slice` was successful, the next
    // `len` bytes of the vector have been initialized.
    unsafe { buf.set_len(buf.len() + len) };
    Ok(())
}
one unsafe block has been removed. And yes, you're right, unsafe is still
needed here, since the caller still needs to assert that the memory has
been initialized. To me it's still an improvement, though, because the
unsafe block that goes away is the one that reasons about raw pointers
and lengths.
And yes, for the worst case, we still have the same amount of unsafe
code. For example in `Page::copy_from_user_slice`, if read_slice() is
used, we still need to:
let mut s = unsafe { slice::from_raw_parts_mut(dst.cast::<MaybeUninit<u8>>(), len) };
reader.read_slice(&mut s);
i.e. move the unsafe part from `reader` to the construction of a
"writable slice". However, it's still better, since contructing a slice
is quite common in Rust so it's easy to check the safety requirement.
I generally think replacing a pointer+length pair with a slice is
better.
Regards,
Boqun
> > > is strictly more powerful.
> > >
> > > I don't think I actually use it directly in Binder, so I can make it
> > > private if you think that's important. It needs to be pub(crate),
> >
> > I might be too picky, but avoiding pub unsafe functions if not necessary
> > could help us reduce unnecessary unsafe code ;-)
> >
> > Regards,
> > Boqun
> >
> > > though, since it is used in `Page`.
> > >
> > > Alice
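For concreteness, a sketch of the `read_slice` variant being discussed, layered on the existing raw accessor; this is only an illustration of the proposal, not code from the posted patch, and the final name and signature may differ:

use core::mem::MaybeUninit;

impl UserSliceReader {
    /// Reads from the user slice into `out`, which may be uninitialized.
    pub fn read_slice(&mut self, out: &mut [MaybeUninit<u8>]) -> Result {
        let len = out.len();
        let ptr = out.as_mut_ptr().cast::<u8>();
        // SAFETY: `ptr` and `len` come from an exclusive slice reference, so
        // the pointer is valid for writing `len` bytes.
        unsafe { self.read_raw(ptr, len) }
    }
}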
On Mon, Mar 11, 2024 at 10:47:14AM +0000, Alice Ryhl wrote:
> From: Arnd Bergmann <[email protected]>
>
> Rust code needs to be able to access _copy_from_user and _copy_to_user
> so that it can skip the check_copy_size check in cases where the length
> is known at compile-time, mirroring the logic for when C code will skip
> check_copy_size. To do this, we ensure that exported versions of these
> methods are available when CONFIG_RUST is enabled.
>
> Alice has verified that this patch passes the CONFIG_TEST_USER_COPY test
> on x86 using the Android cuttlefish emulator.
>
> Signed-off-by: Arnd Bergmann <[email protected]>
> Tested-by: Alice Ryhl <[email protected]>
> Signed-off-by: Alice Ryhl <[email protected]>
Reviewed-by: Boqun Feng <[email protected]>
Regards,
Boqun
On Mon, Mar 11, 2024 at 10:47:15AM +0000, Alice Ryhl wrote:
> Add safe methods for reading and writing Rust values to and from
> userspace pointers.
>
> The C methods for copying to/from userspace use a function called
> `check_object_size` to verify that the kernel pointer is not dangling.
> However, this check is skipped when the length is a compile-time
> constant, with the assumption that such cases trivially have a correct
> kernel pointer.
>
> In this patch, we apply the same optimization to the typed accessors.
> For both methods, the size of the operation is known at compile time to
> be size_of of the type being read or written. Since the C side doesn't
> provide a variant that skips only this check, we create custom helpers
> for this purpose.
>
> The majority of reads and writes to userspace pointers in the Rust
> Binder driver uses these accessor methods. Benchmarking has found that
> skipping the `check_object_size` check makes a big difference for the
> cases being skipped here. (And that the check doesn't make a difference
> for the cases that use the raw read/write methods.)
>
> This code is based on something that was originally written by Wedson on
> the old rust branch. It was modified by Alice to skip the
> `check_object_size` check, and to update various comments, including the
> notes about kernel pointers in `WritableToBytes`.
>
> Co-developed-by: Wedson Almeida Filho <[email protected]>
> Signed-off-by: Wedson Almeida Filho <[email protected]>
> Signed-off-by: Alice Ryhl <[email protected]>
Reviewed-by: Boqun Feng <[email protected]>
Regards,
Boqun
> ---
> rust/kernel/types.rs | 67 ++++++++++++++++++++++++++++++++++++++++++++
> rust/kernel/uaccess.rs | 75 +++++++++++++++++++++++++++++++++++++++++++++++++-
> 2 files changed, 141 insertions(+), 1 deletion(-)
[...]
On Mon, Mar 11, 2024 at 10:47:16AM +0000, Alice Ryhl wrote:
[...]
> /* `bindgen` gets confused at certain things. */
> const size_t RUST_CONST_HELPER_ARCH_SLAB_MINALIGN = ARCH_SLAB_MINALIGN;
> +const size_t RUST_CONST_HELPER_PAGE_SIZE = PAGE_SIZE;
> +const size_t RUST_CONST_HELPER_PAGE_MASK = PAGE_MASK;
At least for me, bindgen couldn't work out the macro expansion, and I
got:
pub const PAGE_SIZE: usize = 4096;
extern "C" {
pub static RUST_CONST_HELPER_PAGE_MASK: usize;
}
in rust/bindings/bindings_generated.rs, which eventually causes the code
to fail to compile.
I'm using bindgen-cli 0.65.1, libclang (16 or 17), rustc (1.76 or 1.77).
Does anyone else see the same thing?
Regards,
Boqun
> const gfp_t RUST_CONST_HELPER_GFP_KERNEL = GFP_KERNEL;
> const gfp_t RUST_CONST_HELPER___GFP_ZERO = __GFP_ZERO;
> +const gfp_t RUST_CONST_HELPER___GFP_HIGHMEM = ___GFP_HIGHMEM;
[...]
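One possible workaround, sketched here as an assumption on my side rather than something from the posted series: since bindgen did resolve `PAGE_SIZE` to a constant, `PAGE_MASK` could be derived from it in Rust instead of relying on the C macro being evaluated:

// PAGE_SIZE is a power of two, so the mask is the complement of
// (PAGE_SIZE - 1), matching the C definition ~(PAGE_SIZE - 1).
pub const PAGE_SIZE: usize = bindings::PAGE_SIZE;
pub const PAGE_MASK: usize = !(PAGE_SIZE - 1);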
Hi Alice,
I was trying to work on a patch for UserSlice::read_slice(), and I found
a few place that may need some documentation improvements. Please see
below:
On Mon, Mar 11, 2024 at 10:47:13AM +0000, Alice Ryhl wrote:
[...]
> diff --git a/rust/kernel/uaccess.rs b/rust/kernel/uaccess.rs
> new file mode 100644
> index 000000000000..020f3847683f
> --- /dev/null
> +++ b/rust/kernel/uaccess.rs
> @@ -0,0 +1,315 @@
> +// SPDX-License-Identifier: GPL-2.0
> +
> +//! User pointers.
Since the type is renamed as UserSlice, maybe:
//! Slices to user space memory regions.
?
> +//!
> +//! C header: [`include/linux/uaccess.h`](srctree/include/linux/uaccess.h)
> +
> +use crate::{bindings, error::code::*, error::Result};
> +use alloc::vec::Vec;
> +use core::ffi::{c_ulong, c_void};
> +
> +/// A pointer to an area in userspace memory, which can be either read-only or
> +/// read-write.
> +///
> +/// All methods on this struct are safe: attempting to read or write invalid
> +/// pointers will return `EFAULT`. Concurrent access, *including data races
Probably reword this a little bit:
"All methods on this struct are safe: attempting to read or write on bad
addresses (either out of the bounds of the slice or unmapped addresses)
will return `EFAULT`."
, please see below for the reason.
> +/// to/from userspace memory*, is permitted, because fundamentally another
> +/// userspace thread/process could always be modifying memory at the same time
> +/// (in the same way that userspace Rust's [`std::io`] permits data races with
> +/// the contents of files on disk). In the presence of a race, the exact byte
> +/// values read/written are unspecified but the operation is well-defined.
> +/// Kernelspace code should validate its copy of data after completing a read,
> +/// and not expect that multiple reads of the same address will return the same
> +/// value.
> +///
> +/// These APIs are designed to make it difficult to accidentally write TOCTOU
> +/// (time-of-check to time-of-use) bugs. Every time a memory location is read,
> +/// the reader's position is advanced by the read length and the next read will
> +/// start from there. This helps prevent accidentally reading the same location
> +/// twice and causing a TOCTOU bug.
> +///
> +/// Creating a [`UserSliceReader`] and/or [`UserSliceWriter`] consumes the
> +/// `UserSlice`, helping ensure that there aren't multiple readers or writers to
> +/// the same location.
> +///
> +/// If double-fetching a memory location is necessary for some reason, then that
> +/// is done by creating multiple readers to the same memory location, e.g. using
> +/// [`clone_reader`].
> +///
[...]
> + /// Reads raw data from the user slice into a raw kernel buffer.
> + ///
> + /// Fails with `EFAULT` if the read encounters a page fault.
Technically, this is not correct, since normal page faults can happen
during copy_from_user() (for example, when userspace memory has been
swapped out). So returning `EFAULT` really means the read hit a bad address, which
also matches `EFAULT`'s definition:
EFAULT Bad address (POSIX.1-2001).
so maybe reword this and the similar ones below into something like:
/// Fails with `EFAULT` if the read happens on a bad address.
Otherwise, people may think that this function simply aborts whenever
there is a page fault. Thoughts?
Regards,
Boqun
> + ///
> + /// # Safety
> + ///
> + /// The `out` pointer must be valid for writing `len` bytes.
> + pub unsafe fn read_raw(&mut self, out: *mut u8, len: usize) -> Result {
> + if len > self.length {
> + return Err(EFAULT);
> + }
> + let Ok(len_ulong) = c_ulong::try_from(len) else {
> + return Err(EFAULT);
> + };
> + // SAFETY: The caller promises that `out` is valid for writing `len` bytes.
> + let res = unsafe { bindings::copy_from_user(out.cast::<c_void>(), self.ptr, len_ulong) };
> + if res != 0 {
> + return Err(EFAULT);
> + }
> +        // Userspace pointers are not directly dereferenceable by the kernel, so
> + // we cannot use `add`, which has C-style rules for defined behavior.
> + self.ptr = self.ptr.wrapping_byte_add(len);
> + self.length -= len;
> + Ok(())
> + }
> +
[...]
> On 3/11/24 11:47, Alice Ryhl wrote:
> > +/// A pointer to a page that owns the page allocation.
> > +///
> > +/// # Invariants
> > +///
> > +/// The pointer points at a page, and has ownership over the page.
>
> Why not "`page` is valid"?
> Do you mean by ownership of the page that `page` has ownership of the
> allocation, or does that entail any other property/privilege?
I can add "at a valid page".
By ownership I mean that we are allowed to pass it to __free_page and
that until we do, we can access the page. If you want me to reword this,
please tell me what you want it to say.
> > +// SAFETY: It is safe to transfer page allocations between threads.
>
> Why?
>
> > +unsafe impl Send for Page {}
How about:
// SAFETY: Pages have no logic that relies on them staying on a given
// thread, so moving them across threads is safe.
> > +// SAFETY: As long as the safety requirements for `&self` methods on this type
> > +// are followed, there is no problem with calling them in parallel.
>
> Why?
>
> > +unsafe impl Sync for Page {}
How about:
// SAFETY: Pages have no logic that relies on them not being accessed
// concurrently, so accessing them concurrently is safe.
> > + // SAFETY: The specified order is zero and we want one page.
>
> This doesn't explain why it is sound to call the function. I expect that
> it is always sound to call this function with valid arguments.
>
> > + let page = unsafe { bindings::alloc_pages(gfp_flags, 0) };
How about:
// SAFETY: Depending on the value of `gfp_flags`, this call may sleep.
// Other than that, it is always safe to call this method.
> > + // INVARIANT: We checked that the allocation succeeded.
>
> Doesn't mention ownership.
>
> > + Ok(Self { page })
How about:
// INVARIANT: We just successfully allocated a page, so we now have
// ownership of the newly allocated page. We transfer that ownership to
// the new `Page` object.
> > + /// Runs a piece of code with this page mapped to an address.
> > + ///
> > + /// The page is unmapped when this call returns.
> > + ///
> > + /// It is up to the caller to use the provided raw pointer correctly.
>
> This says nothing about what 'correctly' means. What I gathered from the
> implementation is that the supplied pointer is valid for the execution
> of `f` for `PAGE_SIZE` bytes.
> What other things are you allowed to rely upon?
>
> Is it really OK for this function to be called from multiple threads?
> Could that not result in the same page being mapped multiple times? If
> that is fine, what about potential data races when two threads write to
> the pointer given to `f`?
>
> > + pub fn with_page_mapped<T>(&self, f: impl FnOnce(*mut u8) -> T) -> T {
I will say:
/// It is up to the caller to use the provided raw pointer correctly.
/// The pointer is valid for `PAGE_SIZE` bytes and for the duration in
/// which the closure is called. Depending on the gfp flags and kernel
/// configuration, the pointer may only be mapped on the current thread,
/// and in those cases, dereferencing it on other threads is UB. Other
/// than that, the usual rules for dereferencing a raw pointer apply.
/// (E.g., don't cause data races, the memory may be uninitialized, and
/// so on.)
It's okay to map it multiple times from different threads.
> > + // SAFETY: This unmaps the page mapped above.
>
> This doesn't explain why it is sound.
>
> > + //
> > + // Since this API takes the user code as a closure, it can only be used
> > + // in a manner where the pages are unmapped in reverse order. This is as
> > + // required by `kunmap_local`.
> > + //
> > + // In other words, if this call to `kunmap_local` happens when a
> > + // different page should be unmapped first, then there must necessarily
> > + // be a call to `kmap_local_page` other than the call just above in
> > + // `with_page_mapped` that made that possible. In this case, it is the
> > + // unsafe block that wraps that other call that is incorrect.
> > + unsafe { bindings::kunmap_local(mapped_addr) };
Why do you say that? The kunmap_local method requires that the address
being unmapped is currently mapped, and that pages are unmapped in
reverse order. The safety comment explains that the page is currently
mapped and that this method cannot be used to unmap them in anything
other than reverse order.
> > + /// Runs a piece of code with a raw pointer to a slice of this page, with
> > + /// bounds checking.
> > + ///
> > + /// If `f` is called, then it will be called with a pointer that points at
> > + /// `off` bytes into the page, and the pointer will be valid for at least
> > + /// `len` bytes. The pointer is only valid on this task, as this method uses
> > + /// a local mapping.
>
> This information about the pointer only being valid on this task should
> also apply to `with_page_mapped`, right?
>
> > + ///
> > + /// If `off` and `len` refers to a region outside of this page, then this
> > + /// method returns `EINVAL` and does not call `f`.
> > + ///
> > + /// It is up to the caller to use the provided raw pointer correctly.
>
> Again, please specify what 'correctly' means.
I will remove the "The pointer is only valid on this task, as this
method uses a local mapping." sentence and copy the same paragraph as
previously (without the `PAGE_SIZE` remark).
Alice
On Mon, Mar 11, 2024 at 10:47:13AM +0000, Alice Ryhl wrote:
> From: Wedson Almeida Filho <[email protected]>
>
[...]
> +/// # Examples
> +///
> +/// Takes a region of userspace memory from the current process, and modify it
> +/// by adding one to every byte in the region.
> +///
> +/// ```no_run
> +/// use alloc::vec::Vec;
> +/// use core::ffi::c_void;
> +/// use kernel::error::Result;
> +/// use kernel::uaccess::UserSlice;
> +///
> +/// pub fn bytes_add_one(uptr: *mut c_void, len: usize) -> Result<()> {
I hit the following compile error when trying to run kunit test:
ERROR:root:error: unreachable `pub` item
--> rust/doctests_kernel_generated.rs:4167:1
|
4167 | pub fn bytes_add_one(uptr: *mut c_void, len: usize) -> Result<()> {
| ---^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| |
| help: consider restricting its visibility: `pub(crate)`
|
= help: or consider exporting it for use by other crates
= note: requested on the command line with `-D unreachable-pub`
error: unreachable `pub` item
--> rust/doctests_kernel_generated.rs:4243:1
|
4243 | pub fn get_bytes_if_valid(uptr: *mut c_void, len: usize) -> Result<Vec<u8>> {
| ---^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| |
| help: consider restricting its visibility: `pub(crate)`
|
= help: or consider exporting it for use by other crates
error: aborting due to 2 previous errors
, which should be fixed if we make the function in the example not
`pub`.
> +/// let (read, mut write) = UserSlice::new(uptr, len).reader_writer();
> +///
> +/// let mut buf = Vec::new();
> +/// read.read_all(&mut buf)?;
> +///
> +/// for b in &mut buf {
> +/// *b = b.wrapping_add(1);
> +/// }
> +///
> +/// write.write_slice(&buf)?;
> +/// Ok(())
> +/// }
> +/// ```
> +///
> +/// Example illustrating a TOCTOU (time-of-check to time-of-use) bug.
> +///
> +/// ```no_run
> +/// use alloc::vec::Vec;
> +/// use core::ffi::c_void;
> +/// use kernel::error::{code::EINVAL, Result};
> +/// use kernel::uaccess::UserSlice;
> +///
> +/// /// Returns whether the data in this region is valid.
> +/// fn is_valid(uptr: *mut c_void, len: usize) -> Result<bool> {
> +/// let read = UserSlice::new(uptr, len).reader();
> +///
> +/// let mut buf = Vec::new();
> +/// read.read_all(&mut buf)?;
> +///
> +/// todo!()
> +/// }
> +///
> +/// /// Returns the bytes behind this user pointer if they are valid.
> +/// pub fn get_bytes_if_valid(uptr: *mut c_void, len: usize) -> Result<Vec<u8>> {
Ditto here.
> +/// if !is_valid(uptr, len)? {
> +/// return Err(EINVAL);
> +/// }
> +///
> +/// let read = UserSlice::new(uptr, len).reader();
> +///
> +/// let mut buf = Vec::new();
> +/// read.read_all(&mut buf)?;
> +///
> +/// // THIS IS A BUG! The bytes could have changed since we checked them.
> +/// //
> +/// // To avoid this kind of bug, don't call `UserSlice::new` multiple
> +/// // times with the same address.
> +/// Ok(buf)
> +/// }
> +/// ```
> +///
> +/// [`std::io`]: https://doc.rust-lang.org/std/io/index.html
> +/// [`clone_reader`]: UserSliceReader::clone_reader
> +pub struct UserSlice {
> + ptr: *mut c_void,
> + length: usize,
> +}
> +
Regards,
Boqun
[...]
On 3/20/24 09:46, Alice Ryhl wrote:
>> On 3/11/24 11:47, Alice Ryhl wrote:
>>> +/// A pointer to a page that owns the page allocation.
>>> +///
>>> +/// # Invariants
>>> +///
>>> +/// The pointer points at a page, and has ownership over the page.
>>
>> Why not "`page` is valid"?
>> Do you mean by ownership of the page that `page` has ownership of the
>> allocation, or does that entail any other property/privilege?
>
> I can add "at a valid page".
I don't think that helps, what you need as an invariant is that the
pointer is valid.
> By ownership I mean that we are allowed to pass it to __free_page and
> that until we do, we can access the page. If you want me to reword this,
> please tell me what you want it to say.
I see, no need to change it.
>>> +// SAFETY: It is safe to transfer page allocations between threads.
>>
>> Why?
>>
>>> +unsafe impl Send for Page {}
>
> How about:
>
> // SAFETY: Pages have no logic that relies on them staying on a given
> // thread, so moving them across threads is safe.
Sounds good.
>>> +// SAFETY: As long as the safety requirements for `&self` methods on this type
>>> +// are followed, there is no problem with calling them in parallel.
>>
>> Why?
>>
>>> +unsafe impl Sync for Page {}
>
> How about:
>
> // SAFETY: Pages have no logic that relies on them not being accessed
> // concurrently, so accessing them concurrently is safe.
Sounds good.
>>> + // SAFETY: The specified order is zero and we want one page.
>>
>> This doesn't explain why it is sound to call the function. I expect that
>> it is always sound to call this function with valid arguments.
>>
>>> + let page = unsafe { bindings::alloc_pages(gfp_flags, 0) };
>
> How about:
>
> // SAFETY: Depending on the value of `gfp_flags`, this call may sleep.
> // Other than that, it is always safe to call this method.
Sounds good.
>>> + // INVARIANT: We checked that the allocation succeeded.
>>
>> Doesn't mention ownership.
>>
>>> + Ok(Self { page })
>
> How about:
>
> // INVARIANT: We just successfully allocated a page, so we now have
> // ownership of the newly allocated page. We transfer that ownership to
> // the new `Page` object.
Sounds good.
>>> + /// Runs a piece of code with this page mapped to an address.
>>> + ///
>>> + /// The page is unmapped when this call returns.
>>> + ///
>>> + /// It is up to the caller to use the provided raw pointer correctly.
>>
>> This says nothing about what 'correctly' means. What I gathered from the
>> implementation is that the supplied pointer is valid for the execution
>> of `f` for `PAGE_SIZE` bytes.
>> What other things are you allowed to rely upon?
>>
>> Is it really OK for this function to be called from multiple threads?
>> Could that not result in the same page being mapped multiple times? If
>> that is fine, what about potential data races when two threads write to
>> the pointer given to `f`?
>>
>>> + pub fn with_page_mapped<T>(&self, f: impl FnOnce(*mut u8) -> T) -> T {
>
> I will say:
>
> /// It is up to the caller to use the provided raw pointer correctly.
> /// The pointer is valid for `PAGE_SIZE` bytes and for the duration in
> /// which the closure is called. Depending on the gfp flags and kernel
> /// configuration, the pointer may only be mapped on the current thread,
> /// and in those cases, dereferencing it on other threads is UB. Other
> /// than that, the usual rules for dereferencing a raw pointer apply.
> /// (E.g., don't cause data races, the memory may be uninitialized, and
> /// so on.)
I would simplify and drop "depending on the gfp flags and kernel..." and
just say that the pointer is only valid on the current thread.
Also would it make sense to make the pointer type *mut [u8; PAGE_SIZE]?
> It's okay to map it multiple times from different threads.
Do you still need to take care of data races?
So would it be fine to execute this code on two threads in parallel?
static PAGE: Page = ...; // assume we have a page accessible by both threads
PAGE.with_page_mapped(|ptr| {
    loop {
        unsafe { ptr.write(0) };
        pr_info!("{}", unsafe { ptr.read() });
    }
});
If this is not allowed, I don't really like the API. As a raw version it
would be fine, but I think we should have a safer version (eg by taking
`&mut self`).
>>> + // SAFETY: This unmaps the page mapped above.
>>
>> This doesn't explain why it is sound.
>>
>>> + //
>>> + // Since this API takes the user code as a closure, it can only be used
>>> + // in a manner where the pages are unmapped in reverse order. This is as
>>> + // required by `kunmap_local`.
>>> + //
>>> + // In other words, if this call to `kunmap_local` happens when a
>>> + // different page should be unmapped first, then there must necessarily
>>> + // be a call to `kmap_local_page` other than the call just above in
>>> + // `with_page_mapped` that made that possible. In this case, it is the
>>> + // unsafe block that wraps that other call that is incorrect.
>>> + unsafe { bindings::kunmap_local(mapped_addr) };
>
> Why do you say that? The kunmap_local method requires that the address
> being unmapped is currently mapped, and that pages are unmapped in
> reverse order. The safety comment explains that the page is currently
> mapped and that this method cannot be used to unmap them in anything
> other than reverse order.
Sorry it seems I thought that the safety comment ended after the first
sentence. Can you (re)move that first sentence, since it is not part of
a justification?
The rest is fine.
>>> + /// Runs a piece of code with a raw pointer to a slice of this page, with
>>> + /// bounds checking.
>>> + ///
>>> + /// If `f` is called, then it will be called with a pointer that points at
>>> + /// `off` bytes into the page, and the pointer will be valid for at least
>>> + /// `len` bytes. The pointer is only valid on this task, as this method uses
>>> + /// a local mapping.
>>
>> This information about the pointer only being valid on this task should
>> also apply to `with_page_mapped`, right?
>>
>>> + ///
>>> + /// If `off` and `len` refers to a region outside of this page, then this
>>> + /// method returns `EINVAL` and does not call `f`.
>>> + ///
>>> + /// It is up to the caller to use the provided raw pointer correctly.
>>
>> Again, please specify what 'correctly' means.
>
> I will remove the "The pointer is only valid on this task, as this
> method uses a local mapping." sentence and copy the same paragraph as
> previously (without the `PAGE_SIZE` remark).
Sounds good.
--
Cheers,
Benno
On Thu, Mar 21, 2024 at 2:16 PM Benno Lossin <[email protected]> wrote:
>
> On 3/20/24 09:46, Alice Ryhl wrote:
> >> On 3/11/24 11:47, Alice Ryhl wrote:
> >>> +/// A pointer to a page that owns the page allocation.
> >>> +///
> >>> +/// # Invariants
> >>> +///
> >>> +/// The pointer points at a page, and has ownership over the page.
> >>
> >> Why not "`page` is valid"?
> >> Do you mean by ownership of the page that `page` has ownership of the
> >> allocation, or does that entail any other property/privilege?
> >
> > I can add "at a valid page".
>
> I don't think that helps, what you need as an invariant is that the
> pointer is valid.
To me "points at a page" implies that the pointer is valid. I mean, if
it was dangling, it would not point at a page?
But I can reword to something else if you have a preferred phrasing.
> >>> + /// Runs a piece of code with this page mapped to an address.
> >>> + ///
> >>> + /// The page is unmapped when this call returns.
> >>> + ///
> >>> + /// It is up to the caller to use the provided raw pointer correctly.
> >>
> >> This says nothing about what 'correctly' means. What I gathered from the
> >> implementation is that the supplied pointer is valid for the execution
> >> of `f` for `PAGE_SIZE` bytes.
> >> What other things are you allowed to rely upon?
> >>
> >> Is it really OK for this function to be called from multiple threads?
> >> Could that not result in the same page being mapped multiple times? If
> >> that is fine, what about potential data races when two threads write to
> >> the pointer given to `f`?
> >>
> >>> + pub fn with_page_mapped<T>(&self, f: impl FnOnce(*mut u8) -> T) -> T {
> >
> > I will say:
> >
> > /// It is up to the caller to use the provided raw pointer correctly.
> > /// The pointer is valid for `PAGE_SIZE` bytes and for the duration in
> > /// which the closure is called. Depending on the gfp flags and kernel
> > /// configuration, the pointer may only be mapped on the current thread,
> > /// and in those cases, dereferencing it on other threads is UB. Other
> > /// than that, the usual rules for dereferencing a raw pointer apply.
> > /// (E.g., don't cause data races, the memory may be uninitialized, and
> > /// so on.)
>
> I would simplify and drop "depending on the gfp flags and kernel..." and
> just say that the pointer is only valid on the current thread.
Sure, that works for me.
> Also would it make sense to make the pointer type *mut [u8; PAGE_SIZE]?
I think it's a trade-off. That makes the code more error-prone, since
`pointer::add` now doesn't move by a number of bytes, but a number of
pages.
> > It's okay to map it multiple times from different threads.
>
> Do you still need to take care of data races?
> So would it be fine to execute this code on two threads in parallel?
>
> static PAGE: Page = ...; // assume we have a page accessible by both threads
>
> PAGE.with_page_mapped(|ptr| {
> loop {
> unsafe { ptr.write(0) };
> pr_info!("{}", unsafe { ptr.read() });
> }
> });
Like I said, the usual pointer rules apply. Two threads can access it
in parallel as long as one of the following is satisfied:
* Both accesses are reads.
* Both accesses are atomic.
* They access disjoint byte ranges.
Other than the fact that it uses a thread-local mapping on machines
that can't address all of their memory at the same time, it's
completely normal memory. It's literally just a PAGE_SIZE-aligned
allocation of PAGE_SIZE bytes.
> If this is not allowed, I don't really like the API. As a raw version it
> would be fine, but I think we should have a safer version (eg by taking
> `&mut self`).
I don't understand what you mean. It is the *most* raw API that `Page`
has. I can make them private if you want me to. The API cannot take
`&mut self` because I need to be able to unsafely perform concurrent
writes to disjoint byte ranges.
Alice
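To make the disjoint-byte-ranges case concrete, a sketch of the kind of `&self`-based access that is meant to be sound; `write_at` is a made-up helper, not part of the patch:

/// Writes `data` into this page at offset `off`. Two threads may call this
/// concurrently on the same `Page` as long as their `off..off + data.len()`
/// ranges do not overlap.
fn write_at(page: &Page, off: usize, data: &[u8]) -> Result {
    page.with_pointer_into_page(off, data.len(), |dst| {
        // SAFETY: `dst` is valid for writing `data.len()` bytes while the
        // closure runs, the source comes from a slice reference, and callers
        // keep concurrent writes on disjoint byte ranges.
        unsafe { core::ptr::copy_nonoverlapping(data.as_ptr(), dst, data.len()) };
        Ok(())
    })
}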
On Thu, Mar 21, 2024 at 2:56 PM Benno Lossin <[email protected]> wrote:
>
> On 3/21/24 14:42, Alice Ryhl wrote:
> > On Thu, Mar 21, 2024 at 2:16 PM Benno Lossin <[email protected]> wrote:
> >>
> >> On 3/20/24 09:46, Alice Ryhl wrote:
> >>>> On 3/11/24 11:47, Alice Ryhl wrote:
> >>>>> +/// A pointer to a page that owns the page allocation.
> >>>>> +///
> >>>>> +/// # Invariants
> >>>>> +///
> >>>>> +/// The pointer points at a page, and has ownership over the page.
> >>>>
> >>>> Why not "`page` is valid"?
> >>>> Do you mean by ownership of the page that `page` has ownership of the
> >>>> allocation, or does that entail any other property/privilege?
> >>>
> >>> I can add "at a valid page".
> >>
> >> I don't think that helps, what you need as an invariant is that the
> >> pointer is valid.
> >
> > To me "points at a page" implies that the pointer is valid. I mean, if
> > it was dangling, it would not point at a page?
> >
> > But I can reword to something else if you have a preferred phrasing.
>
> I would just say "`page` is valid" or "`self.page` is valid".
>
> >>>>> + /// Runs a piece of code with this page mapped to an address.
> >>>>> + ///
> >>>>> + /// The page is unmapped when this call returns.
> >>>>> + ///
> >>>>> + /// It is up to the caller to use the provided raw pointer correctly.
> >>>>
> >>>> This says nothing about what 'correctly' means. What I gathered from the
> >>>> implementation is that the supplied pointer is valid for the execution
> >>>> of `f` for `PAGE_SIZE` bytes.
> >>>> What other things are you allowed to rely upon?
> >>>>
> >>>> Is it really OK for this function to be called from multiple threads?
> >>>> Could that not result in the same page being mapped multiple times? If
> >>>> that is fine, what about potential data races when two threads write to
> >>>> the pointer given to `f`?
> >>>>
> >>>>> + pub fn with_page_mapped<T>(&self, f: impl FnOnce(*mut u8) -> T) -> T {
> >>>
> >>> I will say:
> >>>
> >>> /// It is up to the caller to use the provided raw pointer correctly.
> >>> /// The pointer is valid for `PAGE_SIZE` bytes and for the duration in
> >>> /// which the closure is called. Depending on the gfp flags and kernel
> >>> /// configuration, the pointer may only be mapped on the current thread,
> >>> /// and in those cases, dereferencing it on other threads is UB. Other
> >>> /// than that, the usual rules for dereferencing a raw pointer apply.
> >>> /// (E.g., don't cause data races, the memory may be uninitialized, and
> >>> /// so on.)
> >>
> >> I would simplify and drop "depending on the gfp flags and kernel..." and
> >> just say that the pointer is only valid on the current thread.
> >
> > Sure, that works for me.
> >
> >> Also would it make sense to make the pointer type *mut [u8; PAGE_SIZE]?
> >
> > I think it's a trade-off. That makes the code more error-prone, since
> > `pointer::add` now doesn't move by a number of bytes, but a number of
> > pages.
>
> Yeah. As long as you document that the pointer is valid for r/w with
> offsets in `0..PAGE_SIZE` bytes, leaving the type as is, is fine by me.
>
>
> >>> It's okay to map it multiple times from different threads.
> >>
> >> Do you still need to take care of data races?
> >> So would it be fine to execute this code on two threads in parallel?
> >>
> >> static PAGE: Page = ...; // assume we have a page accessible by both threads
> >>
> >> PAGE.with_page_mapped(|ptr| {
> >> loop {
> >> unsafe { ptr.write(0) };
> >> pr_info!("{}", unsafe { ptr.read() });
> >> }
> >> });
> >
> > Like I said, the usual pointer rules apply. Two threads can access it
> > in parallel as long as one of the following are satisfied:
> >
> > * Both accesses are reads.
> > * Both accesses are atomic.
> > * They access disjoint byte ranges.
> >
> > Other than the fact that it uses a thread-local mapping on machines
> > that can't address all of their memory at the same time, it's
> > completely normal memory. It's literally just a PAGE_SIZE-aligned
> > allocation of PAGE_SIZE bytes.
>
> Thanks for the info, what do you think of this?:
>
> /// It is up to the caller to use the provided raw pointer correctly. The pointer is valid for reads
> /// and writes for `PAGE_SIZE` bytes and for the duration in which the closure is called. The
> /// pointer must only be used on the current thread. The caller must also ensure that no data races
> /// occur: when mapping the same page on two threads accesses to memory with the same offset must be
> /// synchronized.
I would much rather phrase it in terms of "the usual pointer" rules. I
mean, the memory could also be uninitialized if you don't pass
__GFP_ZERO when you create it, so you also have to make sure to follow
the rules about uninitialized memory. I don't want to be in the
business of listing all requirements for accessing memory here.
> >> If this is not allowed, I don't really like the API. As a raw version it
> >> would be fine, but I think we should have a safer version (eg by taking
> >> `&mut self`).
> >
> > I don't understand what you mean. It is the *most* raw API that `Page`
> > has. I can make them private if you want me to. The API cannot take
> > `&mut self` because I need to be able to unsafely perform concurrent
> > writes to disjoint byte ranges.
>
> If you don't need these functions to be public, I think we should
> definitely make them private.
> Also we could add a `raw` suffix to the functions to make it clear that
> it is a primitive API. If you think that it is highly unlikely that we
> get a safer version, then I don't think there is value in adding the
> suffix.
The old code on the Rust branch didn't have these functions, but
that's because the old `read_raw` and `write_raw` methods did all of
these things directly in their implementation:
* Map the memory so we can get a pointer.
* Get a pointer to a subslice (with bounds checks!)
* Do the actual read/write.
I thought that doing this many things in a single function was
convoluted, so I decided to refactor the code by extracting the "get a
pointer to the page" logic into `with_page_mapped` and the "point to
subslice with bounds check" logic into `with_pointer_into_page`. That
way, each function has only one responsibility, instead of mixing
three responsibilities into one.
So even if we get a safer version, I would not want to get rid of this
method. I don't want to inline its implementation into more
complicated functions. The safer method would call the raw method, and
then do whatever additional logic it wants to do on top of that.
Alice
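A sketch of what that layering looks like for a kernel-buffer write; the signature below is only indicative of the shape, not the exact method in the patch:

impl Page {
    /// Writes `len` bytes from `src` into this page at offset `off`.
    ///
    /// # Safety
    ///
    /// `src` must be valid for reading `len` bytes.
    pub unsafe fn write_raw(&self, src: *const u8, off: usize, len: usize) -> Result {
        self.with_pointer_into_page(off, len, move |dst| {
            // SAFETY: The closure receives a pointer that is valid for writing
            // `len` bytes within this page, and the caller promises that `src`
            // is valid for reading `len` bytes.
            unsafe { core::ptr::copy_nonoverlapping(src, dst, len) };
            Ok(())
        })
    }
}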
On 3/21/24 15:11, Alice Ryhl wrote:
> On Thu, Mar 21, 2024 at 2:56 PM Benno Lossin <benno.lossin@protonme> wrote:
>>
>> On 3/21/24 14:42, Alice Ryhl wrote:
>>> On Thu, Mar 21, 2024 at 2:16 PM Benno Lossin <[email protected]> wrote:
>>>>
>>>> On 3/20/24 09:46, Alice Ryhl wrote:
>>>>>> On 3/11/24 11:47, Alice Ryhl wrote:
>>>>>>> +/// A pointer to a page that owns the page allocation.
>>>>>>> +///
>>>>>>> +/// # Invariants
>>>>>>> +///
>>>>>>> +/// The pointer points at a page, and has ownership over the page.
>>>>>>
>>>>>> Why not "`page` is valid"?
>>>>>> Do you mean by ownership of the page that `page` has ownership of the
>>>>>> allocation, or does that entail any other property/privilege?
>>>>>
>>>>> I can add "at a valid page".
>>>>
>>>> I don't think that helps, what you need as an invariant is that the
>>>> pointer is valid.
>>>
>>> To me "points at a page" implies that the pointer is valid. I mean, if
>>> it was dangling, it would not point at a page?
>>>
>>> But I can reword to something else if you have a preferred phrasing.
>>
>> I would just say "`page` is valid" or "`self.page` is valid".
>>
>>>>>>> + /// Runs a piece of code with this page mapped to an address.
>>>>>>> + ///
>>>>>>> + /// The page is unmapped when this call returns.
>>>>>>> + ///
>>>>>>> + /// It is up to the caller to use the provided raw pointer correctly.
>>>>>>
>>>>>> This says nothing about what 'correctly' means. What I gathered from the
>>>>>> implementation is that the supplied pointer is valid for the execution
>>>>>> of `f` for `PAGE_SIZE` bytes.
>>>>>> What other things are you allowed to rely upon?
>>>>>>
>>>>>> Is it really OK for this function to be called from multiple threads?
>>>>>> Could that not result in the same page being mapped multiple times? If
>>>>>> that is fine, what about potential data races when two threads write to
>>>>>> the pointer given to `f`?
>>>>>>
>>>>>>> + pub fn with_page_mapped<T>(&self, f: impl FnOnce(*mut u8) -> T) -> T {
>>>>>
>>>>> I will say:
>>>>>
>>>>> /// It is up to the caller to use the provided raw pointer correctly.
>>>>> /// The pointer is valid for `PAGE_SIZE` bytes and for the duration in
>>>>> /// which the closure is called. Depending on the gfp flags and kernel
>>>>> /// configuration, the pointer may only be mapped on the current thread,
>>>>> /// and in those cases, dereferencing it on other threads is UB. Other
>>>>> /// than that, the usual rules for dereferencing a raw pointer apply.
>>>>> /// (E.g., don't cause data races, the memory may be uninitialized, and
>>>>> /// so on.)
>>>>
>>>> I would simplify and drop "depending on the gfp flags and kernel..." and
>>>> just say that the pointer is only valid on the current thread.
>>>
>>> Sure, that works for me.
>>>
>>>> Also would it make sense to make the pointer type *mut [u8; PAGE_SIZE]?
>>>
>>> I think it's a trade-off. That makes the code more error-prone, since
>>> `pointer::add` now doesn't move by a number of bytes, but a number of
>>> pages.
>>
>> Yeah. As long as you document that the pointer is valid for r/w with
>> offsets in `0..PAGE_SIZE` bytes, leaving the type as is, is fine by me.
>>
>>
>>>>> It's okay to map it multiple times from different threads.
>>>>
>>>> Do you still need to take care of data races?
>>>> So would it be fine to execute this code on two threads in parallel?
>>>>
>>>> static PAGE: Page = ...; // assume we have a page accessible by both threads
>>>>
>>>> PAGE.with_page_mapped(|ptr| {
>>>> loop {
>>>> unsafe { ptr.write(0) };
>>>> pr_info!("{}", unsafe { ptr.read() });
>>>> }
>>>> });
>>>
>>> Like I said, the usual pointer rules apply. Two threads can access it
>>> in parallel as long as one of the following are satisfied:
>>>
>>> * Both accesses are reads.
>>> * Both accesses are atomic.
>>> * They access disjoint byte ranges.
>>>
>>> Other than the fact that it uses a thread-local mapping on machines
>>> that can't address all of their memory at the same time, it's
>>> completely normal memory. It's literally just a PAGE_SIZE-aligned
>>> allocation of PAGE_SIZE bytes.
>>
>> Thanks for the info, what do you think of this?:
>>
>> /// It is up to the caller to use the provided raw pointer correctly. The pointer is valid for reads
>> /// and writes for `PAGE_SIZE` bytes and for the duration in which the closure is called. The
>> /// pointer must only be used on the current thread. The caller must also ensure that no data races
>> /// occur: when mapping the same page on two threads accesses to memory with the same offset must be
>> /// synchronized.
>
> I would much rather phrase it in terms of "the usual pointer" rules. I
> mean, the memory could also be uninitialized if you don't pass
> __GFP_ZERO when you create it, so you also have to make sure to follow
> the rules about uninitialized memory. I don't want to be in the
> business of listing all requirements for accessing memory here.
Sure, you can add that part again. I just want to highlight that mapping
the same page multiple times means that the caller has to synchronize
accesses through those pointers even if the pointers do not have the same
address value. That is not normally something you need to take care of,
i.e. normally if `ptr1.addr() != ptr2.addr()` then you can access them
without synchronization.
>>>> If this is not allowed, I don't really like the API. As a raw version it
>>>> would be fine, but I think we should have a safer version (eg by taking
>>>> `&mut self`).
>>>
>>> I don't understand what you mean. It is the *most* raw API that `Page`
>>> has. I can make them private if you want me to. The API cannot take
>>> `&mut self` because I need to be able to unsafely perform concurrent
>>> writes to disjoint byte ranges.
>>
>> If you don't need these functions to be public, I think we should
>> definitely make them private.
>> Also we could add a `raw` suffix to the functions to make it clear that
>> it is a primitive API. If you think that it is highly unlikely that we
>> get a safer version, then I don't think there is value in adding the
>> suffix.
>
> The old code on the Rust branch didn't have these functions, but
> that's because the old `read_raw` and `write_raw` methods did all of
> these things directly in their implementation:
>
> * Map the memory so we can get a pointer.
> * Get a pointer to a subslice (with bounds checks!)
> * Do the actual read/write.
>
> I thought that doing this many things in a single function was
> convoluted, so I decided to refactor the code by extracting the "get a
> pointer to the page" logic into `with_page_mapped` and the "point to
> subslice with bounds check" logic into `with_pointer_into_page`. That
> way, each function has only one responsibility, instead of mixing
> three responsibilities into one.
I think that design decision is good.
> So even if we get a safer version, I would not want to get rid of this
> method. I don't want to inline its implementation into more
> complicated functions. The safer method would call the raw method, and
> then do whatever additional logic it wants to do on top of that.
I was not suggesting removing this method, rather renaming it to reflect
that it is a primitive API that should be avoided if possible.
--
Cheers,
Benno
On Thu, Mar 21, 2024 at 3:11 PM Alice Ryhl <[email protected]> wrote:
>
> On Thu, Mar 21, 2024 at 2:56 PM Benno Lossin <benno.lossin@protonme> wrote:
> >
> > On 3/21/24 14:42, Alice Ryhl wrote:
> > > On Thu, Mar 21, 2024 at 2:16 PM Benno Lossin <[email protected]> wrote:
> > >>
> > >> On 3/20/24 09:46, Alice Ryhl wrote:
> > >>>> On 3/11/24 11:47, Alice Ryhl wrote:
> > >>>>> +/// A pointer to a page that owns the page allocation.
> > >>>>> +///
> > >>>>> +/// # Invariants
> > >>>>> +///
> > >>>>> +/// The pointer points at a page, and has ownership over the page.
> > >>>>
> > >>>> Why not "`page` is valid"?
> > >>>> Do you mean by ownership of the page that `page` has ownership of the
> > >>>> allocation, or does that entail any other property/privilege?
> > >>>
> > >>> I can add "at a valid page".
> > >>
> > >> I don't think that helps, what you need as an invariant is that the
> > >> pointer is valid.
> > >
> > > To me "points at a page" implies that the pointer is valid. I mean, if
> > > it was dangling, it would not point at a page?
> > >
> > > But I can reword to something else if you have a preferred phrasing.
> >
> > I would just say "`page` is valid" or "`self.page` is valid".
> >
> > >>>>> + /// Runs a piece of code with this page mapped to an address.
> > >>>>> + ///
> > >>>>> + /// The page is unmapped when this call returns.
> > >>>>> + ///
> > >>>>> + /// It is up to the caller to use the provided raw pointer correctly.
> > >>>>
> > >>>> This says nothing about what 'correctly' means. What I gathered from the
> > >>>> implementation is that the supplied pointer is valid for the execution
> > >>>> of `f` for `PAGE_SIZE` bytes.
> > >>>> What other things are you allowed to rely upon?
> > >>>>
> > >>>> Is it really OK for this function to be called from multiple threads?
> > >>>> Could that not result in the same page being mapped multiple times? If
> > >>>> that is fine, what about potential data races when two threads write to
> > >>>> the pointer given to `f`?
> > >>>>
> > >>>>> + pub fn with_page_mapped<T>(&self, f: impl FnOnce(*mut u8) -> T) -> T {
> > >>>
> > >>> I will say:
> > >>>
> > >>> /// It is up to the caller to use the provided raw pointer correctly.
> > >>> /// The pointer is valid for `PAGE_SIZE` bytes and for the duration in
> > >>> /// which the closure is called. Depending on the gfp flags and kernel
> > >>> /// configuration, the pointer may only be mapped on the current thread,
> > >>> /// and in those cases, dereferencing it on other threads is UB. Other
> > >>> /// than that, the usual rules for dereferencing a raw pointer apply.
> > >>> /// (E.g., don't cause data races, the memory may be uninitialized, and
> > >>> /// so on.)
> > >>
> > >> I would simplify and drop "depending on the gfp flags and kernel..." and
> > >> just say that the pointer is only valid on the current thread.
> > >
> > > Sure, that works for me.
> > >
> > >> Also would it make sense to make the pointer type *mut [u8; PAGE_SIZE]?
> > >
> > > I think it's a trade-off. That makes the code more error-prone, since
> > > `pointer::add` now doesn't move by a number of bytes, but a number of
> > > pages.
> >
> > Yeah. As long as you document that the pointer is valid for r/w with
> > offsets in `0..PAGE_SIZE` bytes, leaving the type as is, is fine by me.
> >
> >
> > >>> It's okay to map it multiple times from different threads.
> > >>
> > >> Do you still need to take care of data races?
> > >> So would it be fine to execute this code on two threads in parallel?
> > >>
> > >> static PAGE: Page = ...; // assume we have a page accessible by both threads
> > >>
> > >> PAGE.with_page_mapped(|ptr| {
> > >> loop {
> > >> unsafe { ptr.write(0) };
> > >> pr_info!("{}", unsafe { ptr.read() });
> > >> }
> > >> });
> > >
> > > Like I said, the usual pointer rules apply. Two threads can access it
> > > in parallel as long as one of the following are satisfied:
> > >
> > > * Both accesses are reads.
> > > * Both accesses are atomic.
> > > * They access disjoint byte ranges.
> > >
> > > Other than the fact that it uses a thread-local mapping on machines
> > > that can't address all of their memory at the same time, it's
> > > completely normal memory. It's literally just a PAGE_SIZE-aligned
> > > allocation of PAGE_SIZE bytes.
> >
> > Thanks for the info, what do you think of this?:
> >
> > /// It is up to the caller to use the provided raw pointer correctly. The pointer is valid for reads
> > /// and writes for `PAGE_SIZE` bytes and for the duration in which the closure is called. The
> > /// pointer must only be used on the current thread. The caller must also ensure that no data races
> > /// occur: when mapping the same page on two threads accesses to memory with the same offset must be
> > /// synchronized.
>
> I would much rather phrase it in terms of "the usual pointer" rules. I
> mean, the memory could also be uninitialized if you don't pass
> __GFP_ZERO when you create it, so you also have to make sure to follow
> the rules about uninitialized memory. I don't want to be in the
> business of listing all requirements for accessing memory here.
>
> > >> If this is not allowed, I don't really like the API. As a raw version it
> > >> would be fine, but I think we should have a safer version (eg by taking
> > >> `&mut self`).
> > >
> > > I don't understand what you mean. It is the *most* raw API that `Page`
> > > has. I can make them private if you want me to. The API cannot take
> > > `&mut self` because I need to be able to unsafely perform concurrent
> > > writes to disjoint byte ranges.
> >
> > If you don't need these functions to be public, I think we should
> > definitely make them private.
> > Also we could add a `raw` suffix to the functions to make it clear that
> > it is a primitive API. If you think that it is highly unlikely that we
> > get a safer version, then I don't think there is value in adding the
> > suffix.
>
> The old code on the Rust branch didn't have these functions, but
> that's because the old `read_raw` and `write_raw` methods did all of
> these things directly in their implementation:
>
> * Map the memory so we can get a pointer.
> * Get a pointer to a subslice (with bounds checks!)
> * Do the actual read/write.
>
> I thought that doing this many things in a single function was
> convoluted, so I decided to refactor the code by extracting the "get a
> pointer to the page" logic into `with_page_mapped` and the "point to
> subslice with bounds check" logic into `with_pointer_into_page`. That
> way, each function has only one responsibility, instead of mixing
> three responsibilities into one.
>
> So even if we get a safer version, I would not want to get rid of this
> method. I don't want to inline its implementation into more
> complicated functions. The safer method would call the raw method, and
> then do whatever additional logic it wants to do on top of that.
Adding to this: to me, we *do* already have safer versions of this
method: the `read_raw`, `write_raw`, `fill_zero`, and
`copy_from_user_slice` methods.
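For concreteness, a rough sketch of how such a safe helper can sit on top
of the raw method. The signature details here are only for illustration and
assume that `with_pointer_into_page(off, len, f)` maps the page,
bounds-checks `off..off + len`, and hands a pointer to that range to `f`:

impl Page {
    /// Illustrative safe helper: zero `len` bytes of the page starting at
    /// `offset`. Mapping and bounds checking are delegated to the raw
    /// method, so this function only performs the actual write.
    pub fn fill_zero(&self, offset: usize, len: usize) -> Result {
        self.with_pointer_into_page(offset, len, |dst| {
            // SAFETY: the closure contract guarantees that `dst` is valid
            // for writes of `len` bytes while the closure runs.
            unsafe { core::ptr::write_bytes(dst, 0, len) };
            Ok(())
        })
    }
}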
Alice
On 3/21/24 14:42, Alice Ryhl wrote:
> On Thu, Mar 21, 2024 at 2:16 PM Benno Lossin <[email protected]> wrote:
>>
>> On 3/20/24 09:46, Alice Ryhl wrote:
>>>> On 3/11/24 11:47, Alice Ryhl wrote:
>>>>> +/// A pointer to a page that owns the page allocation.
>>>>> +///
>>>>> +/// # Invariants
>>>>> +///
>>>>> +/// The pointer points at a page, and has ownership over the page.
>>>>
>>>> Why not "`page` is valid"?
>>>> Do you mean by ownership of the page that `page` has ownership of the
>>>> allocation, or does that entail any other property/privilege?
>>>
>>> I can add "at a valid page".
>>
>> I don't think that helps, what you need as an invariant is that the
>> pointer is valid.
>
> To me "points at a page" implies that the pointer is valid. I mean, if
> it was dangling, it would not point at a page?
>
> But I can reword to something else if you have a preferred phrasing.
I would just say "`page` is valid" or "`self.page` is valid".
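For example, something along these lines (just a wording suggestion that
keeps the ownership part):

/// # Invariants
///
/// `self.page` points at a valid `struct page`, and this `Page` has
/// ownership of that page allocation.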
>>>>> + /// Runs a piece of code with this page mapped to an address.
>>>>> + ///
>>>>> + /// The page is unmapped when this call returns.
>>>>> + ///
>>>>> + /// It is up to the caller to use the provided raw pointer correctly.
>>>>
>>>> This says nothing about what 'correctly' means. What I gathered from the
>>>> implementation is that the supplied pointer is valid for the execution
>>>> of `f` for `PAGE_SIZE` bytes.
>>>> What other things are you allowed to rely upon?
>>>>
>>>> Is it really OK for this function to be called from multiple threads?
>>>> Could that not result in the same page being mapped multiple times? If
>>>> that is fine, what about potential data races when two threads write to
>>>> the pointer given to `f`?
>>>>
>>>>> + pub fn with_page_mapped<T>(&self, f: impl FnOnce(*mut u8) -> T) -> T {
>>>
>>> I will say:
>>>
>>> /// It is up to the caller to use the provided raw pointer correctly.
>>> /// The pointer is valid for `PAGE_SIZE` bytes and for the duration in
>>> /// which the closure is called. Depending on the gfp flags and kernel
>>> /// configuration, the pointer may only be mapped on the current thread,
>>> /// and in those cases, dereferencing it on other threads is UB. Other
>>> /// than that, the usual rules for dereferencing a raw pointer apply.
>>> /// (E.g., don't cause data races, the memory may be uninitialized, and
>>> /// so on.)
>>
>> I would simplify and drop "depending on the gfp flags and kernel..." and
>> just say that the pointer is only valid on the current thread.
>
> Sure, that works for me.
>
>> Also would it make sense to make the pointer type *mut [u8; PAGE_SIZE]?
>
> I think it's a trade-off. That makes the code more error-prone, since
> `pointer::add` now doesn't move by a number of bytes, but a number of
> pages.
Yeah. As long as you document that the pointer is valid for reads and
writes at offsets in `0..PAGE_SIZE`, leaving the type as-is is fine by me.
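To spell out the `pointer::add` pitfall with a hypothetical snippet, where
`ptr` is the `*mut u8` handed to the closure:

// With `*mut u8`, `add` counts in bytes:
let _at_16 = unsafe { ptr.add(16) }; // offset 16 inside the page

// With `*mut [u8; PAGE_SIZE]`, `add` counts in whole pages, so `add(16)`
// would jump 16 * PAGE_SIZE bytes and leave the allocation entirely.
// Reaching byte 16 means casting back to a byte pointer first:
let arr: *mut [u8; PAGE_SIZE] = ptr.cast();
let _at_16_again = unsafe { arr.cast::<u8>().add(16) };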
>>> It's okay to map it multiple times from different threads.
>>
>> Do you still need to take care of data races?
>> So would it be fine to execute this code on two threads in parallel?
>>
>> static PAGE: Page = ...; // assume we have a page accessible by both threads
>>
>> PAGE.with_page_mapped(|ptr| {
>> loop {
>> unsafe { ptr.write(0) };
>> pr_info!("{}", unsafe { ptr.read() });
>> }
>> });
>
> Like I said, the usual pointer rules apply. Two threads can access it
> in parallel as long as one of the following is satisfied:
>
> * Both accesses are reads.
> * Both accesses are atomic.
> * They access disjoint byte ranges.
>
> Other than the fact that it uses a thread-local mapping on machines
> that can't address all of their memory at the same time, it's
> completely normal memory. It's literally just a PAGE_SIZE-aligned
> allocation of PAGE_SIZE bytes.
Thanks for the info, what do you think of this?:
/// It is up to the caller to use the provided raw pointer correctly. The pointer is valid for reads
/// and writes for `PAGE_SIZE` bytes and for the duration in which the closure is called. The
/// pointer must only be used on the current thread. The caller must also ensure that no data races
/// occur: when mapping the same page on two threads, accesses to memory with the same offset must be
/// synchronized.
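As a hypothetical illustration of what that wording allows (`page` stands
for some `Page` reachable from both threads, e.g. via an `Arc`): the two
closures below write to disjoint byte ranges, so running them concurrently
is fine, while two unsynchronized writes to the same offset would be a data
race just like with any other raw pointer.

// Thread A:
page.with_page_mapped(|ptr| {
    // SAFETY: this thread only writes bytes 0..8 of the page.
    unsafe { core::ptr::write_bytes(ptr, 0xaa, 8) };
});

// Thread B, running concurrently:
page.with_page_mapped(|ptr| {
    // SAFETY: this thread only writes bytes 8..16, disjoint from thread A.
    unsafe { core::ptr::write_bytes(ptr.add(8), 0xbb, 8) };
});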
>
>> If this is not allowed, I don't really like the API. As a raw version it
>> would be fine, but I think we should have a safer version (eg by taking
>> `&mut self`).
>
> I don't understand what you mean. It is the *most* raw API that `Page`
> has. I can make them private if you want me to. The API cannot take
> `&mut self` because I need to be able to unsafely perform concurrent
> writes to disjoint byte ranges.
If you don't need these functions to be public, I think we should
definitely make them private.
Also we could add a `raw` suffix to the functions to make it clear that
it is a primitive API. If you think that it is highly unlikely that we
get a safer version, then I don't think there is value in adding the
suffix.
--
Cheers,
Benno
On 3/19/24 23:16, Boqun Feng wrote:
> On Mon, Mar 11, 2024 at 10:47:16AM +0000, Alice Ryhl wrote:
> [...]
>> /* `bindgen` gets confused at certain things. */
>> const size_t RUST_CONST_HELPER_ARCH_SLAB_MINALIGN = ARCH_SLAB_MINALIGN;
>> +const size_t RUST_CONST_HELPER_PAGE_SIZE = PAGE_SIZE;
>> +const size_t RUST_CONST_HELPER_PAGE_MASK = PAGE_MASK;
>
> At least for me, bindgen couldn't work out the macro expansion, and I
> got:
>
> pub const PAGE_SIZE: usize = 4096;
> extern "C" {
> pub static RUST_CONST_HELPER_PAGE_MASK: usize;
> }
>
> in rust/bindings/bindings_generated.rs, which will eventually cause the
> code to fail to compile.
>
> I'm using bindgen-cli 0.65.1, libclang (16 or 17), rustc (1.76 or 1.77).
>
> Anyone else sees the same thing?
I also have this problem with bindgen-cli 0.69.1, libclang 16, and rustc 1.76.
For reference, here is the actual compilation error:
error[E0425]: cannot find value `PAGE_MASK` in crate `bindings`
--> rust/kernel/page.rs:17:40
|
17 | pub const PAGE_MASK: usize = bindings::PAGE_MASK as usize;
| ^^^^^^^^^ help: a constant with a similar name exists: `GATE_TASK`
|
::: /home/benno/kernel/review/mem-man-binder/rust/bindings/bindings_generated.rs:12188:1
|
12188 | pub const GATE_TASK: _bindgen_ty_4 = 5;
| ---------------------------------- similarly named constant `GATE_TASK` defined here
error: type `gfp_t` should have an upper camel case name
--> rust/kernel/page.rs:21:14
|
21 | pub type gfp_t = bindings::gfp_t;
| ^^^^^ help: convert the identifier to upper camel case: `GfpT`
|
= note: `-D non-camel-case-types` implied by `-D warnings`
= help: to override `-D warnings` add `#[allow(non_camel_case_types)]`
error: aborting due to 2 previous errors
For more information about this error, try `rustc --explain E0425`.
@Alice: the second error should be unrelated to this problem.
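A possible workaround (just a sketch, not something the series necessarily
has to do) would be to stop going through bindgen for this particular macro
and compute the mask from `PAGE_SIZE` on the Rust side, mirroring the C
definition `#define PAGE_MASK (~(PAGE_SIZE-1))`:

// In rust/kernel/page.rs (sketch):

/// The number of bytes in a page.
pub const PAGE_SIZE: usize = bindings::PAGE_SIZE as usize;

/// Masking an address with this rounds it down to a page boundary, i.e.
/// `addr & PAGE_MASK` clears the low bits.
pub const PAGE_MASK: usize = !(PAGE_SIZE - 1);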
--
Cheers,
Benno
>
> Regards,
> Boqun
>
>> const gfp_t RUST_CONST_HELPER_GFP_KERNEL = GFP_KERNEL;
>> const gfp_t RUST_CONST_HELPER___GFP_ZERO = __GFP_ZERO;
>> +const gfp_t RUST_CONST_HELPER___GFP_HIGHMEM = ___GFP_HIGHMEM;
> [...]