References: <20240415-alice-mm-v5-0-6f55e4d8ef51@google.com> <20240415-alice-mm-v5-4-6f55e4d8ef51@google.com>
In-Reply-To: <20240415-alice-mm-v5-4-6f55e4d8ef51@google.com>
From: Trevor Gross
Date: Tue, 16 Apr 2024 01:40:25 -0400
Subject: Re: [PATCH v5 4/4] rust: add abstraction for `struct page`
To: Alice Ryhl
Cc: Miguel Ojeda, Matthew Wilcox, Al Viro, Andrew Morton, Kees Cook,
 Alex Gaynor, Wedson Almeida Filho, Boqun Feng, Gary Guo,
 Björn Roy Baron, Benno Lossin, Andreas Hindborg, Greg Kroah-Hartman,
 Arve Hjønnevåg, Todd Kjos, Martijn Coenen, Joel Fernandes,
 Carlos Llamas, Suren Baghdasaryan, Arnd Bergmann, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, rust-for-linux@vger.kernel.org,
 Christian Brauner

On Mon, Apr 15, 2024 at 3:15 AM Alice Ryhl wrote:
>
> Adds a new struct called `Page` that wraps a pointer to `struct page`.
> This struct is assumed to hold ownership over the page, so that Rust
> code can allocate and manage pages directly.
>
> The page type has various methods for reading and writing into the page.
> These methods will temporarily map the page to allow the operation. All
> of these methods use a helper that takes an offset and length, performs
> bounds checks, and returns a pointer to the given offset in the page.
>
> This patch only adds support for pages of order zero, as that is all
> Rust Binder needs. However, it is written to make it easy to add support
> for higher-order pages in the future. To do that, you would add a const
> generic parameter to `Page` that specifies the order. Most of the
> methods do not need to be adjusted, as the logic for dealing with
> mapping multiple pages at once can be isolated to just the
> `with_pointer_into_page` method.
>
> Rust Binder needs to manage pages directly as that is how transactions
> are delivered: Each process has an mmap'd region for incoming
> transactions. When an incoming transaction arrives, the Binder driver
> will choose a region in the mmap, allocate and map the relevant pages
> manually, and copy the incoming transaction directly into the page. This
> architecture allows the driver to copy transactions directly from the
> address space of one process to another, without an intermediate copy
> to a kernel buffer.
>
> This code is based on Wedson's page abstractions from the old rust
> branch, but it has been modified by Alice by removing the incomplete
> support for higher-order pages, by introducing the `with_*` helpers
> to consolidate the bounds checking logic into a single place, and by
> introducing gfp flags.
>
> Co-developed-by: Wedson Almeida Filho
> Signed-off-by: Wedson Almeida Filho
> Signed-off-by: Alice Ryhl

I have a couple of questions about naming, and I think an example would
be good for the functions that are trickier to use correctly. But I
wouldn't block on this; the implementation looks good to me.

Reviewed-by: Trevor Gross

> +++ b/rust/kernel/page.rs
> @@ -0,0 +1,240 @@
> +// SPDX-License-Identifier: GPL-2.0
> +
> +//! Kernel page allocation and management.
> +
> +use crate::{bindings, error::code::*, error::Result, uaccess::UserSliceReader};
> +use core::{
> +    alloc::AllocError,
> +    ptr::{self, NonNull},
> +};
> +
> +/// A bitwise shift for the page size.
> +pub const PAGE_SHIFT: usize = bindings::PAGE_SHIFT as usize;
> +
> +/// The number of bytes in a page.
> +pub const PAGE_SIZE: usize = bindings::PAGE_SIZE;
> +
> +/// A bitmask that gives the page containing a given address.
> +pub const PAGE_MASK: usize = !(PAGE_SIZE - 1);
> +
> +/// Flags for the "get free page" function that underlies all memory allocations.
> +pub mod flags {
> +    /// gfp flags.

Uppercase acronym, maybe with a description: "GFP (Get Free Page) flags".

> +    #[allow(non_camel_case_types)]
> +    pub type gfp_t = bindings::gfp_t;

Why not `GfpFlags`? Do we do this elsewhere?

> +    /// `GFP_KERNEL` is typical for kernel-internal allocations. The caller requires `ZONE_NORMAL`
> +    /// or a lower zone for direct access but can direct reclaim.
> +    pub const GFP_KERNEL: gfp_t = bindings::GFP_KERNEL;
> +    /// `GFP_ZERO` returns a zeroed page on success.
> +    pub const __GFP_ZERO: gfp_t = bindings::__GFP_ZERO;
> +    /// `GFP_HIGHMEM` indicates that the allocated memory may be located in high memory.
> +    pub const __GFP_HIGHMEM: gfp_t = bindings::__GFP_HIGHMEM;

It feels a bit weird to have dunder constants on the Rust side that
aren't also `#[doc(hidden)]` or just nonpublic. It makes me think they
are an implementation detail or not really meant to be used - could you
update the docs if this is the case?

> +
> +impl Page {
> +    /// Allocates a new page.

Could you add a small example here?
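Something small would do, e.g. a doc test along these lines (untested
sketch; the flag combination and the hidden `Ok` line are only there to
make the example self-contained, not a recommendation):

    /// # Examples
    ///
    /// ```
    /// use kernel::page::{flags, Page};
    ///
    /// // Allocate a single zeroed page for kernel-internal use.
    /// let page = Page::alloc_page(flags::GFP_KERNEL | flags::__GFP_ZERO)?;
    /// // The allocation is returned to the page allocator when `page` is dropped.
    /// # Ok::<(), core::alloc::AllocError>(())
    /// ```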
> +    pub fn alloc_page(gfp_flags: flags::gfp_t) -> Result<Self, AllocError> {
> [...]
> +    }
> +
> +    /// Returns a raw pointer to the page.

Could you add a note about how the pointer needs to be used correctly,
if it is for anything more than interfacing with kernel APIs?

> +    pub fn as_ptr(&self) -> *mut bindings::page {
> +        self.page.as_ptr()
> +    }
> +
> +    /// Runs a piece of code with this page mapped to an address.
> +    ///
> +    /// The page is unmapped when this call returns.
> +    ///
> +    /// # Using the raw pointer
> +    ///
> +    /// It is up to the caller to use the provided raw pointer correctly. The pointer is valid for
> +    /// `PAGE_SIZE` bytes and for the duration in which the closure is called. The pointer might
> +    /// only be mapped on the current thread, and when that is the case, dereferencing it on other
> +    /// threads is UB. Other than that, the usual rules for dereferencing a raw pointer apply: don't
> +    /// cause data races, the memory may be uninitialized, and so on.
> +    ///
> +    /// If multiple threads map the same page at the same time, then they may reference with
> +    /// different addresses. However, even if the addresses are different, the underlying memory is
> +    /// still the same for these purposes (e.g., it's still a data race if they both write to the
> +    /// same underlying byte at the same time).
> +    fn with_page_mapped<T>(&self, f: impl FnOnce(*mut u8) -> T) -> T {
> [...]
> +    }

Could you add an example of how to use this correctly?

> +    /// Runs a piece of code with a raw pointer to a slice of this page, with bounds checking.
> +    ///
> +    /// If `f` is called, then it will be called with a pointer that points at `off` bytes into the
> +    /// page, and the pointer will be valid for at least `len` bytes. The pointer is only valid on
> +    /// this task, as this method uses a local mapping.
> +    ///
> +    /// If `off` and `len` refers to a region outside of this page, then this method returns
> +    /// `EINVAL` and does not call `f`.
> +    ///
> +    /// # Using the raw pointer
> +    ///
> +    /// It is up to the caller to use the provided raw pointer correctly. The pointer is valid for
> +    /// `len` bytes and for the duration in which the closure is called. The pointer might only be
> +    /// mapped on the current thread, and when that is the case, dereferencing it on other threads
> +    /// is UB. Other than that, the usual rules for dereferencing a raw pointer apply: don't cause
> +    /// data races, the memory may be uninitialized, and so on.
> +    ///
> +    /// If multiple threads map the same page at the same time, then they may reference with
> +    /// different addresses. However, even if the addresses are different, the underlying memory is
> +    /// still the same for these purposes (e.g., it's still a data race if they both write to the
> +    /// same underlying byte at the same time).

This could probably also use an example (rough sketch below). A note
about how to select between with_pointer_into_page and with_page_mapped
would also be nice to guide usage, e.g. "prefer with_pointer_into_page
for all cases except when...".

> +    fn with_pointer_into_page<T>(
> +        &self,
> +        off: usize,
> +        len: usize,
> +        f: impl FnOnce(*mut u8) -> Result<T>,
> +    ) -> Result<T> {
> [...]
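Concretely, something like this is the kind of example I mean (untested;
`read_u32` is a made-up method, not part of this patch, and it follows
the same "no data races" safety contract as the other accessors here):

    /// Reads a `u32` at `offset` within the page.
    ///
    /// # Safety
    ///
    /// Callers must ensure that this call does not race with a write to
    /// the same region of the page.
    pub unsafe fn read_u32(&self, offset: usize) -> Result<u32> {
        self.with_pointer_into_page(offset, core::mem::size_of::<u32>(), |src| {
            // SAFETY: `with_pointer_into_page` has bounds-checked the region,
            // so `src` is valid for `size_of::<u32>()` bytes while the closure
            // runs, and the caller guarantees there is no data race.
            Ok(unsafe { src.cast::<u32>().read_unaligned() })
        })
    }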
> +    /// Maps the page and zeroes the given slice.
> +    ///
> +    /// This method will perform bounds checks on the page offset. If `offset .. offset+len` goes
> +    /// outside of the page, then this call returns `EINVAL`.
> +    ///
> +    /// # Safety
> +    ///
> +    /// Callers must ensure that this call does not race with a read or write to the same page that
> +    /// overlaps with this write.
> +    pub unsafe fn fill_zero(&self, offset: usize, len: usize) -> Result {
> +        self.with_pointer_into_page(offset, len, move |dst| {
> +            // SAFETY: If `with_pointer_into_page` calls into this closure, then it has performed a
> +            // bounds check and guarantees that `dst` is valid for `len` bytes.
> +            //
> +            // The caller guarantees that there is no data race.
> +            unsafe { ptr::write_bytes(dst, 0u8, len) };
> +            Ok(())
> +        })
> +    }

Could this be named `fill_zero_raw` to leave room for a safe
`fill_zero(&mut self, ...)`?

> +    /// Copies data from userspace into this page.
> +    ///
> +    /// This method will perform bounds checks on the page offset. If `offset .. offset+len` goes
> +    /// outside of the page, then this call returns `EINVAL`.
> +    ///
> +    /// Like the other `UserSliceReader` methods, data races are allowed on the userspace address.
> +    /// However, they are not allowed on the page you are copying into.
> +    ///
> +    /// # Safety
> +    ///
> +    /// Callers must ensure that this call does not race with a read or write to the same page that
> +    /// overlaps with this write.
> +    pub unsafe fn copy_from_user_slice(
> +        &self,
> +        reader: &mut UserSliceReader,
> +        offset: usize,
> +        len: usize,
> +    ) -> Result {
> +        self.with_pointer_into_page(offset, len, move |dst| {
> +            // SAFETY: If `with_pointer_into_page` calls into this closure, then it has performed a
> +            // bounds check and guarantees that `dst` is valid for `len` bytes. Furthermore, we have
> +            // exclusive access to the slice since the caller guarantees that there are no races.
> +            reader.read_raw(unsafe { core::slice::from_raw_parts_mut(dst.cast(), len) })
> +        })
> +    }
> +}

Same as above: `copy_from_user_slice_raw` would leave room for a safe API.
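For reference, the kind of safe variant I have in mind would be roughly
this (sketch only, not part of this patch; it assumes that exclusive
`&mut self` access is enough to rule out concurrent readers and writers
for the callers that would use it):

    /// Zeroes the given range of the page.
    pub fn fill_zero(&mut self, offset: usize, len: usize) -> Result {
        self.with_pointer_into_page(offset, len, |dst| {
            // SAFETY: `with_pointer_into_page` has bounds-checked the region,
            // and `&mut self` gives exclusive access to this page, so there is
            // no concurrent read or write racing with this one.
            unsafe { ptr::write_bytes(dst, 0u8, len) };
            Ok(())
        })
    }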