From: Wedson Almeida Filho
Date: Wed, 12 Apr 2023 08:38:37 -0300
Subject: Re: [PATCH v4 02/13] rust: sync: introduce `Lock` and `Guard`
To: Gary Guo
Cc: rust-for-linux@vger.kernel.org, Miguel Ojeda, Alex Gaynor, Boqun Feng,
    Björn Roy Baron, linux-kernel@vger.kernel.org, Wedson Almeida Filho,
    Martin Rodriguez Reboredo
In-Reply-To: <20230411214205.5753343f.gary@garyguo.net>
References: <20230411054543.21278-1-wedsonaf@gmail.com>
    <20230411054543.21278-2-wedsonaf@gmail.com>
    <20230411214205.5753343f.gary@garyguo.net>

On Tue, 11 Apr 2023 at 17:42, Gary Guo wrote:
>
> On Tue, 11 Apr 2023 02:45:32 -0300
> Wedson Almeida Filho wrote:
>
> > From: Wedson Almeida Filho
> >
> > They are generic Rust implementations of a lock and a lock guard that
> > contain code that is common to all locks. Different backends will be
> > introduced in subsequent commits.
> >
> > Reviewed-by: Martin Rodriguez Reboredo
> > Suggested-by: Gary Guo
> > Signed-off-by: Wedson Almeida Filho
> > ---
> > v1 -> v2: No changes
> > v2 -> v3: Use new Opaque::ffi_init from Benno's series
> > v3 -> v4: Fixed name of parameter in Lock comment
> >
> >  rust/kernel/sync.rs      |   2 +-
> >  rust/kernel/sync/lock.rs | 162 +++++++++++++++++++++++++++++++++++++++
> >  2 files changed, 163 insertions(+), 1 deletion(-)
> >  create mode 100644 rust/kernel/sync/lock.rs
> >
> > diff --git a/rust/kernel/sync.rs b/rust/kernel/sync.rs
> > index 541d235ffbeb..81b0998eaa18 100644
> > --- a/rust/kernel/sync.rs
> > +++ b/rust/kernel/sync.rs
> > @@ -8,6 +8,7 @@
> >  use crate::types::Opaque;
> >
> >  mod arc;
> > +pub mod lock;
> >
> >  pub use arc::{Arc, ArcBorrow, UniqueArc};
> >
> > @@ -25,7 +26,6 @@ impl LockClassKey {
> >          Self(Opaque::uninit())
> >      }
> >
> > -    #[allow(dead_code)]
> >      pub(crate) fn as_ptr(&self) -> *mut bindings::lock_class_key {
> >          self.0.get()
> >      }
> > diff --git a/rust/kernel/sync/lock.rs b/rust/kernel/sync/lock.rs
> > new file mode 100644
> > index 000000000000..1a8ecccf4f24
> > --- /dev/null
> > +++ b/rust/kernel/sync/lock.rs
> > @@ -0,0 +1,162 @@
> > +// SPDX-License-Identifier: GPL-2.0
> > +
> > +//! Generic kernel lock and guard.
> > +//!
> > +//! It contains a generic Rust lock and guard that allow for different backends (e.g., mutexes,
> > +//! spinlocks, raw spinlocks) to be provided with minimal effort.
> > +
> > +use super::LockClassKey;
> > +use crate::{bindings, init::PinInit, pin_init, str::CStr, types::Opaque};
> > +use core::{cell::UnsafeCell, marker::PhantomData, marker::PhantomPinned};
> > +use macros::pin_data;
> > +
> > +/// The "backend" of a lock.
> > +///
> > +/// It is the actual implementation of the lock, without the need to repeat patterns used in all
> > +/// locks.
> > +///
> > +/// # Safety
> > +///
> > +/// - Implementers must ensure that only one thread/CPU may access the protected data once the lock
> > +/// is owned, that is, between calls to `lock` and `unlock`.
> > +pub unsafe trait Backend {
> > +    /// The state required by the lock.
> > +    type State;
> > +
> > +    /// The state required to be kept between lock and unlock.
> > +    type GuardState;
> > +
> > +    /// Initialises the lock.
> > +    ///
> > +    /// # Safety
> > +    ///
> > +    /// `ptr` must be valid for write for the duration of the call, while `name` and `key` must
> > +    /// remain valid for read indefinitely.
> > +    unsafe fn init(
> > +        ptr: *mut Self::State,
> > +        name: *const core::ffi::c_char,
> > +        key: *mut bindings::lock_class_key,
> > +    );
>
> Any reason that this takes FFI types rather than just `&'static CStr`
> and `&'static LockClassKey`?

Yes, because we want to move work that is done by all backend
implementations into `Lock`. This includes calls to convert these to ffi
types.
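
To make this concrete, a backend then boils down to forwarding these
pointers to the C side. A rough sketch of what a spinlock backend could
look like (illustrative only: the real backends come in later patches of
this series, and the exact `bindings` helper names may differ):

    pub struct SpinLockBackend;

    // SAFETY: The underlying kernel `spinlock_t` object ensures mutual exclusion.
    unsafe impl Backend for SpinLockBackend {
        type State = bindings::spinlock_t;
        type GuardState = ();

        unsafe fn init(
            ptr: *mut Self::State,
            name: *const core::ffi::c_char,
            key: *mut bindings::lock_class_key,
        ) {
            // SAFETY: The safety requirements ensure `ptr` is valid for writes,
            // and `name` and `key` remain valid for read indefinitely.
            unsafe { bindings::__spin_lock_init(ptr, name, key) }
        }

        unsafe fn lock(ptr: *mut Self::State) -> Self::GuardState {
            // SAFETY: The safety requirements ensure the lock is initialised.
            unsafe { bindings::spin_lock(ptr) }
        }

        unsafe fn unlock(ptr: *mut Self::State, _guard_state: &Self::GuardState) {
            // SAFETY: The caller owns the lock, per the trait's contract.
            unsafe { bindings::spin_unlock(ptr) }
        }
    }

The `CStr`/`LockClassKey` conversion happens once, in `Lock::new()` below,
so no backend has to repeat it.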
> > +
> > +    /// Acquires the lock, making the caller its owner.
> > +    ///
> > +    /// # Safety
> > +    ///
> > +    /// Callers must ensure that [`Backend::init`] has been previously called.
> > +    #[must_use]
> > +    unsafe fn lock(ptr: *mut Self::State) -> Self::GuardState;
> > +
> > +    /// Releases the lock, giving up its ownership.
> > +    ///
> > +    /// # Safety
> > +    ///
> > +    /// It must only be called by the current owner of the lock.
> > +    unsafe fn unlock(ptr: *mut Self::State, guard_state: &Self::GuardState);
> > +}
> > +
> > +/// A mutual exclusion primitive.
> > +///
> > +/// Exposes one of the kernel locking primitives. Which one is exposed depends on the lock backend
> > +/// specified as the generic parameter `B`.
> > +#[pin_data]
> > +pub struct Lock<T: ?Sized, B: Backend> {
> > +    /// The kernel lock object.
> > +    #[pin]
> > +    state: Opaque<B::State>,
> > +
> > +    /// Some locks are known to be self-referential (e.g., mutexes), while others are architecture
> > +    /// or config defined (e.g., spinlocks). So we conservatively require them to be pinned in case
> > +    /// some architecture uses self-references now or in the future.
> > +    #[pin]
> > +    _pin: PhantomPinned,
> > +
> > +    /// The data protected by the lock.
> > +    data: UnsafeCell<T>,
> > +}
> > +
> > +// SAFETY: `Lock` can be transferred across thread boundaries iff the data it protects can.
> > +unsafe impl<T: ?Sized + Send, B: Backend> Send for Lock<T, B> {}
> > +
> > +// SAFETY: `Lock` serialises the interior mutability it provides, so it is `Sync` as long as the
> > +// data it protects is `Send`.
> > +unsafe impl<T: ?Sized + Send, B: Backend> Sync for Lock<T, B> {}
> > +
> > +impl<T, B: Backend> Lock<T, B> {
> > +    /// Constructs a new lock initialiser.
> > +    #[allow(clippy::new_ret_no_self)]
> > +    pub fn new(t: T, name: &'static CStr, key: &'static LockClassKey) -> impl PinInit<Self> {
> > +        pin_init!(Self {
> > +            data: UnsafeCell::new(t),
> > +            _pin: PhantomPinned,
> > +            // SAFETY: `slot` is valid while the closure is called and both `name` and `key` have
> > +            // static lifetimes so they live indefinitely.
> > +            state <- Opaque::ffi_init(|slot| unsafe {
> > +                B::init(slot, name.as_char_ptr(), key.as_ptr())
> > +            }),
> > +        })
> > +    }
> > +}
>
> There is no drop implementation on `Lock`, which implies all locks can
> just be forgotten?

Yes, all locks can be forgotten.

> I believe we discussed a case where this can lead to UAF when a lock
> is dropped while it is locked (e.g. because the guard is forgotten).

Yes, this is the issue brought up by Boqun:
https://github.com/Rust-for-Linux/linux/issues/862

The issue arises when a mutex guard is forgotten and the task that owns it
exits. Then another task trying to acquire the mutex will lead to a UAF. A
drop implementation on the lock doesn't solve this.

One solution is to increment the refcount on the current task when we
acquire the mutex and decrement it when we release it, but if we do that,
the cost of acquiring/releasing a mutex gets much worse in Rust than it is
in C. Another solution might be to force-disable CONFIG_MUTEX_SPIN_ON_OWNER
when Rust is enabled, which is undesirable because it affects the
performance of C code as well.

Even a closure-based lock (which I believe you suggested at the time)
doesn't solve this completely, because the thread may exit during the
closure execution and leave a dangling pointer in the mutex.

So we don't have a good solution for this yet.
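
Spelled out, the problematic sequence is roughly the following (pseudo-code;
`m` here is assumed to be a mutex built on top of this `Lock`):

    // Task A:
    let guard = m.lock();
    core::mem::forget(guard); // Guard leaked: `unlock` never runs and the C
                              // mutex keeps recording task A as its owner.
    // Task A then exits and its task_struct is eventually freed.

    // Task B, later:
    let guard = m.lock();     // With CONFIG_MUTEX_SPIN_ON_OWNER, the
                              // optimistic spin reads the owner's
                              // task_struct, which may already be freed.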
> > +
> > +impl<T: ?Sized, B: Backend> Lock<T, B> {
> > +    /// Acquires the lock and gives the caller access to the data protected by it.
> > +    pub fn lock(&self) -> Guard<'_, T, B> {
> > +        // SAFETY: The constructor of the type calls `init`, so the existence of the object proves
> > +        // that `init` was called.
> > +        let state = unsafe { B::lock(self.state.get()) };
> > +        // SAFETY: The lock was just acquired.
> > +        unsafe { Guard::new(self, state) }
> > +    }
> > +}
> > +
> > +/// A lock guard.
> > +///
> > +/// Allows mutual exclusion primitives that implement the `Backend` trait to automatically unlock
> > +/// when a guard goes out of scope. It also provides a safe and convenient way to access the data
> > +/// protected by the lock.
> > +#[must_use = "the lock unlocks immediately when the guard is unused"]
> > +pub struct Guard<'a, T: ?Sized, B: Backend> {
> > +    pub(crate) lock: &'a Lock<T, B>,
> > +    pub(crate) state: B::GuardState,
> > +    _not_send: PhantomData<*mut ()>,
> > +}
> > +
> > +// SAFETY: `Guard` is sync when the data protected by the lock is also sync.
> > +unsafe impl<T: Sync + ?Sized, B: Backend> Sync for Guard<'_, T, B> {}
> > +
> > +impl<T: ?Sized, B: Backend> core::ops::Deref for Guard<'_, T, B> {
> > +    type Target = T;
> > +
> > +    fn deref(&self) -> &Self::Target {
> > +        // SAFETY: The caller owns the lock, so it is safe to deref the protected data.
> > +        unsafe { &*self.lock.data.get() }
> > +    }
> > +}
> > +
> > +impl<T: ?Sized, B: Backend> core::ops::DerefMut for Guard<'_, T, B> {
> > +    fn deref_mut(&mut self) -> &mut Self::Target {
> > +        // SAFETY: The caller owns the lock, so it is safe to deref the protected data.
> > +        unsafe { &mut *self.lock.data.get() }
> > +    }
> > +}
> > +
> > +impl<T: ?Sized, B: Backend> Drop for Guard<'_, T, B> {
> > +    fn drop(&mut self) {
> > +        // SAFETY: The caller owns the lock, so it is safe to unlock it.
> > +        unsafe { B::unlock(self.lock.state.get(), &self.state) };
> > +    }
> > +}
> > +
> > +impl<'a, T: ?Sized, B: Backend> Guard<'a, T, B> {
> > +    /// Constructs a new immutable lock guard.
> > +    ///
> > +    /// # Safety
> > +    ///
> > +    /// The caller must ensure that it owns the lock.
> > +    pub(crate) unsafe fn new(lock: &'a Lock<T, B>, state: B::GuardState) -> Self {
> > +        Self {
> > +            lock,
> > +            state,
> > +            _not_send: PhantomData,
> > +        }
> > +    }
> > +}
>
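
For completeness, usage through a concrete backend is intended to look
roughly like this (`MutexBackend` is a placeholder here; the actual
definitions arrive later in the series):

    type Mutex<T> = Lock<T, MutexBackend>;

    struct Inner {
        count: u32,
    }

    fn bump(m: &Mutex<Inner>) {
        let mut guard = m.lock(); // `Backend::lock` runs here.
        guard.count += 1;         // `DerefMut` gives access to the data.
    }                             // Guard dropped here => `Backend::unlock`.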