Date: Wed, 01 Nov 2023 18:01:38 +0000
In-Reply-To: <20231101-rust-binder-v1-0-08ba9197f637@google.com>
Mime-Version: 1.0
References: <20231101-rust-binder-v1-0-08ba9197f637@google.com>
Message-ID: <20231101-rust-binder-v1-8-08ba9197f637@google.com>
Subject: [PATCH RFC 08/20] rust_binder: add non-oneway transactions
From: Alice Ryhl
To: Greg Kroah-Hartman, Arve Hjønnevåg, Todd Kjos, Martijn Coenen,
    Joel Fernandes, Christian Brauner, Carlos Llamas, Suren Baghdasaryan,
    Miguel Ojeda, Alex Gaynor, Wedson Almeida Filho
Cc: linux-kernel@vger.kernel.org,
    rust-for-linux@vger.kernel.org, Boqun Feng, Gary Guo, Björn Roy Baron,
    Benno Lossin, Andreas Hindborg, Matt Gilbride, Jeffrey Vander Stoep,
    Matthew Maurer, Alice Ryhl
Content-Type: text/plain; charset="utf-8"

From: Wedson Almeida Filho

Make it possible to send transactions that are not oneway transactions,
that is, transactions that you need to reply to. Generally, binder tries
to look like a normal function call, where the call blocks until the
function returns. This is implemented by allowing you to reply to
incoming transactions, and by having the sender sleep until a reply
arrives.

For each thread, binder keeps track of the current transaction.
Furthermore, if you send a transaction from a thread that already has a
current transaction, then binder turns the new transaction into a
"sub-transaction" of it. This mimics a call stack of normal function
calls.

If you use sub-transactions to send calls A->B->A, with A and B being
two different processes, then binder ensures that the incoming
sub-transaction is executed on the thread in A that sent the original
message to B (and that this thread in A is not used for any other
incoming transactions). This feature is often referred to as "deadlock
avoidance" because it avoids the case where A's threadpool has run out
of threads, which would prevent the incoming sub-transaction from being
processed.
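As a standalone illustration of the stack walk described above (this is not
driver code: thread and process identities are reduced to plain integers,
`DArc` to `std::sync::Arc`, and the function name mirrors the driver's
`find_target_thread`), the thread-reuse logic can be sketched as:

```rust
// Minimal userspace model of the binder transaction stack. Each transaction
// records which thread/process sent it; `stack_next` is the transaction
// below it on the stack.
use std::sync::Arc;

struct Transaction {
    from_thread: u32,                     // thread that sent this transaction
    from_process: u32,                    // process owning that thread
    stack_next: Option<Arc<Transaction>>, // next transaction down the stack
}

// Walk the stack looking for a transaction whose sender belongs to the
// target process; if found, the new transaction is delivered to that
// thread instead of the process-wide work queue.
fn find_target_thread(top: &Transaction, target_process: u32) -> Option<u32> {
    let mut it = &top.stack_next;
    while let Some(t) = it {
        if t.from_process == target_process {
            return Some(t.from_thread);
        }
        it = &t.stack_next;
    }
    None
}

fn main() {
    // A (thread 1, process 10) called B; now B (thread 2, process 20)
    // calls back into process 10: the nested call must land on thread 1.
    let a_to_b = Arc::new(Transaction {
        from_thread: 1,
        from_process: 10,
        stack_next: None,
    });
    let b_to_a = Transaction {
        from_thread: 2,
        from_process: 20,
        stack_next: Some(a_to_b),
    };
    assert_eq!(find_target_thread(&b_to_a, 10), Some(1));
    assert_eq!(find_target_thread(&b_to_a, 30), None);
    println!("nested call reuses thread 1");
}
```

This is the mechanism that makes the A->B->A case above deadlock-free even
when A's threadpool is exhausted: the waiting thread in A is reused rather
than a fresh one being required.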
Signed-off-by: Wedson Almeida Filho
Co-developed-by: Alice Ryhl
Signed-off-by: Alice Ryhl
---
 drivers/android/defs.rs        |   2 +
 drivers/android/thread.rs      | 218 ++++++++++++++++++++++++++++++++++++++++-
 drivers/android/transaction.rs | 132 ++++++++++++++++++++++---
 3 files changed, 336 insertions(+), 16 deletions(-)

diff --git a/drivers/android/defs.rs b/drivers/android/defs.rs
index d0fc00fa5a57..32178e8c5596 100644
--- a/drivers/android/defs.rs
+++ b/drivers/android/defs.rs
@@ -33,6 +33,8 @@ macro_rules! pub_no_prefix {
     binder_driver_command_protocol_,
     BC_TRANSACTION,
     BC_TRANSACTION_SG,
+    BC_REPLY,
+    BC_REPLY_SG,
     BC_FREE_BUFFER,
     BC_ENTER_LOOPER,
     BC_EXIT_LOOPER,

diff --git a/drivers/android/thread.rs b/drivers/android/thread.rs
index 159beebbd23e..b583297cea91 100644
--- a/drivers/android/thread.rs
+++ b/drivers/android/thread.rs
@@ -56,6 +56,10 @@ struct InnerThread {
     /// Determines if thread is dead.
     is_dead: bool,
 
+    /// Work item used to deliver error codes to the thread that started a transaction. Stored here
+    /// so that it can be reused.
+    reply_work: DArc<ThreadError>,
+
     /// Work item used to deliver error codes to the current thread. Stored here so that it can be
     /// reused.
     return_work: DArc<ThreadError>,
@@ -65,6 +69,7 @@ struct InnerThread {
     process_work_list: bool,
     /// List of work items to deliver to userspace.
     work_list: List<DTRWrap<dyn DeliverToRead>>,
+    current_transaction: Option<DArc<Transaction>>,
 
     /// Extended error information for this thread.
     extended_error: ExtendedError,
@@ -90,8 +95,10 @@ fn next_err_id() -> u32 {
             looper_need_return: false,
             is_dead: false,
             process_work_list: false,
+            reply_work: ThreadError::try_new()?,
             return_work: ThreadError::try_new()?,
             work_list: List::new(),
+            current_transaction: None,
             extended_error: ExtendedError::new(next_err_id(), BR_OK, 0),
         })
     }
@@ -116,6 +123,15 @@ fn push_work(&mut self, work: DLArc<dyn DeliverToRead>) -> PushWorkRes {
         }
     }
 
+    fn push_reply_work(&mut self, code: u32) {
+        if let Ok(work) = ListArc::try_from_arc(self.reply_work.clone()) {
+            work.set_error_code(code);
+            self.push_work(work);
+        } else {
+            pr_warn!("Thread reply work is already in use.");
+        }
+    }
+
     fn push_return_work(&mut self, reply: u32) {
         if let Ok(work) = ListArc::try_from_arc(self.return_work.clone()) {
             work.set_error_code(reply);
@@ -131,6 +147,36 @@ fn push_work_deferred(&mut self, work: DLArc<dyn DeliverToRead>) {
         self.work_list.push_back(work);
     }
 
+    /// Fetches the transaction this thread can reply to. If the thread has a pending transaction
+    /// (that it could respond to) but it has also issued a transaction, it must first wait for the
+    /// previously-issued transaction to complete.
+    ///
+    /// The `thread` parameter should be the thread containing this `InnerThread`.
+    fn pop_transaction_to_reply(&mut self, thread: &Thread) -> Result<DArc<Transaction>> {
+        let transaction = self.current_transaction.take().ok_or(EINVAL)?;
+        if core::ptr::eq(thread, transaction.from.as_ref()) {
+            self.current_transaction = Some(transaction);
+            return Err(EINVAL);
+        }
+        // Find a new current transaction for this thread.
+        self.current_transaction = transaction.find_from(thread);
+        Ok(transaction)
+    }
+
+    fn pop_transaction_replied(&mut self, transaction: &DArc<Transaction>) -> bool {
+        match self.current_transaction.take() {
+            None => false,
+            Some(old) => {
+                if !Arc::ptr_eq(transaction, &old) {
+                    self.current_transaction = Some(old);
+                    return false;
+                }
+                self.current_transaction = old.clone_next();
+                true
+            }
+        }
+    }
+
     fn looper_enter(&mut self) {
         self.looper_flags |= LOOPER_ENTERED;
         if self.looper_flags & LOOPER_REGISTERED != 0 {
@@ -159,7 +205,7 @@ fn is_looper(&self) -> bool {
     /// looper. Also, if there is local work, we want to return to userspace before we deliver any
     /// remote work.
     fn should_use_process_work_queue(&self) -> bool {
-        !self.process_work_list && self.is_looper()
+        self.current_transaction.is_none() && !self.process_work_list && self.is_looper()
     }
 
     fn poll(&mut self) -> u32 {
@@ -225,6 +271,10 @@ pub(crate) fn get_extended_error(&self, data: UserSlicePtr) -> Result {
         Ok(())
     }
 
+    pub(crate) fn set_current_transaction(&self, transaction: DArc<Transaction>) {
+        self.inner.lock().current_transaction = Some(transaction);
+    }
+
     /// Attempts to fetch a work item from the thread-local queue. The behaviour if the queue is
     /// empty depends on `wait`: if it is true, the function waits for some work to be queued (or a
     /// signal); otherwise it returns indicating that none is available.
@@ -407,6 +457,89 @@ pub(crate) fn copy_transaction_data(
         Ok(alloc)
     }
 
+    fn unwind_transaction_stack(self: &Arc<Self>) {
+        let mut thread = self.clone();
+        while let Ok(transaction) = {
+            let mut inner = thread.inner.lock();
+            inner.pop_transaction_to_reply(thread.as_ref())
+        } {
+            let reply = Either::Right(BR_DEAD_REPLY);
+            if !transaction.from.deliver_single_reply(reply, &transaction) {
+                break;
+            }
+
+            thread = transaction.from.clone();
+        }
+    }
+
+    pub(crate) fn deliver_reply(
+        &self,
+        reply: Either<DLArc<Transaction>, u32>,
+        transaction: &DArc<Transaction>,
+    ) {
+        if self.deliver_single_reply(reply, transaction) {
+            transaction.from.unwind_transaction_stack();
+        }
+    }
+
+    /// Delivers a reply to the thread that started a transaction. The reply can either be a
+    /// reply-transaction or an error code to be delivered instead.
+    ///
+    /// Returns whether the thread is dead. If it is, the caller is expected to unwind the
+    /// transaction stack by completing transactions for threads that are dead.
+    fn deliver_single_reply(
+        &self,
+        reply: Either<DLArc<Transaction>, u32>,
+        transaction: &DArc<Transaction>,
+    ) -> bool {
+        {
+            let mut inner = self.inner.lock();
+            if !inner.pop_transaction_replied(transaction) {
+                return false;
+            }
+
+            if inner.is_dead {
+                return true;
+            }
+
+            match reply {
+                Either::Left(work) => {
+                    inner.push_work(work);
+                }
+                Either::Right(code) => inner.push_reply_work(code),
+            }
+        }
+
+        // Notify the thread now that we've released the inner lock.
+        self.work_condvar.notify_sync();
+        false
+    }
+
+    /// Determines if the given transaction is the current transaction for this thread.
+    fn is_current_transaction(&self, transaction: &DArc<Transaction>) -> bool {
+        let inner = self.inner.lock();
+        match &inner.current_transaction {
+            None => false,
+            Some(current) => Arc::ptr_eq(current, transaction),
+        }
+    }
+
+    /// Determines the current top of the transaction stack. It fails if the top is in another
+    /// thread (i.e., this thread belongs to a stack but it has called another thread). The top is
+    /// [`None`] if the thread is not currently participating in a transaction stack.
+    fn top_of_transaction_stack(&self) -> Result<Option<DArc<Transaction>>> {
+        let inner = self.inner.lock();
+        if let Some(cur) = &inner.current_transaction {
+            if core::ptr::eq(self, cur.from.as_ref()) {
+                pr_warn!("got new transaction with bad transaction stack");
+                return Err(EINVAL);
+            }
+            Ok(Some(cur.clone()))
+        } else {
+            Ok(None)
+        }
+    }
+
     fn transaction<T>(self: &Arc<Self>, tr: &BinderTransactionDataSg, inner: T)
     where
         T: FnOnce(&Arc<Self>, &BinderTransactionDataSg) -> BinderResult,
@@ -427,12 +560,79 @@ fn transaction<T>(self: &Arc<Self>, tr: &BinderTransactionDataSg, inner: T)
         }
     }
 
+    fn transaction_inner(self: &Arc<Self>, tr: &BinderTransactionDataSg) -> BinderResult {
+        let handle = unsafe { tr.transaction_data.target.handle };
+        let node_ref = self.process.get_transaction_node(handle)?;
+        security::binder_transaction(&self.process.cred, &node_ref.node.owner.cred)?;
+        // TODO: We need to ensure that there isn't a pending transaction in the work queue. How
+        // could this happen?
+        let top = self.top_of_transaction_stack()?;
+        let list_completion = DTRWrap::arc_try_new(DeliverCode::new(BR_TRANSACTION_COMPLETE))?;
+        let completion = list_completion.clone_arc();
+        let transaction = Transaction::new(node_ref, top, self, tr)?;
+
+        // Check that the transaction stack hasn't changed while the lock was released, then update
+        // it with the new transaction.
+        {
+            let mut inner = self.inner.lock();
+            if !transaction.is_stacked_on(&inner.current_transaction) {
+                pr_warn!("Transaction stack changed during transaction!");
+                return Err(EINVAL.into());
+            }
+            inner.current_transaction = Some(transaction.clone_arc());
+            // We push the completion as a deferred work so that we wait for the reply before
+            // returning to userland.
+            inner.push_work_deferred(list_completion);
+        }
+
+        if let Err(e) = transaction.submit() {
+            completion.skip();
+            // Define `transaction` first to drop it after `inner`.
+            let transaction;
+            let mut inner = self.inner.lock();
+            transaction = inner.current_transaction.take().unwrap();
+            inner.current_transaction = transaction.clone_next();
+            Err(e)
+        } else {
+            Ok(())
+        }
+    }
+
+    fn reply_inner(self: &Arc<Self>, tr: &BinderTransactionDataSg) -> BinderResult {
+        let orig = self.inner.lock().pop_transaction_to_reply(self)?;
+        if !orig.from.is_current_transaction(&orig) {
+            return Err(EINVAL.into());
+        }
+
+        // We need to complete the transaction even if we cannot complete building the reply.
+        (|| -> BinderResult<_> {
+            let completion = DTRWrap::arc_try_new(DeliverCode::new(BR_TRANSACTION_COMPLETE))?;
+            let process = orig.from.process.clone();
+            let reply = Transaction::new_reply(self, process, tr)?;
+            self.inner.lock().push_work(completion);
+            orig.from.deliver_reply(Either::Left(reply), &orig);
+            Ok(())
+        })()
+        .map_err(|mut err| {
+            // At this point we only return `BR_TRANSACTION_COMPLETE` to the caller, and we must let
+            // the sender know that the transaction has completed (with an error in this case).
+            pr_warn!(
+                "Failure {:?} during reply - delivering BR_FAILED_REPLY to sender.",
+                err
+            );
+            let reply = Either::Right(BR_FAILED_REPLY);
+            orig.from.deliver_reply(reply, &orig);
+            err.reply = BR_TRANSACTION_COMPLETE;
+            err
+        })
+    }
+
     fn oneway_transaction_inner(self: &Arc<Self>, tr: &BinderTransactionDataSg) -> BinderResult {
         let handle = unsafe { tr.transaction_data.target.handle };
         let node_ref = self.process.get_transaction_node(handle)?;
         security::binder_transaction(&self.process.cred, &node_ref.node.owner.cred)?;
         let list_completion = DTRWrap::arc_try_new(DeliverCode::new(BR_TRANSACTION_COMPLETE))?;
-        let transaction = Transaction::new(node_ref, self, tr)?;
+        let transaction = Transaction::new(node_ref, None, self, tr)?;
         let completion = list_completion.clone_arc();
         self.inner.lock().push_work(list_completion);
         match transaction.submit() {
@@ -458,7 +658,7 @@ fn write(self: &Arc<Self>, req: &mut BinderWriteRead) -> Result {
                     if tr.transaction_data.flags & TF_ONE_WAY != 0 {
                         self.transaction(&tr, Self::oneway_transaction_inner);
                     } else {
-                        return Err(EINVAL);
+                        self.transaction(&tr, Self::transaction_inner);
                     }
                 }
                 BC_TRANSACTION_SG => {
@@ -466,9 +666,17 @@ fn write(self: &Arc<Self>, req: &mut BinderWriteRead) -> Result {
                     if tr.transaction_data.flags & TF_ONE_WAY != 0 {
                         self.transaction(&tr, Self::oneway_transaction_inner);
                     } else {
-                        return Err(EINVAL);
+                        self.transaction(&tr, Self::transaction_inner);
                     }
                 }
+                BC_REPLY => {
+                    let tr = reader.read::<BinderTransactionData>()?.with_buffers_size(0);
+                    self.transaction(&tr, Self::reply_inner)
+                }
+                BC_REPLY_SG => {
+                    let tr = reader.read::<BinderTransactionDataSg>()?;
+                    self.transaction(&tr, Self::reply_inner)
+                }
                 BC_FREE_BUFFER => drop(self.process.buffer_get(reader.read()?)),
                 BC_INCREFS => self.process.update_ref(reader.read()?, true, false)?,
                 BC_ACQUIRE => self.process.update_ref(reader.read()?, true, true)?,
@@ -644,6 +852,8 @@ pub(crate) fn release(self: &Arc<Self>) {
         while let Ok(Some(work)) = self.get_work_local(false) {
             work.into_arc().cancel();
         }
+
+        self.unwind_transaction_stack();
     }
 }

diff --git a/drivers/android/transaction.rs b/drivers/android/transaction.rs
index 8b4274ddc415..a6525a4253ea 100644
--- a/drivers/android/transaction.rs
+++ b/drivers/android/transaction.rs
@@ -6,23 +6,25 @@
     prelude::*,
     sync::{Arc, SpinLock},
     task::Kuid,
+    types::{Either, ScopeGuard},
     user_ptr::UserSlicePtrWriter,
 };
 
 use crate::{
     allocation::Allocation,
     defs::*,
-    error::BinderResult,
+    error::{BinderError, BinderResult},
     node::{Node, NodeRef},
     process::Process,
     ptr_align,
-    thread::Thread,
+    thread::{PushWorkRes, Thread},
     DArc, DLArc, DTRWrap, DeliverToRead,
 };
 
 #[pin_data]
 pub(crate) struct Transaction {
     target_node: Option<NodeRef>,
+    stack_next: Option<DArc<Transaction>>,
     pub(crate) from: Arc<Thread>,
     to: Arc<Process>,
     #[pin]
@@ -42,6 +44,7 @@ pub(crate) struct Transaction {
 impl Transaction {
     pub(crate) fn new(
         node_ref: NodeRef,
+        stack_next: Option<DArc<Transaction>>,
         from: &Arc<Thread>,
         tr: &BinderTransactionDataSg,
     ) -> BinderResult<DLArc<Self>> {
@@ -59,8 +62,8 @@ pub(crate) fn new(
                 return Err(err);
             }
         };
-        if trd.flags & TF_ONE_WAY == 0 {
-            pr_warn!("Non-oneway transactions are not yet supported.");
+        if trd.flags & TF_ONE_WAY != 0 && stack_next.is_some() {
+            pr_warn!("Oneway transaction should not be in a transaction stack.");
             return Err(EINVAL.into());
         }
         if trd.flags & TF_CLEAR_BUF != 0 {
@@ -72,6 +75,7 @@ pub(crate) fn new(
 
         Ok(DTRWrap::arc_pin_init(pin_init!(Transaction {
             target_node: Some(target_node),
+            stack_next,
             sender_euid: from.process.cred.euid(),
             from: from.clone(),
             to,
@@ -84,15 +88,100 @@ pub(crate) fn new(
         }))?)
     }
 
-    /// Submits the transaction to a work queue.
+    pub(crate) fn new_reply(
+        from: &Arc<Thread>,
+        to: Arc<Process>,
+        tr: &BinderTransactionDataSg,
+    ) -> BinderResult<DLArc<Self>> {
+        let trd = &tr.transaction_data;
+        let mut alloc = match from.copy_transaction_data(to.clone(), tr, None) {
+            Ok(alloc) => alloc,
+            Err(err) => {
+                pr_warn!("Failure in copy_transaction_data: {:?}", err);
+                return Err(err);
+            }
+        };
+        if trd.flags & TF_CLEAR_BUF != 0 {
+            alloc.set_info_clear_on_drop();
+        }
+        Ok(DTRWrap::arc_pin_init(pin_init!(Transaction {
+            target_node: None,
+            stack_next: None,
+            sender_euid: from.process.task.euid(),
+            from: from.clone(),
+            to,
+            code: trd.code,
+            flags: trd.flags,
+            data_size: trd.data_size as _,
+            data_address: alloc.ptr,
+            allocation <- kernel::new_spinlock!(Some(alloc), "Transaction::new"),
+            txn_security_ctx_off: None,
+        }))?)
+    }
+
+    /// Determines if the transaction is stacked on top of the given transaction.
+    pub(crate) fn is_stacked_on(&self, onext: &Option<DArc<Self>>) -> bool {
+        match (&self.stack_next, onext) {
+            (None, None) => true,
+            (Some(stack_next), Some(next)) => Arc::ptr_eq(stack_next, next),
+            _ => false,
+        }
+    }
+
+    /// Returns a pointer to the next transaction on the transaction stack, if there is one.
+    pub(crate) fn clone_next(&self) -> Option<DArc<Self>> {
+        Some(self.stack_next.as_ref()?.clone())
+    }
+
+    /// Searches in the transaction stack for a thread that belongs to the target process. This is
+    /// useful when finding a target for a new transaction: if the node belongs to a process that
+    /// is already part of the transaction stack, we reuse the thread.
+    fn find_target_thread(&self) -> Option<Arc<Thread>> {
+        let mut it = &self.stack_next;
+        while let Some(transaction) = it {
+            if Arc::ptr_eq(&transaction.from.process, &self.to) {
+                return Some(transaction.from.clone());
+            }
+            it = &transaction.stack_next;
+        }
+        None
+    }
+
+    /// Searches in the transaction stack for a transaction originating at the given thread.
+    pub(crate) fn find_from(&self, thread: &Thread) -> Option<DArc<Transaction>> {
+        let mut it = &self.stack_next;
+        while let Some(transaction) = it {
+            if core::ptr::eq(thread, transaction.from.as_ref()) {
+                return Some(transaction.clone());
+            }
+
+            it = &transaction.stack_next;
+        }
+        None
+    }
+
+    /// Submits the transaction to a work queue. Uses a thread if there is one in the transaction
+    /// stack, otherwise uses the destination process.
+    ///
+    /// Not used for replies.
     pub(crate) fn submit(self: DLArc<Self>) -> BinderResult {
         let process = self.to.clone();
         let mut process_inner = process.inner.lock();
-        match process_inner.push_work(self) {
+
+        let res = if let Some(thread) = self.find_target_thread() {
+            match thread.push_work(self) {
+                PushWorkRes::Ok => Ok(()),
+                PushWorkRes::FailedDead(me) => Err((BinderError::new_dead(), me)),
+            }
+        } else {
+            process_inner.push_work(self)
+        };
+        drop(process_inner);
+
+        match res {
             Ok(()) => Ok(()),
             Err((err, work)) => {
                 // Drop work after releasing process lock.
-                drop(process_inner);
                 drop(work);
                 Err(err)
             }
@@ -101,11 +190,14 @@ pub(crate) fn submit(self: DLArc<Self>) -> BinderResult {
 }
 
 impl DeliverToRead for Transaction {
-    fn do_work(
-        self: DArc<Self>,
-        _thread: &Thread,
-        writer: &mut UserSlicePtrWriter,
-    ) -> Result<bool> {
+    fn do_work(self: DArc<Self>, thread: &Thread, writer: &mut UserSlicePtrWriter) -> Result<bool> {
+        let send_failed_reply = ScopeGuard::new(|| {
+            if self.target_node.is_some() && self.flags & TF_ONE_WAY == 0 {
+                let reply = Either::Right(BR_FAILED_REPLY);
+                self.from.deliver_reply(reply, &self);
+            }
+        });
+
         let mut tr_sec = BinderTransactionDataSecctx::default();
         let tr = tr_sec.tr_data();
         if let Some(target_node) = &self.target_node {
@@ -144,17 +236,33 @@ fn do_work(
             writer.write(&*tr)?;
         }
 
+        // Dismiss the completion of transaction with a failure. No failure paths are allowed from
+        // here on out.
+        send_failed_reply.dismiss();
+
         // It is now the user's responsibility to clear the allocation.
         let alloc = self.allocation.lock().take();
         if let Some(alloc) = alloc {
             alloc.keep_alive();
         }
 
+        // When this is not a reply and not a oneway transaction, update `current_transaction`. If
+        // it's a reply, `current_transaction` has already been updated appropriately.
+        if self.target_node.is_some() && tr_sec.transaction_data.flags & TF_ONE_WAY == 0 {
+            thread.set_current_transaction(self);
+        }
+
         Ok(false)
     }
 
     fn cancel(self: DArc<Self>) {
         drop(self.allocation.lock().take());
+
+        // If this is not a reply or oneway transaction, then send a dead reply.
+        if self.target_node.is_some() && self.flags & TF_ONE_WAY == 0 {
+            let reply = Either::Right(BR_DEAD_REPLY);
+            self.from.deliver_reply(reply, &self);
+        }
     }
 
     fn should_sync_wakeup(&self) -> bool {

-- 
2.42.0.820.g83a721a137-goog