Date: Tue, 19 Dec 2023 16:18:18 -0800 (PST)
In-Reply-To:
<20231220001856.3710363-1-jstultz@google.com>
References: <20231220001856.3710363-1-jstultz@google.com>
X-Mailer: git-send-email 2.43.0.472.g3155946c3a-goog
Message-ID: <20231220001856.3710363-8-jstultz@google.com>
X-Mailing-List: linux-kernel@vger.kernel.org
Subject: [PATCH v7 07/23] locking/mutex: Switch to mutex handoffs for CONFIG_SCHED_PROXY_EXEC
From: John Stultz
To: LKML
Cc: Peter Zijlstra, Joel Fernandes, Qais Yousef, Ingo Molnar, Juri Lelli,
    Vincent Guittot, Dietmar Eggemann, Valentin Schneider, Steven Rostedt,
    Ben Segall, Zimuzo Ezeozue, Youssef Esmat, Mel Gorman,
    Daniel Bristot de Oliveira, Will Deacon, Waiman Long, Boqun Feng,
    "Paul E. McKenney", Metin Kaya, Xuewen Yan, K Prateek Nayak,
    Thomas Gleixner, kernel-team@android.com, Valentin Schneider,
    "Connor O'Brien", John Stultz
Mime-Version: 1.0
Content-Type: text/plain; charset="UTF-8"

From: Peter Zijlstra

Since with SCHED_PROXY_EXEC we will want to hand off locks to the tasks
we are running on behalf of, switch to using mutex handoffs.

Cc: Joel Fernandes
Cc: Qais Yousef
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Juri Lelli
Cc: Vincent Guittot
Cc: Dietmar Eggemann
Cc: Valentin Schneider
Cc: Steven Rostedt
Cc: Ben Segall
Cc: Zimuzo Ezeozue
Cc: Youssef Esmat
Cc: Mel Gorman
Cc: Daniel Bristot de Oliveira
Cc: Will Deacon
Cc: Waiman Long
Cc: Boqun Feng
Cc: "Paul E. McKenney"
Cc: Metin Kaya
Cc: Xuewen Yan
Cc: K Prateek Nayak
Cc: Thomas Gleixner
Cc: kernel-team@android.com
Signed-off-by: Peter Zijlstra (Intel)
[rebased, added comments and changelog]
Signed-off-by: Juri Lelli
[Fixed rebase conflicts]
[squashed sched: Ensure blocked_on is always guarded by blocked_lock]
Signed-off-by: Valentin Schneider
[fix rebase conflicts, various fixes & tweaks commented inline]
[squashed sched: Use rq->curr vs rq->proxy checks]
Signed-off-by: Connor O'Brien
[jstultz: Split out only the very basic initial framework for proxy
 logic from a larger patch.]
Signed-off-by: John Stultz
---
v5:
* Split out from core proxy patch
v6:
* Rework to use sched_proxy_exec() instead of #ifdef CONFIG_PROXY_EXEC
v7:
* Avoid disabling optimistic spinning at compile time so booting with
  sched_proxy_exec=off matches prior performance
* Add comment in mutex-design.rst as suggested by Metin Kaya
---
 Documentation/locking/mutex-design.rst |  3 ++
 kernel/locking/mutex.c                 | 42 +++++++++++++++-----------
 2 files changed, 28 insertions(+), 17 deletions(-)

diff --git a/Documentation/locking/mutex-design.rst b/Documentation/locking/mutex-design.rst
index 78540cd7f54b..57a5cb03f409 100644
--- a/Documentation/locking/mutex-design.rst
+++ b/Documentation/locking/mutex-design.rst
@@ -61,6 +61,9 @@ taken, depending on the state of the lock:
     waiting to spin on mutex owner, only to go directly to slowpath upon
     obtaining the MCS lock.
 
+    NOTE: Optimistic spinning will be avoided when using proxy execution
+    (SCHED_PROXY_EXEC) as we want to hand the lock off to the task that was
+    boosting the current owner.
 
 (iii) slowpath: last resort, if the lock is still unable to be acquired,
       the task is added to the wait-queue and sleeps until woken up by the

diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index 6084470773f6..11dc5cb7a5a3 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -416,6 +416,9 @@ static __always_inline bool
 mutex_optimistic_spin(struct mutex *lock, struct ww_acquire_ctx *ww_ctx,
 		      struct mutex_waiter *waiter)
 {
+	if (sched_proxy_exec())
+		return false;
+
 	if (!waiter) {
 		/*
 		 * The purpose of the mutex_can_spin_on_owner() function is
@@ -914,26 +917,31 @@ static noinline void __sched __mutex_unlock_slowpath(struct mutex *lock, unsigned long ip)
 
 	mutex_release(&lock->dep_map, ip);
 
-	/*
-	 * Release the lock before (potentially) taking the spinlock such that
-	 * other contenders can get on with things ASAP.
-	 *
-	 * Except when HANDOFF, in that case we must not clear the owner field,
-	 * but instead set it to the top waiter.
-	 */
-	owner = atomic_long_read(&lock->owner);
-	for (;;) {
-		MUTEX_WARN_ON(__owner_task(owner) != current);
-		MUTEX_WARN_ON(owner & MUTEX_FLAG_PICKUP);
-
-		if (owner & MUTEX_FLAG_HANDOFF)
-			break;
+	if (sched_proxy_exec()) {
+		/* Always force HANDOFF for Proxy Exec for now. Revisit. */
+		owner = MUTEX_FLAG_HANDOFF;
+	} else {
+		/*
+		 * Release the lock before (potentially) taking the spinlock
+		 * such that other contenders can get on with things ASAP.
+		 *
+		 * Except when HANDOFF, in that case we must not clear the
+		 * owner field, but instead set it to the top waiter.
+		 */
+		owner = atomic_long_read(&lock->owner);
+		for (;;) {
+			MUTEX_WARN_ON(__owner_task(owner) != current);
+			MUTEX_WARN_ON(owner & MUTEX_FLAG_PICKUP);
 
-		if (atomic_long_try_cmpxchg_release(&lock->owner, &owner, __owner_flags(owner))) {
-			if (owner & MUTEX_FLAG_WAITERS)
+			if (owner & MUTEX_FLAG_HANDOFF)
 				break;
-			return;
+			if (atomic_long_try_cmpxchg_release(&lock->owner, &owner,
+							    __owner_flags(owner))) {
+				if (owner & MUTEX_FLAG_WAITERS)
+					break;
+				return;
+			}
 		}
 	}
-- 
2.43.0.472.g3155946c3a-goog