Date: Fri, 9 Feb 2024 16:23:10 -0800
In-Reply-To: <20240210002328.4126422-1-jstultz@google.com>
Mime-Version: 1.0
References: <20240210002328.4126422-1-jstultz@google.com>
X-Mailer: git-send-email 2.43.0.687.g38aa6559b0-goog
Message-ID: <20240210002328.4126422-2-jstultz@google.com>
Subject: [PATCH v8 1/7] locking/mutex: Remove wakeups from under mutex::wait_lock
From: John Stultz
To: LKML
Cc: John Stultz, Joel Fernandes, Qais Yousef, Ingo Molnar, Peter Zijlstra,
    Juri Lelli, Vincent Guittot, Dietmar Eggemann, Valentin Schneider,
    Steven Rostedt, Ben Segall, Zimuzo Ezeozue, Youssef Esmat, Mel Gorman,
    Daniel Bristot de Oliveira, Will Deacon, Waiman Long, Boqun Feng,
    "Paul E. McKenney", Metin Kaya, Xuewen Yan, K Prateek Nayak,
    Thomas Gleixner, kernel-team@android.com
Content-Type: text/plain; charset="UTF-8"

In preparation to nest mutex::wait_lock under rq::lock, we need to remove
wakeups from under it.

Cc: Joel Fernandes
Cc: Qais Yousef
Cc: Ingo Molnar
Cc: Peter Zijlstra
Cc: Juri Lelli
Cc: Vincent Guittot
Cc: Dietmar Eggemann
Cc: Valentin Schneider
Cc: Steven Rostedt
Cc: Ben Segall
Cc: Zimuzo Ezeozue
Cc: Youssef Esmat
Cc: Mel Gorman
Cc: Daniel Bristot de Oliveira
Cc: Will Deacon
Cc: Waiman Long
Cc: Boqun Feng
Cc: "Paul E. McKenney"
McKenney" Cc: Metin Kaya Cc: Xuewen Yan Cc: K Prateek Nayak Cc: Thomas Gleixner Cc: kernel-team@android.com Signed-off-by: Peter Zijlstra (Intel) [Heavily changed after 55f036ca7e74 ("locking: WW mutex cleanup") and 08295b3b5bee ("locking: Implement an algorithm choice for Wound-Wait mutexes")] Signed-off-by: Juri Lelli [jstultz: rebased to mainline, added extra wake_up_q & init to avoid hangs, similar to Connor's rework of this patch] Signed-off-by: John Stultz --- v5: * Reverted back to an earlier version of this patch to undo the change that kept the wake_q in the ctx structure, as that broke the rule that the wake_q must always be on the stack, as its not safe for concurrency. v6: * Made tweaks suggested by Waiman Long v7: * Fixups to pass wake_qs down for PREEMPT_RT logic --- kernel/locking/mutex.c | 17 +++++++++++++---- kernel/locking/rtmutex.c | 26 +++++++++++++++++--------- kernel/locking/rwbase_rt.c | 4 +++- kernel/locking/rwsem.c | 4 ++-- kernel/locking/spinlock_rt.c | 3 ++- kernel/locking/ww_mutex.h | 29 ++++++++++++++++++----------- 6 files changed, 55 insertions(+), 28 deletions(-) diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c index cbae8c0b89ab..980ce630232c 100644 --- a/kernel/locking/mutex.c +++ b/kernel/locking/mutex.c @@ -575,6 +575,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas struct lockdep_map *nest_lock, unsigned long ip, struct ww_acquire_ctx *ww_ctx, const bool use_ww_ctx) { + DEFINE_WAKE_Q(wake_q); struct mutex_waiter waiter; struct ww_mutex *ww; int ret; @@ -625,7 +626,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas */ if (__mutex_trylock(lock)) { if (ww_ctx) - __ww_mutex_check_waiters(lock, ww_ctx); + __ww_mutex_check_waiters(lock, ww_ctx, &wake_q); goto skip_wait; } @@ -645,7 +646,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas * Add in stamp order, waking up waiters that must kill * themselves. 
---
 kernel/locking/mutex.c       | 17 +++++++++++++----
 kernel/locking/rtmutex.c     | 26 +++++++++++++++++---------
 kernel/locking/rwbase_rt.c   |  4 +++-
 kernel/locking/rwsem.c       |  4 ++--
 kernel/locking/spinlock_rt.c |  3 ++-
 kernel/locking/ww_mutex.h    | 29 ++++++++++++++++++-----------
 6 files changed, 55 insertions(+), 28 deletions(-)

diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index cbae8c0b89ab..980ce630232c 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -575,6 +575,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
                     struct lockdep_map *nest_lock, unsigned long ip,
                     struct ww_acquire_ctx *ww_ctx, const bool use_ww_ctx)
 {
+        DEFINE_WAKE_Q(wake_q);
         struct mutex_waiter waiter;
         struct ww_mutex *ww;
         int ret;
@@ -625,7 +626,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
          */
         if (__mutex_trylock(lock)) {
                 if (ww_ctx)
-                        __ww_mutex_check_waiters(lock, ww_ctx);
+                        __ww_mutex_check_waiters(lock, ww_ctx, &wake_q);

                 goto skip_wait;
         }
@@ -645,7 +646,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
                  * Add in stamp order, waking up waiters that must kill
                  * themselves.
                  */
-                ret = __ww_mutex_add_waiter(&waiter, lock, ww_ctx);
+                ret = __ww_mutex_add_waiter(&waiter, lock, ww_ctx, &wake_q);
                 if (ret)
                         goto err_early_kill;
         }
@@ -681,6 +682,11 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
                 }

                 raw_spin_unlock(&lock->wait_lock);
+                /* Make sure we do wakeups before calling schedule */
+                if (!wake_q_empty(&wake_q)) {
+                        wake_up_q(&wake_q);
+                        wake_q_init(&wake_q);
+                }
                 schedule_preempt_disabled();

                 first = __mutex_waiter_is_first(lock, &waiter);
@@ -714,7 +720,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
                  */
                 if (!ww_ctx->is_wait_die &&
                     !__mutex_waiter_is_first(lock, &waiter))
-                        __ww_mutex_check_waiters(lock, ww_ctx);
+                        __ww_mutex_check_waiters(lock, ww_ctx, &wake_q);
         }

         __mutex_remove_waiter(lock, &waiter);
@@ -730,6 +736,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
                 ww_mutex_lock_acquired(ww, ww_ctx);

         raw_spin_unlock(&lock->wait_lock);
+        wake_up_q(&wake_q);
         preempt_enable();
         return 0;

@@ -741,6 +748,7 @@ __mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclas
         raw_spin_unlock(&lock->wait_lock);
         debug_mutex_free_waiter(&waiter);
         mutex_release(&lock->dep_map, ip);
+        wake_up_q(&wake_q);
         preempt_enable();
         return ret;
 }
@@ -934,6 +942,7 @@ static noinline void __sched __mutex_unlock_slowpath(struct mutex *lock, unsigne
                 }
         }

+        preempt_disable();
         raw_spin_lock(&lock->wait_lock);
         debug_mutex_unlock(lock);
         if (!list_empty(&lock->wait_list)) {
@@ -952,8 +961,8 @@ static noinline void __sched __mutex_unlock_slowpath(struct mutex *lock, unsigne
                 __mutex_handoff(lock, next);

         raw_spin_unlock(&lock->wait_lock);
-
         wake_up_q(&wake_q);
+        preempt_enable();
 }

 #ifndef CONFIG_DEBUG_LOCK_ALLOC
diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c
index 4a10e8c16fd2..eaac8b196a69 100644
--- a/kernel/locking/rtmutex.c
+++ b/kernel/locking/rtmutex.c
@@ -34,13 +34,15 @@

 static inline int __ww_mutex_add_waiter(struct rt_mutex_waiter *waiter,
                                         struct rt_mutex *lock,
-                                        struct ww_acquire_ctx *ww_ctx)
+                                        struct ww_acquire_ctx *ww_ctx,
+                                        struct wake_q_head *wake_q)
 {
         return 0;
 }

 static inline void __ww_mutex_check_waiters(struct rt_mutex *lock,
-                                            struct ww_acquire_ctx *ww_ctx)
+                                            struct ww_acquire_ctx *ww_ctx,
+                                            struct wake_q_head *wake_q)
 {
 }

@@ -1206,6 +1208,7 @@ static int __sched task_blocks_on_rt_mutex(struct rt_mutex_base *lock,
         struct rt_mutex_waiter *top_waiter = waiter;
         struct rt_mutex_base *next_lock;
         int chain_walk = 0, res;
+        DEFINE_WAKE_Q(wake_q);

         lockdep_assert_held(&lock->wait_lock);

@@ -1244,7 +1247,8 @@ static int __sched task_blocks_on_rt_mutex(struct rt_mutex_base *lock,

                 /* Check whether the waiter should back out immediately */
                 rtm = container_of(lock, struct rt_mutex, rtmutex);
-                res = __ww_mutex_add_waiter(waiter, rtm, ww_ctx);
+                res = __ww_mutex_add_waiter(waiter, rtm, ww_ctx, &wake_q);
+                wake_up_q(&wake_q);
                 if (res) {
                         raw_spin_lock(&task->pi_lock);
                         rt_mutex_dequeue(lock, waiter);
@@ -1677,7 +1681,8 @@ static int __sched __rt_mutex_slowlock(struct rt_mutex_base *lock,
                                        struct ww_acquire_ctx *ww_ctx,
                                        unsigned int state,
                                        enum rtmutex_chainwalk chwalk,
-                                       struct rt_mutex_waiter *waiter)
+                                       struct rt_mutex_waiter *waiter,
+                                       struct wake_q_head *wake_q)
 {
         struct rt_mutex *rtm = container_of(lock, struct rt_mutex, rtmutex);
         struct ww_mutex *ww = ww_container_of(rtm);
@@ -1688,7 +1693,7 @@ static int __sched __rt_mutex_slowlock(struct rt_mutex_base *lock,
         /* Try to acquire the lock again: */
         if (try_to_take_rt_mutex(lock, current, NULL)) {
                 if (build_ww_mutex() && ww_ctx) {
-                        __ww_mutex_check_waiters(rtm, ww_ctx);
+                        __ww_mutex_check_waiters(rtm, ww_ctx, wake_q);
                         ww_mutex_lock_acquired(ww, ww_ctx);
                 }
                 return 0;
@@ -1706,7 +1711,7 @@ static int __sched __rt_mutex_slowlock(struct rt_mutex_base *lock,
                 /* acquired the lock */
                 if (build_ww_mutex() && ww_ctx) {
                         if (!ww_ctx->is_wait_die)
-                                __ww_mutex_check_waiters(rtm, ww_ctx);
+                                __ww_mutex_check_waiters(rtm, ww_ctx, wake_q);
                         ww_mutex_lock_acquired(ww, ww_ctx);
                 }
         } else {
@@ -1728,7 +1733,8 @@ static int __sched __rt_mutex_slowlock(struct rt_mutex_base *lock,

 static inline int __rt_mutex_slowlock_locked(struct rt_mutex_base *lock,
                                              struct ww_acquire_ctx *ww_ctx,
-                                             unsigned int state)
+                                             unsigned int state,
+                                             struct wake_q_head *wake_q)
 {
         struct rt_mutex_waiter waiter;
         int ret;
@@ -1737,7 +1743,7 @@ static inline int __rt_mutex_slowlock_locked(struct rt_mutex_base *lock,
         waiter.ww_ctx = ww_ctx;

         ret = __rt_mutex_slowlock(lock, ww_ctx, state, RT_MUTEX_MIN_CHAINWALK,
-                                  &waiter);
+                                  &waiter, wake_q);

         debug_rt_mutex_free_waiter(&waiter);
         return ret;
@@ -1753,6 +1759,7 @@ static int __sched rt_mutex_slowlock(struct rt_mutex_base *lock,
                                      struct ww_acquire_ctx *ww_ctx,
                                      unsigned int state)
 {
+        DEFINE_WAKE_Q(wake_q);
         unsigned long flags;
         int ret;

@@ -1774,8 +1781,9 @@ static int __sched rt_mutex_slowlock(struct rt_mutex_base *lock,
          * irqsave/restore variants.
          */
         raw_spin_lock_irqsave(&lock->wait_lock, flags);
-        ret = __rt_mutex_slowlock_locked(lock, ww_ctx, state);
+        ret = __rt_mutex_slowlock_locked(lock, ww_ctx, state, &wake_q);
         raw_spin_unlock_irqrestore(&lock->wait_lock, flags);
+        wake_up_q(&wake_q);
         rt_mutex_post_schedule();

         return ret;
diff --git a/kernel/locking/rwbase_rt.c b/kernel/locking/rwbase_rt.c
index 34a59569db6b..e9d2f38b70f3 100644
--- a/kernel/locking/rwbase_rt.c
+++ b/kernel/locking/rwbase_rt.c
@@ -69,6 +69,7 @@ static int __sched __rwbase_read_lock(struct rwbase_rt *rwb,
                                       unsigned int state)
 {
         struct rt_mutex_base *rtm = &rwb->rtmutex;
+        DEFINE_WAKE_Q(wake_q);
         int ret;

         rwbase_pre_schedule();
@@ -110,7 +111,7 @@ static int __sched __rwbase_read_lock(struct rwbase_rt *rwb,
          * For rwlocks this returns 0 unconditionally, so the below
          * !ret conditionals are optimized out.
          */
-        ret = rwbase_rtmutex_slowlock_locked(rtm, state);
+        ret = rwbase_rtmutex_slowlock_locked(rtm, state, &wake_q);

         /*
          * On success the rtmutex is held, so there can't be a writer
@@ -122,6 +123,7 @@ static int __sched __rwbase_read_lock(struct rwbase_rt *rwb,
         if (!ret)
                 atomic_inc(&rwb->readers);
         raw_spin_unlock_irq(&rtm->wait_lock);
+        wake_up_q(&wake_q);
         if (!ret)
                 rwbase_rtmutex_unlock(rtm);

diff --git a/kernel/locking/rwsem.c b/kernel/locking/rwsem.c
index 2340b6d90ec6..74ebb2915d63 100644
--- a/kernel/locking/rwsem.c
+++ b/kernel/locking/rwsem.c
@@ -1415,8 +1415,8 @@ static inline void __downgrade_write(struct rw_semaphore *sem)
 #define rwbase_rtmutex_lock_state(rtm, state)           \
         __rt_mutex_lock(rtm, state)

-#define rwbase_rtmutex_slowlock_locked(rtm, state)      \
-        __rt_mutex_slowlock_locked(rtm, NULL, state)
+#define rwbase_rtmutex_slowlock_locked(rtm, state, wq)  \
+        __rt_mutex_slowlock_locked(rtm, NULL, state, wq)

 #define rwbase_rtmutex_unlock(rtm)                      \
         __rt_mutex_unlock(rtm)
diff --git a/kernel/locking/spinlock_rt.c b/kernel/locking/spinlock_rt.c
index 38e292454fcc..fb1810a14c9d 100644
--- a/kernel/locking/spinlock_rt.c
+++ b/kernel/locking/spinlock_rt.c
@@ -162,7 +162,8 @@ rwbase_rtmutex_lock_state(struct rt_mutex_base *rtm, unsigned int state)
 }

 static __always_inline int
-rwbase_rtmutex_slowlock_locked(struct rt_mutex_base *rtm, unsigned int state)
+rwbase_rtmutex_slowlock_locked(struct rt_mutex_base *rtm, unsigned int state,
+                               struct wake_q_head *wake_q)
 {
         rtlock_slowlock_locked(rtm);
         return 0;
diff --git a/kernel/locking/ww_mutex.h b/kernel/locking/ww_mutex.h
index 3ad2cc4823e5..7189c6631d90 100644
--- a/kernel/locking/ww_mutex.h
+++ b/kernel/locking/ww_mutex.h
@@ -275,7 +275,7 @@ __ww_ctx_less(struct ww_acquire_ctx *a, struct ww_acquire_ctx *b)
  */
 static bool
 __ww_mutex_die(struct MUTEX *lock, struct MUTEX_WAITER *waiter,
-               struct ww_acquire_ctx *ww_ctx)
+               struct ww_acquire_ctx *ww_ctx, struct wake_q_head *wake_q)
 {
         if (!ww_ctx->is_wait_die)
                 return false;
@@ -284,7 +284,7 @@ __ww_mutex_die(struct MUTEX *lock, struct MUTEX_WAITER *waiter,
 #ifndef WW_RT
                 debug_mutex_wake_waiter(lock, waiter);
 #endif
-                wake_up_process(waiter->task);
+                wake_q_add(wake_q, waiter->task);
         }

         return true;
@@ -299,7 +299,8 @@ __ww_mutex_die(struct MUTEX *lock, struct MUTEX_WAITER *waiter,
  */
 static bool __ww_mutex_wound(struct MUTEX *lock,
                              struct ww_acquire_ctx *ww_ctx,
-                             struct ww_acquire_ctx *hold_ctx)
+                             struct ww_acquire_ctx *hold_ctx,
+                             struct wake_q_head *wake_q)
 {
         struct task_struct *owner = __ww_mutex_owner(lock);

@@ -331,7 +332,7 @@ static bool __ww_mutex_wound(struct MUTEX *lock,
                  * wakeup pending to re-read the wounded state.
                  */
                 if (owner != current)
-                        wake_up_process(owner);
+                        wake_q_add(wake_q, owner);

                 return true;
         }
@@ -352,7 +353,8 @@ static bool __ww_mutex_wound(struct MUTEX *lock,
  * The current task must not be on the wait list.
  */
 static void
-__ww_mutex_check_waiters(struct MUTEX *lock, struct ww_acquire_ctx *ww_ctx)
+__ww_mutex_check_waiters(struct MUTEX *lock, struct ww_acquire_ctx *ww_ctx,
+                         struct wake_q_head *wake_q)
 {
         struct MUTEX_WAITER *cur;

@@ -364,8 +366,8 @@ __ww_mutex_check_waiters(struct MUTEX *lock, struct ww_acquire_ctx *ww_ctx)
                 if (!cur->ww_ctx)
                         continue;

-                if (__ww_mutex_die(lock, cur, ww_ctx) ||
-                    __ww_mutex_wound(lock, cur->ww_ctx, ww_ctx))
+                if (__ww_mutex_die(lock, cur, ww_ctx, wake_q) ||
+                    __ww_mutex_wound(lock, cur->ww_ctx, ww_ctx, wake_q))
                         break;
         }
 }
@@ -377,6 +379,8 @@ __ww_mutex_check_waiters(struct MUTEX *lock, struct ww_acquire_ctx *ww_ctx)
 static __always_inline void
 ww_mutex_set_context_fastpath(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
 {
+        DEFINE_WAKE_Q(wake_q);
+
         ww_mutex_lock_acquired(lock, ctx);

         /*
@@ -405,8 +409,10 @@ ww_mutex_set_context_fastpath(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
          * die or wound us.
          */
         lock_wait_lock(&lock->base);
-        __ww_mutex_check_waiters(&lock->base, ctx);
+        __ww_mutex_check_waiters(&lock->base, ctx, &wake_q);
         unlock_wait_lock(&lock->base);
+
+        wake_up_q(&wake_q);
 }

 static __always_inline int
@@ -488,7 +494,8 @@ __ww_mutex_check_kill(struct MUTEX *lock, struct MUTEX_WAITER *waiter,
 static inline int
 __ww_mutex_add_waiter(struct MUTEX_WAITER *waiter,
                       struct MUTEX *lock,
-                      struct ww_acquire_ctx *ww_ctx)
+                      struct ww_acquire_ctx *ww_ctx,
+                      struct wake_q_head *wake_q)
 {
         struct MUTEX_WAITER *cur, *pos = NULL;
         bool is_wait_die;
@@ -532,7 +539,7 @@ __ww_mutex_add_waiter(struct MUTEX_WAITER *waiter,
                 pos = cur;

                 /* Wait-Die: ensure younger waiters die. */
-                __ww_mutex_die(lock, cur, ww_ctx);
+                __ww_mutex_die(lock, cur, ww_ctx, wake_q);
         }

         __ww_waiter_add(lock, waiter, pos);
@@ -550,7 +557,7 @@ __ww_mutex_add_waiter(struct MUTEX_WAITER *waiter,
                  * such that either we or the fastpath will wound @ww->ctx.
                  */
                 smp_mb();
-                __ww_mutex_wound(lock, ww_ctx, ww->ctx);
+                __ww_mutex_wound(lock, ww_ctx, ww->ctx, wake_q);
         }

         return 0;
--
2.43.0.687.g38aa6559b0-goog