From: Sasha Levin
To: linux-kernel@vger.kernel.org, stable@vger.kernel.org
Cc: Waiman Long, Ingo Molnar, Davidlohr Bueso, Sasha Levin
Subject: [PATCH AUTOSEL 5.11 34/44] locking/ww_mutex: Simplify use_ww_ctx & ww_ctx handling
Date: Thu, 25 Mar 2021 07:24:49 -0400
Message-Id: <20210325112459.1926846-34-sashal@kernel.org>
X-Mailer: git-send-email 2.30.1
In-Reply-To:
 <20210325112459.1926846-1-sashal@kernel.org>
References: <20210325112459.1926846-1-sashal@kernel.org>
MIME-Version: 1.0
X-stable: review
X-Patchwork-Hint: Ignore
Content-Transfer-Encoding: 8bit
Precedence: bulk
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

From: Waiman Long

[ Upstream commit 5de2055d31ea88fd9ae9709ac95c372a505a60fa ]

The use_ww_ctx flag is passed to mutex_optimistic_spin(), but the
function doesn't use it. The frequent use of the (use_ww_ctx && ww_ctx)
combination is repetitive. In fact, ww_ctx should not be used at all if
!use_ww_ctx.

Simplify ww_mutex code by dropping use_ww_ctx from mutex_optimistic_spin()
and clearing ww_ctx if !use_ww_ctx. In this way, we can replace
(use_ww_ctx && ww_ctx) by just (ww_ctx).

Signed-off-by: Waiman Long
Signed-off-by: Ingo Molnar
Acked-by: Davidlohr Bueso
Link: https://lore.kernel.org/r/20210316153119.13802-2-longman@redhat.com
Signed-off-by: Sasha Levin
---
 kernel/locking/mutex.c | 25 ++++++++++++++-----------
 1 file changed, 14 insertions(+), 11 deletions(-)

diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index 5352ce50a97e..2c25b830203c 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -636,7 +636,7 @@ static inline int mutex_can_spin_on_owner(struct mutex *lock)
  */
 static __always_inline bool
 mutex_optimistic_spin(struct mutex *lock, struct ww_acquire_ctx *ww_ctx,
-		      const bool use_ww_ctx, struct mutex_waiter *waiter)
+		      struct mutex_waiter *waiter)
 {
 	if (!waiter) {
 		/*
@@ -712,7 +712,7 @@ mutex_optimistic_spin(struct mutex *lock, struct ww_acquire_ctx *ww_ctx,
 #else
 static __always_inline bool
 mutex_optimistic_spin(struct mutex *lock, struct ww_acquire_ctx *ww_ctx,
-		      const bool use_ww_ctx, struct mutex_waiter *waiter)
+		      struct mutex_waiter *waiter)
 {
 	return false;
 }
@@ -932,6 +932,9 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
 	struct ww_mutex *ww;
 	int ret;
 
+	if (!use_ww_ctx)
+		ww_ctx = NULL;
+
 	might_sleep();
 
 #ifdef CONFIG_DEBUG_MUTEXES
@@ -939,7 +942,7 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
 #endif
 
 	ww = container_of(lock, struct ww_mutex, base);
-	if (use_ww_ctx && ww_ctx) {
+	if (ww_ctx) {
 		if (unlikely(ww_ctx == READ_ONCE(ww->ctx)))
 			return -EALREADY;
 
@@ -956,10 +959,10 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
 	mutex_acquire_nest(&lock->dep_map, subclass, 0, nest_lock, ip);
 
 	if (__mutex_trylock(lock) ||
-	    mutex_optimistic_spin(lock, ww_ctx, use_ww_ctx, NULL)) {
+	    mutex_optimistic_spin(lock, ww_ctx, NULL)) {
 		/* got the lock, yay! */
 		lock_acquired(&lock->dep_map, ip);
-		if (use_ww_ctx && ww_ctx)
+		if (ww_ctx)
 			ww_mutex_set_context_fastpath(ww, ww_ctx);
 		preempt_enable();
 		return 0;
@@ -970,7 +973,7 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
 	 * After waiting to acquire the wait_lock, try again.
 	 */
 	if (__mutex_trylock(lock)) {
-		if (use_ww_ctx && ww_ctx)
+		if (ww_ctx)
 			__ww_mutex_check_waiters(lock, ww_ctx);
 
 		goto skip_wait;
@@ -1023,7 +1026,7 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
 			goto err;
 		}
 
-		if (use_ww_ctx && ww_ctx) {
+		if (ww_ctx) {
 			ret = __ww_mutex_check_kill(lock, &waiter, ww_ctx);
 			if (ret)
 				goto err;
@@ -1036,7 +1039,7 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
 		 * ww_mutex needs to always recheck its position since its waiter
 		 * list is not FIFO ordered.
 		 */
-		if ((use_ww_ctx && ww_ctx) || !first) {
+		if (ww_ctx || !first) {
 			first = __mutex_waiter_is_first(lock, &waiter);
 			if (first)
 				__mutex_set_flag(lock, MUTEX_FLAG_HANDOFF);
@@ -1049,7 +1052,7 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
 		 * or we must see its unlock and acquire.
 		 */
 		if (__mutex_trylock(lock) ||
-		    (first && mutex_optimistic_spin(lock, ww_ctx, use_ww_ctx, &waiter)))
+		    (first && mutex_optimistic_spin(lock, ww_ctx, &waiter)))
 			break;
 
 		spin_lock(&lock->wait_lock);
@@ -1058,7 +1061,7 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
 acquired:
 	__set_current_state(TASK_RUNNING);
 
-	if (use_ww_ctx && ww_ctx) {
+	if (ww_ctx) {
 		/*
 		 * Wound-Wait; we stole the lock (!first_waiter), check the
 		 * waiters as anyone might want to wound us.
@@ -1078,7 +1081,7 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
 	/* got the lock - cleanup and rejoice! */
 	lock_acquired(&lock->dep_map, ip);
 
-	if (use_ww_ctx && ww_ctx)
+	if (ww_ctx)
 		ww_mutex_lock_acquired(ww, ww_ctx);
 
 	spin_unlock(&lock->wait_lock);
-- 
2.30.1