Date: Tue, 16 Mar 2021 11:55:47 -0700
From: Davidlohr Bueso
To: Waiman Long
Cc: Peter Zijlstra, Ingo Molnar, Will Deacon, Boqun Feng, "Paul E. McKenney", linux-kernel@vger.kernel.org, Juri Lelli
Subject: Re: [PATCH 1/4] locking/ww_mutex: Simplify use_ww_ctx & ww_ctx handling
Message-ID: <20210316185547.4mu6zj2bwjjs2c62@offworld>
References: <20210316153119.13802-1-longman@redhat.com> <20210316153119.13802-2-longman@redhat.com>
In-Reply-To: <20210316153119.13802-2-longman@redhat.com>

On Tue, 16 Mar 2021, Waiman Long wrote:

>The use_ww_ctx flag is passed to mutex_optimistic_spin(), but the
>function doesn't use it. The frequent use of the (use_ww_ctx && ww_ctx)
>combination is repetitive.

I always found that very fugly.

>In fact, ww_ctx should not be used at all if !use_ww_ctx.
>Simplify the ww_mutex code by dropping use_ww_ctx from mutex_optimistic_spin()
>and clearing ww_ctx if !use_ww_ctx. In this way, we can replace (use_ww_ctx &&
>ww_ctx) by just (ww_ctx).
>
>Signed-off-by: Waiman Long

Acked-by: Davidlohr Bueso

>---
> kernel/locking/mutex.c | 25 ++++++++++++++-----------
> 1 file changed, 14 insertions(+), 11 deletions(-)
>
>diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
>index adb935090768..622ebdfcd083 100644
>--- a/kernel/locking/mutex.c
>+++ b/kernel/locking/mutex.c
>@@ -626,7 +626,7 @@ static inline int mutex_can_spin_on_owner(struct mutex *lock)
>  */
> static __always_inline bool
> mutex_optimistic_spin(struct mutex *lock, struct ww_acquire_ctx *ww_ctx,
>-		      const bool use_ww_ctx, struct mutex_waiter *waiter)
>+		      struct mutex_waiter *waiter)
> {
> 	if (!waiter) {
> 		/*
>@@ -702,7 +702,7 @@ mutex_optimistic_spin(struct mutex *lock, struct ww_acquire_ctx *ww_ctx,
> #else
> static __always_inline bool
> mutex_optimistic_spin(struct mutex *lock, struct ww_acquire_ctx *ww_ctx,
>-		      const bool use_ww_ctx, struct mutex_waiter *waiter)
>+		      struct mutex_waiter *waiter)
> {
> 	return false;
> }
>@@ -922,6 +922,9 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
> 	struct ww_mutex *ww;
> 	int ret;
>
>+	if (!use_ww_ctx)
>+		ww_ctx = NULL;
>+
> 	might_sleep();
>
> #ifdef CONFIG_DEBUG_MUTEXES
>@@ -929,7 +932,7 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
> #endif
>
> 	ww = container_of(lock, struct ww_mutex, base);
>-	if (use_ww_ctx && ww_ctx) {
>+	if (ww_ctx) {
> 		if (unlikely(ww_ctx == READ_ONCE(ww->ctx)))
> 			return -EALREADY;
>
>@@ -946,10 +949,10 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
> 	mutex_acquire_nest(&lock->dep_map, subclass, 0, nest_lock, ip);
>
> 	if (__mutex_trylock(lock) ||
>-	    mutex_optimistic_spin(lock, ww_ctx, use_ww_ctx, NULL)) {
>+	    mutex_optimistic_spin(lock, ww_ctx, NULL)) {
> 		/* got the lock, yay! */
> 		lock_acquired(&lock->dep_map, ip);
>-		if (use_ww_ctx && ww_ctx)
>+		if (ww_ctx)
> 			ww_mutex_set_context_fastpath(ww, ww_ctx);
> 		preempt_enable();
> 		return 0;
>@@ -960,7 +963,7 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
> 	 * After waiting to acquire the wait_lock, try again.
> 	 */
> 	if (__mutex_trylock(lock)) {
>-		if (use_ww_ctx && ww_ctx)
>+		if (ww_ctx)
> 			__ww_mutex_check_waiters(lock, ww_ctx);
>
> 		goto skip_wait;
>@@ -1013,7 +1016,7 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
> 			goto err;
> 		}
>
>-		if (use_ww_ctx && ww_ctx) {
>+		if (ww_ctx) {
> 			ret = __ww_mutex_check_kill(lock, &waiter, ww_ctx);
> 			if (ret)
> 				goto err;
>@@ -1026,7 +1029,7 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
> 		 * ww_mutex needs to always recheck its position since its waiter
> 		 * list is not FIFO ordered.
> 		 */
>-		if ((use_ww_ctx && ww_ctx) || !first) {
>+		if (ww_ctx || !first) {
> 			first = __mutex_waiter_is_first(lock, &waiter);
> 			if (first)
> 				__mutex_set_flag(lock, MUTEX_FLAG_HANDOFF);
>@@ -1039,7 +1042,7 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
> 		 * or we must see its unlock and acquire.
> 		 */
> 		if (__mutex_trylock(lock) ||
>-		    (first && mutex_optimistic_spin(lock, ww_ctx, use_ww_ctx, &waiter)))
>+		    (first && mutex_optimistic_spin(lock, ww_ctx, &waiter)))
> 			break;
>
> 		spin_lock(&lock->wait_lock);
>@@ -1048,7 +1051,7 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
> acquired:
> 	__set_current_state(TASK_RUNNING);
>
>-	if (use_ww_ctx && ww_ctx) {
>+	if (ww_ctx) {
> 		/*
> 		 * Wound-Wait; we stole the lock (!first_waiter), check the
> 		 * waiters as anyone might want to wound us.
>@@ -1068,7 +1071,7 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
> 	/* got the lock - cleanup and rejoice! */
> 	lock_acquired(&lock->dep_map, ip);
>
>-	if (use_ww_ctx && ww_ctx)
>+	if (ww_ctx)
> 		ww_mutex_lock_acquired(ww, ww_ctx);
>
> 	spin_unlock(&lock->wait_lock);
>--
>2.18.1
>