Date: Tue, 6 Dec 2016 16:25:37 +0100
From: Peter Zijlstra
To: Nicolai Hähnle
Cc: linux-kernel@vger.kernel.org, Nicolai Hähnle, Ingo Molnar,
    Maarten Lankhorst, Daniel Vetter, Chris Wilson,
    dri-devel@lists.freedesktop.org
Subject: Re: [PATCH v2 04/11] locking/ww_mutex: Set use_ww_ctx even when locking without a context
Message-ID: <20161206152537.GV3045@worktop.programming.kicks-ass.net>
In-Reply-To: <1480601214-26583-5-git-send-email-nhaehnle@gmail.com>
References: <1480601214-26583-1-git-send-email-nhaehnle@gmail.com>
    <1480601214-26583-5-git-send-email-nhaehnle@gmail.com>

On Thu, Dec 01, 2016 at 03:06:47PM +0100, Nicolai Hähnle wrote:
> @@ -640,10 +640,11 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
>         struct mutex_waiter waiter;
>         unsigned long flags;
>         bool first = false;
> -       struct ww_mutex *ww;
>         int ret;
>
> -       if (use_ww_ctx) {
> +       if (use_ww_ctx && ww_ctx) {
> +               struct ww_mutex *ww;
> +
>                 ww = container_of(lock, struct ww_mutex, base);
>                 if (unlikely(ww_ctx == READ_ONCE(ww->ctx)))
>                         return -EALREADY;

So I don't see the point of removing *ww from the function scope; we can
still compute that container_of() even when !ww_ctx, right? That would
save a ton of churn below, avoiding all those extra struct ww_mutex
declarations and container_of() casts.

(And note that the container_of() is a fancy NO-OP, because base is the
first member.)

> @@ -656,8 +657,12 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
>             mutex_optimistic_spin(lock, ww_ctx, use_ww_ctx, false)) {
>                 /* got the lock, yay! */
>                 lock_acquired(&lock->dep_map, ip);
> -               if (use_ww_ctx)
> +               if (use_ww_ctx && ww_ctx) {
> +                       struct ww_mutex *ww;
> +
> +                       ww = container_of(lock, struct ww_mutex, base);
>                         ww_mutex_set_context_fastpath(ww, ww_ctx);
> +               }
>                 preempt_enable();
>                 return 0;
>         }
> @@ -702,7 +707,7 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
>                         goto err;
>                 }
>
> -               if (use_ww_ctx && ww_ctx->acquired > 0) {
> +               if (use_ww_ctx && ww_ctx && ww_ctx->acquired > 0) {
>                         ret = __ww_mutex_lock_check_stamp(lock, ww_ctx);
>                         if (ret)
>                                 goto err;
> @@ -742,8 +747,12 @@ __mutex_lock_common(struct mutex *lock, long state, unsigned int subclass,
>         /* got the lock - cleanup and rejoice! */
>         lock_acquired(&lock->dep_map, ip);
>
> -       if (use_ww_ctx)
> +       if (use_ww_ctx && ww_ctx) {
> +               struct ww_mutex *ww;
> +
> +               ww = container_of(lock, struct ww_mutex, base);
>                 ww_mutex_set_context_slowpath(ww, ww_ctx);
> +       }
>
>         spin_unlock_mutex(&lock->wait_lock, flags);
>         preempt_enable();

All that then reverts to:

-       if (use_ww_ctx)
+       if (use_ww_ctx && ww_ctx)
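
IOW, keep the declaration at function scope and do the container_of()
unconditionally. A completely untested sketch of that shape (only the
ww-related lines are shown; "..." marks the unchanged bulk of the
function):

        {
                struct mutex_waiter waiter;
                unsigned long flags;
                bool first = false;
                /*
                 * container_of() subtracts offsetof(struct ww_mutex, base),
                 * which is 0, so computing this when !ww_ctx costs nothing;
                 * ww is only dereferenced under use_ww_ctx && ww_ctx.
                 */
                struct ww_mutex *ww = container_of(lock, struct ww_mutex, base);
                int ret;

                if (use_ww_ctx && ww_ctx) {
                        if (unlikely(ww_ctx == READ_ONCE(ww->ctx)))
                                return -EALREADY;
                }

                ...

                if (use_ww_ctx && ww_ctx)
                        ww_mutex_set_context_fastpath(ww, ww_ctx);

                ...

                if (use_ww_ctx && ww_ctx)
                        ww_mutex_set_context_slowpath(ww, ww_ctx);

                ...
        }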
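
And to see why that container_of() is a fancy NO-OP: base is the first
member, so offsetof() is 0 and the whole thing boils down to a pointer
cast. A standalone userspace illustration (local container_of() macro and
dummied-up types, not the kernel's):

        #include <stddef.h>
        #include <stdio.h>

        #define container_of(ptr, type, member) \
                ((type *)((char *)(ptr) - offsetof(type, member)))

        struct mutex { int state; };            /* dummy stand-in */

        struct ww_mutex {
                struct mutex base;              /* first member -> offset 0 */
                void *ctx;
        };

        int main(void)
        {
                struct ww_mutex w = { .base = { 0 }, .ctx = NULL };
                struct mutex *lock = &w.base;

                /* subtracts offsetof(struct ww_mutex, base) == 0,
                 * i.e. just a pointer cast */
                struct ww_mutex *ww = container_of(lock, struct ww_mutex, base);

                printf("offsetof(base) = %zu, &w = %p, ww = %p\n",
                       offsetof(struct ww_mutex, base), (void *)&w, (void *)ww);
                return 0;
        }

This prints offset 0 and the same address for &w and ww.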