Subject: Re: [PATCH v2 02/11] locking/ww_mutex: Re-check ww->ctx in the inner optimistic spin loop
From: Waiman Long
To: Peter Zijlstra, Nicolai Hähnle
Cc: linux-kernel@vger.kernel.org, Nicolai Hähnle, Ingo Molnar, Maarten Lankhorst, Daniel Vetter, Chris Wilson, dri-devel@lists.freedesktop.org
Organization: Red Hat
Date: Tue, 6 Dec 2016 11:03:28 -0500
In-Reply-To: <20161206150620.GT3045@worktop.programming.kicks-ass.net>

On 12/06/2016 10:06 AM, Peter Zijlstra wrote:
> On Thu, Dec 01, 2016 at 03:06:45PM +0100, Nicolai Hähnle wrote:
>> +++ b/kernel/locking/mutex.c
>> @@ -350,7 +350,8 @@ ww_mutex_set_context_slowpath(struct ww_mutex *lock,
>>  	 * access and not reliable.
>>  	 */
>>  static noinline
>> -bool mutex_spin_on_owner(struct mutex *lock, struct task_struct *owner)
>> +bool mutex_spin_on_owner(struct mutex *lock, struct task_struct *owner,
>> +			 bool use_ww_ctx, struct ww_acquire_ctx *ww_ctx)
>>  {
>>  	bool ret = true;
>>
>> @@ -373,6 +374,28 @@ bool mutex_spin_on_owner(struct mutex *lock, struct task_struct *owner)
>>  			break;
>>  		}
>>
>> +		if (use_ww_ctx && ww_ctx->acquired > 0) {
>> +			struct ww_mutex *ww;
>> +
>> +			ww = container_of(lock, struct ww_mutex, base);
>> +
>> +			/*
>> +			 * If ww->ctx is set the contents are undefined, only
>> +			 * by acquiring wait_lock there is a guarantee that
>> +			 * they are not invalid when reading.
>> +			 *
>> +			 * As such, when deadlock detection needs to be
>> +			 * performed the optimistic spinning cannot be done.
>> +			 *
>> +			 * Check this in every inner iteration because we may
>> +			 * be racing against another thread's ww_mutex_lock.
>> +			 */
>> +			if (READ_ONCE(ww->ctx)) {
>> +				ret = false;
>> +				break;
>> +			}
>> +		}
>> +
>>  		cpu_relax();
>>  	}
>>  	rcu_read_unlock();
> Aside from the valid question about mutex_can_spin_on_owner() there's
> another 'problem' here, mutex_spin_on_owner() is marked noinline, so all
> the use_ww_ctx stuff doesn't 'work' here.

The mutex_spin_on_owner() function was originally marked noinline because
it could be a major consumer of CPU cycles in a contended lock. Having it
shown separately in the perf output gives users a clearer picture of what
is consuming all the CPU cycles, so I would still like to keep it that way.

If you are concerned about the additional latency for non-ww_mutex calls,
one alternative could be:

diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index 0afa998..777338d 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -349,9 +349,9 @@ static __always_inline void ww_mutex_lock_acquired(struct ww
 	 * Look out! "owner" is an entirely speculative pointer
 	 * access and not reliable.
 	 */
-static noinline
-bool mutex_spin_on_owner(struct mutex *lock, struct task_struct *owner,
-			 bool use_ww_ctx, struct ww_acquire_ctx *ww_ctx)
+static __always_inline
+bool __mutex_spin_on_owner(struct mutex *lock, struct task_struct *owner,
+			   const bool use_ww_ctx, struct ww_acquire_ctx *ww_ctx)
 {
 	bool ret = true;

@@ -403,6 +403,19 @@ bool mutex_spin_on_owner(struct mutex *lock, struct task_st
 	return ret;
 }

+static noinline
+bool mutex_spin_on_owner(struct mutex *lock, struct task_struct *owner)
+{
+	return __mutex_spin_on_owner(lock, owner, false, NULL);
+}
+
+static noinline
+bool ww_mutex_spin_on_owner(struct mutex *lock, struct task_struct *owner,
+			    struct ww_acquire_ctx *ww_ctx)
+{
+	return __mutex_spin_on_owner(lock, owner, true, ww_ctx);
+}
+
 /*
  * Initial check for entering the mutex spinning loop
  */
@@ -489,13 +502,17 @@ static bool mutex_optimistic_spin(struct mutex *lock,
 	 */
 	owner = __mutex_owner(lock);
 	if (owner) {
+		bool spin_ok;
+
 		if (waiter && owner == task) {
 			smp_mb(); /* ACQUIRE */
 			break;
 		}

-		if (!mutex_spin_on_owner(lock, owner, use_ww_ctx,
-					 ww_ctx))
+		spin_ok = use_ww_ctx
+			? ww_mutex_spin_on_owner(lock, owner, ww_ctx)
+			: mutex_spin_on_owner(lock, owner);
+		if (!spin_ok)
 			goto fail_unlock;
 	}