Subject: Re: [PATCH v2 05/11] locking/ww_mutex: Add waiters in stamp order
From: Nicolai Hähnle
To: Peter Zijlstra
Cc: linux-kernel@vger.kernel.org, Nicolai Hähnle, Ingo Molnar,
 Maarten Lankhorst, Daniel Vetter, Chris Wilson,
 dri-devel@lists.freedesktop.org
Date: Fri, 16 Dec 2016 14:34:53 +0100

On 06.12.2016 16:36, Peter Zijlstra wrote:
> On Thu, Dec 01, 2016 at 03:06:48PM +0100, Nicolai Hähnle wrote:
>> +static inline int __sched
>> +__ww_mutex_add_waiter(struct mutex_waiter *waiter,
>> +                      struct mutex *lock,
>> +                      struct ww_acquire_ctx *ww_ctx)
>> +{
>> +	struct mutex_waiter *cur;
>> +
>> +	if (!ww_ctx) {
>> +		list_add_tail(&waiter->list, &lock->wait_list);
>> +		return 0;
>> +	}
>> +
>> +	/*
>> +	 * Add the waiter before the first waiter with a higher stamp.
>> +	 * Waiters without a context are skipped to avoid starving
>> +	 * them.
>> +	 */
>> +	list_for_each_entry(cur, &lock->wait_list, list) {
>> +		if (!cur->ww_ctx)
>> +			continue;
>> +
>> +		if (__ww_mutex_stamp_after(ww_ctx, cur->ww_ctx)) {
>> +			/* Back off immediately if necessary. */
>> +			if (ww_ctx->acquired > 0) {
>> +#ifdef CONFIG_DEBUG_MUTEXES
>> +				struct ww_mutex *ww;
>> +
>> +				ww = container_of(lock, struct ww_mutex, base);
>> +				DEBUG_LOCKS_WARN_ON(ww_ctx->contending_lock);
>> +				ww_ctx->contending_lock = ww;
>> +#endif
>> +				return -EDEADLK;
>> +			}
>> +
>> +			continue;
>> +		}
>> +
>> +		list_add_tail(&waiter->list, &cur->list);
>> +		return 0;
>> +	}
>> +
>> +	list_add_tail(&waiter->list, &lock->wait_list);
>> +	return 0;
>> +}
>
> So you keep the list in order of stamp, and in general stamps come in
> in order. That is, barring races on concurrent ww_mutex_lock(), things
> are already ordered.
>
> So it doesn't make sense to scan the entire list forwards; that almost
> guarantees you scan the entire list every single time.
>
> Or am I reading this wrong? Which in itself is a hint that a comment
> might be in place.

No, it's a reasonable question. Some things to keep in mind:

1. Each ww_acquire_ctx may be used with hundreds of locks. It's not
that clear that things will be ordered in a contention scenario,
especially since the old stamp is re-used when a context backs off and
goes into the slow path (with ww_ctx->acquired == 0).

2. We want to add waiters directly before the first waiter with a
higher stamp. Keeping in mind that there may be non-ww_ctx waiters in
between, and that we don't want to starve them, traversing the list
backwards would require keeping the best insertion point around in a
separate variable. Clearly possible, but it seemed more awkward; see
the sketch below.
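For illustration, here is a rough, untested sketch of what that
backwards variant with an explicit insertion point might look like
(same helpers as in the patch):

static inline int __sched
__ww_mutex_add_waiter(struct mutex_waiter *waiter,
                      struct mutex *lock,
                      struct ww_acquire_ctx *ww_ctx)
{
	struct mutex_waiter *cur;
	/* Best insertion point so far; the list head means "at the tail". */
	struct list_head *pos = &lock->wait_list;

	if (!ww_ctx) {
		list_add_tail(&waiter->list, &lock->wait_list);
		return 0;
	}

	/*
	 * Walk backwards, remembering the earliest waiter seen so far
	 * with a stamp higher than ours. Stop at the first waiter with
	 * a lower stamp: everything from the head up to and including
	 * it must stay ahead of us. Waiters without a context are
	 * skipped so that they are not starved.
	 */
	list_for_each_entry_reverse(cur, &lock->wait_list, list) {
		if (!cur->ww_ctx)
			continue;

		if (__ww_mutex_stamp_after(ww_ctx, cur->ww_ctx)) {
			/* Back off immediately if necessary. */
			if (ww_ctx->acquired > 0) {
#ifdef CONFIG_DEBUG_MUTEXES
				struct ww_mutex *ww;

				ww = container_of(lock, struct ww_mutex, base);
				DEBUG_LOCKS_WARN_ON(ww_ctx->contending_lock);
				ww_ctx->contending_lock = ww;
#endif
				return -EDEADLK;
			}
			break;
		}

		pos = &cur->list;
	}

	/* Insert before pos, i.e. at the tail if no higher stamp was seen. */
	list_add_tail(&waiter->list, pos);
	return 0;
}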
In hindsight, backwards iteration may not be so bad.

Nicolai