Date: Thu, 14 Jun 2018 13:36:04 +0200 (CEST)
From: Peter Zijlstra
To: Thomas Hellstrom
Cc: dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org,
	Ingo Molnar, Jonathan Corbet, Gustavo Padovan, Maarten Lankhorst,
	Sean Paul, David Airlie, Davidlohr Bueso, "Paul E. McKenney",
	Josh Triplett, Thomas Gleixner, Kate Stewart, Philippe Ombredanne,
	Greg Kroah-Hartman, linux-doc@vger.kernel.org,
	linux-media@vger.kernel.org, linaro-mm-sig@lists.linaro.org
Subject: Re: [PATCH v2 1/2] locking: Implement an algorithm choice for Wound-Wait mutexes
Message-ID: <20180614113604.GZ12198@hirez.programming.kicks-ass.net>
References: <20180614072922.8114-1-thellstrom@vmware.com>
	<20180614072922.8114-2-thellstrom@vmware.com>
In-Reply-To: <20180614072922.8114-2-thellstrom@vmware.com>

On Thu, Jun 14, 2018 at 09:29:21AM +0200, Thomas Hellstrom wrote:

>  __ww_mutex_wakeup_for_backoff(struct mutex *lock, struct ww_acquire_ctx *ww_ctx)
>  {
>  	struct mutex_waiter *cur;
> +	unsigned int is_wait_die = ww_ctx->ww_class->is_wait_die;
>  
>  	lockdep_assert_held(&lock->wait_lock);
>  
> @@ -310,13 +348,14 @@ __ww_mutex_wakeup_for_backoff(struct mutex *lock, struct ww_acquire_ctx *ww_ctx)
>  		if (!cur->ww_ctx)
>  			continue;
>  
> -		if (cur->ww_ctx->acquired > 0 &&
> +		if (is_wait_die && cur->ww_ctx->acquired > 0 &&
>  		    __ww_ctx_stamp_after(cur->ww_ctx, ww_ctx)) {
>  			debug_mutex_wake_waiter(lock, cur);
>  			wake_up_process(cur->task);
>  		}
>  
> -		break;
> +		if (is_wait_die || __ww_mutex_wound(lock, cur->ww_ctx, ww_ctx))
> +			break;
>  	}
>  }

I ended up with:

static void __sched
__ww_mutex_check_waiters(struct mutex *lock, struct ww_acquire_ctx *ww_ctx)
{
	bool is_wait_die = ww_ctx->ww_class->is_wait_die;
	struct mutex_waiter *cur;

	lockdep_assert_held(&lock->wait_lock);

	list_for_each_entry(cur, &lock->wait_list, list) {
		if (!cur->ww_ctx)
			continue;

		if (is_wait_die) {
			/*
			 * Because __ww_mutex_add_waiter() and
			 * __ww_mutex_check_stamp() wake any but the earliest
			 * context, this can only affect the first waiter (with
			 * a context).
			 */
			if (cur->ww_ctx->acquired > 0 &&
			    __ww_ctx_stamp_after(cur->ww_ctx, ww_ctx)) {
				debug_mutex_wake_waiter(lock, cur);
				wake_up_process(cur->task);
			}

			break;
		}

		if (__ww_mutex_wound(lock, cur->ww_ctx, ww_ctx))
			break;
	}
}

Currently you don't allow mixing WD and WW contexts (which is not
immediately obvious from the above code), and the above hard relies on
that. Are there sensible use cases for mixing them? IOW will your
current restriction stand without hassle?
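[Editorial aside: for readers comparing the two algorithms being discussed, the core decision both make on lock contention is a stamp comparison (lower stamp == older context). The following is a hypothetical userspace sketch with made-up names, not the kernel implementation above:]

	#include <stdbool.h>

	/* Stand-in for the acquire-order stamp carried by a ww_acquire_ctx. */
	struct toy_ctx {
		unsigned long stamp;	/* lower == older */
	};

	/*
	 * Wait-Die: on contention, an *older* contender waits; a *younger*
	 * contender backs off ("dies") and retries after the holder is done.
	 */
	static bool wait_die_contender_backs_off(const struct toy_ctx *contender,
						 const struct toy_ctx *holder)
	{
		return contender->stamp > holder->stamp;	/* contender younger */
	}

	/*
	 * Wound-Wait: on contention, an *older* contender "wounds" the younger
	 * holder (forcing it to release); a *younger* contender simply waits.
	 */
	static bool wound_wait_holder_is_wounded(const struct toy_ctx *contender,
						 const struct toy_ctx *holder)
	{
		return contender->stamp < holder->stamp;	/* contender older */
	}

[The asymmetry is why the two paths in __ww_mutex_check_waiters() differ: under Wait-Die the wakeup can only concern the earliest waiting context, so the loop breaks immediately, while under Wound-Wait the walk continues until a wound takes effect.]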