From: Nicolai Hähnle
To: linux-kernel@vger.kernel.org
Subject: [PATCH 00/11] locking/ww_mutex: Keep sorted wait list to avoid stampedes
Date: Mon, 28 Nov 2016 13:20:01 +0100
Message-Id: <1480335612-12069-1-git-send-email-nhaehnle@gmail.com>

It turns out that the deadlock I found last week was already implicitly
fixed during the lock->owner redesign, by checking the WAITERS bit in the
w/w lock fast path. However, since I had already started looking into
sorting the wait list, here goes.

The basic idea is to make sure that:

1. All waiters that have a ww_ctx appear in stamp order in the wait list.
   Waiters without a ww_ctx are still supported and appear in FIFO order,
   as before.

2. At most one of the waiters can be in a state where it has to check for
   back-off (i.e., ww_ctx->acquired > 0). Technically, there are short
   time windows in which more than one such waiter can be on the list,
   but all but the first one are running. This happens when a new waiter
   with ww_ctx->acquired > 0 adds itself at the front of the list and
   wakes up the previous head of the list, and of course multiple such
   chained cases can be in flight simultaneously.

Then we only ever have to wake up one task at a time. This is _not_
always the head of the wait list, since there may be waiters without a
context. But among waiters with a context, we only ever have to wake the
first one.
To achieve all this, the series adds a new field to struct mutex_waiter
which is only used for the w/w lock case. As a consequence, calling
mutex_lock directly on a w/w lock is now definitely incorrect. That was
most likely the intention previously anyway, but grepping through the
tree I did find one place that had slipped through.

I've included timings taken from a contention-heavy stress test in some
of the patches. The stress test performs actual GPU operations, which
take up a good chunk of the wall time, but even so, the series still
manages to improve the wall time quite a bit.

Cheers,
Nicolai