Date: Wed, 1 Mar 2017 16:54:14 +0100
From: Peter Zijlstra
To: Fengguang Wu
Cc: Boqun Feng, Nicolai Hähnle, Chris Wilson, Ingo Molnar,
    linux-kernel@vger.kernel.org, LKP
Subject: Re: [locking/ww_mutex] 2a0c112828 WARNING: CPU: 0 PID: 18 at kernel/locking/mutex.c:305 __ww_mutex_wakeup_for_backoff
Message-ID: <20170301155414.GN6515@twins.programming.kicks-ass.net>
References: <20170227051409.zbqwtekoa3hvggta@wfg-t540p.sh.intel.com>
 <20170227102824.GV6500@twins.programming.kicks-ass.net>
 <20170227103543.GA6536@twins.programming.kicks-ass.net>
 <20170301150138.hdixnmafzfsox7nn@tardis.cn.ibm.com>
 <20170301154043.hcsbpgooc3kqt45j@wfg-t540p.sh.intel.com>
In-Reply-To: <20170301154043.hcsbpgooc3kqt45j@wfg-t540p.sh.intel.com>

On Wed, Mar 01, 2017 at 11:40:43PM +0800, Fengguang Wu wrote:
> Thanks for the patch! I applied it on top of "locking/ww_mutex: Add
> kselftests for ww_mutex stress" and found no "bad unlock balance
> detected", but this warning instead. Attached is the new dmesg, which
> is a bit large due to lots of repeated errors.

So with all the various patches it works for me.

I also have the following on top, which I wrote while looking through
this code trying to figure out what was happening.

Chris, does this make sense to you? It makes each loop iteration a
fully new 'instance'; otherwise we never update ww_class->stamp and the
threads will always have the same order.

---
diff --git a/kernel/locking/test-ww_mutex.c b/kernel/locking/test-ww_mutex.c
index da6c9a34f62f..d0fd06429c9d 100644
--- a/kernel/locking/test-ww_mutex.c
+++ b/kernel/locking/test-ww_mutex.c
@@ -398,12 +398,11 @@ static void stress_inorder_work(struct work_struct *work)
 	if (!order)
 		return;
 
-	ww_acquire_init(&ctx, &ww_class);
-
 	do {
 		int contended = -1;
 		int n, err;
 
+		ww_acquire_init(&ctx, &ww_class);
 retry:
 		err = 0;
 		for (n = 0; n < nlocks; n++) {
@@ -433,9 +432,9 @@ static void stress_inorder_work(struct work_struct *work)
 				__func__, err);
 			break;
 		}
-	} while (--stress->nloops);
 
-	ww_acquire_fini(&ctx);
+		ww_acquire_fini(&ctx);
+	} while (--stress->nloops);
 
 	kfree(order);
 	kfree(stress);
@@ -470,9 +469,9 @@ static void stress_reorder_work(struct work_struct *work)
 	kfree(order);
 	order = NULL;
 
-	ww_acquire_init(&ctx, &ww_class);
-
 	do {
+		ww_acquire_init(&ctx, &ww_class);
+
 		list_for_each_entry(ll, &locks, link) {
 			err = ww_mutex_lock(ll->lock, &ctx);
 			if (!err)
@@ -495,9 +494,9 @@ static void stress_reorder_work(struct work_struct *work)
 			dummy_load(stress);
 		list_for_each_entry(ll, &locks, link)
 			ww_mutex_unlock(ll->lock);
-	} while (--stress->nloops);
 
-	ww_acquire_fini(&ctx);
+		ww_acquire_fini(&ctx);
+	} while (--stress->nloops);
 
 out:
 	list_for_each_entry_safe(ll, ln, &locks, link)
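
For reference, the per-iteration lifecycle the patch establishes looks
roughly like the sketch below. This is illustrative only, not the test
code itself: demo_ww_class, lock_a/lock_b and demo_loop are made-up
names, and the -EDEADLK backoff path is elided. The point is that
ww_acquire_init() takes a fresh stamp from the ww_class, so a new
context per iteration gives each pass a new position in the global
acquire order.

#include <linux/ww_mutex.h>

static DEFINE_WW_CLASS(demo_ww_class);
static struct ww_mutex lock_a, lock_b;	/* assume ww_mutex_init() elsewhere */

static void demo_loop(int nloops)
{
	struct ww_acquire_ctx ctx;

	do {
		/*
		 * One context per iteration: ww_acquire_init() samples a new
		 * stamp from the class, so this thread's age relative to the
		 * other stressers changes on every pass.
		 */
		ww_acquire_init(&ctx, &demo_ww_class);

		ww_mutex_lock(&lock_a, &ctx);
		ww_mutex_lock(&lock_b, &ctx);	/* may return -EDEADLK; backoff elided */

		ww_acquire_done(&ctx);	/* all locks held; no further acquires */

		/* ... critical section ... */

		ww_mutex_unlock(&lock_b);
		ww_mutex_unlock(&lock_a);

		ww_acquire_fini(&ctx);	/* context lifetime ends with the iteration */
	} while (--nloops);
}

With init/fini hoisted outside the loop, as the test code had it before
this patch, every iteration reuses the first stamp and the threads'
relative ordering is frozen, which defeats the point of a stress test.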