Date: Fri, 1 Sep 2017 19:16:29 +0900
From: Byungchul Park <byungchul.park@lge.com>
To: Peter Zijlstra
Cc: mingo@kernel.org, tj@kernel.org, boqun.feng@gmail.com,
	david@fromorbit.com, johannes@sipsolutions.net, oleg@redhat.com,
	linux-kernel@vger.kernel.org, kernel-team@lge.com
Subject: Re: [PATCH 4/4] lockdep: Fix workqueue crossrelease annotation
Message-ID: <20170901101629.GL3240@X58A-UD3R>
In-Reply-To: <20170901094747.iv6s532ccuuzpry2@hirez.programming.kicks-ass.net>
List-ID: <linux-kernel.vger.kernel.org>

On Fri, Sep 01, 2017 at 11:47:47AM +0200, Peter Zijlstra wrote:
> On Fri, Sep 01, 2017 at 11:05:12AM +0900, Byungchul Park wrote:
> > On Thu, Aug 31, 2017 at 10:34:53AM +0200, Peter Zijlstra wrote:
> > > On Thu, Aug 31, 2017 at 05:15:01PM +0900, Byungchul Park wrote:
> > > > It's not important. Ok, check the following instead:
> > > >
> > > >    context X			context Y
> > > >    ---------			---------
> > > >    wait_for_completion(C)
> > > > 				acquire(A)
> > > > 				release(A)
> > > > 				process_one_work()
> > > > 				   acquire(B)
> > > > 				   release(B)
> > > > 				   work->fn()
> > > > 				      complete(C)
> > > >
> > > > We don't need to lose the C->A and C->B dependencies unnecessarily.
> > >
> > > I really can't be arsed about them. It's really only the first few
> > > works that will retain that dependency anyway, even if you were to
> > > retain them.
> >
> > Wrong.
> >
> > Every 'work' doing complete() for a different class of completion
> > variable suffers from losing valuable dependencies -- every time, not
> > just the first few.
>
> The moment you overrun the history array its gone. So yes, only the

It would be gone _only_ at the moment the history overruns, and then it
would be built up again. So you are wrong. Let me show you an example
(I hope you will show examples, too):

   context X			context Y
   ---------			---------
   wait_for_completion(D)
				while (true)
				   acquire(A)
				   release(A)
				   process_one_work()
				      acquire(B)
				      release(B)
				      work->fn()
				         complete(C)
				   acquire(D)
				   release(D)

When an overrun happens in a 'work', 'A' and 'B' are gone _only_ at
that moment; then 'D', 'A' and 'B' will be queued into the xhlock
array *again* from the next loop iteration on, and they can be used to
generate useful dependencies again.

You are confusing things now. The acquisitions we are focusing on are
not _stacked_ like hlocks, but _accumulated_ continuously onto the ring
buffer, i.e. the xhlock array.