Subject: Re: [rfc] lru_add_drain_all() vs isolation
From: Peter Zijlstra
To: KOSAKI Motohiro
Cc: Mike Galbraith, Ingo Molnar, linux-mm, Christoph Lameter, Oleg Nesterov, lkml
In-Reply-To: <20090908085344.0CBD.A69D9226@jp.fujitsu.com>
References: <1252311463.7586.26.camel@marge.simson.net> <1252321596.7959.6.camel@laptop> <20090908085344.0CBD.A69D9226@jp.fujitsu.com>
Date: Tue, 08 Sep 2009 10:20:06 +0200
Message-Id: <1252398006.7746.3.camel@twins>

On Tue, 2009-09-08 at 08:56 +0900, KOSAKI Motohiro wrote:
> Hi Peter,
>
> > On Mon, 2009-09-07 at 10:17 +0200, Mike Galbraith wrote:
> > >
> > > [  774.651779] SysRq : Show Blocked State
> > > [  774.655770]   task                        PC stack   pid father
> > > [  774.655770] evolution.bin D ffff8800bc1575f0     0  7349   6459 0x00000000
> > > [  774.676008]  ffff8800bc3c9d68 0000000000000086 ffff8800015d9340 ffff8800bb91b780
> > > [  774.676008]  000000000000dd28 ffff8800bc3c9fd8 0000000000013340 0000000000013340
> > > [  774.676008]  00000000000000fd ffff8800015d9340 ffff8800bc1575f0 ffff8800bc157888
> > > [  774.676008] Call Trace:
> > > [  774.676008]  [] schedule_timeout+0x2d/0x20c
> > > [  774.676008]  [] wait_for_common+0xde/0x155
> > > [  774.676008]  [] ? default_wake_function+0x0/0x14
> > > [  774.676008]  [] ? lru_add_drain_per_cpu+0x0/0x10
> > > [  774.676008]  [] ? lru_add_drain_per_cpu+0x0/0x10
> > > [  774.676008]  [] wait_for_completion+0x1d/0x1f
> > > [  774.676008]  [] flush_work+0x7f/0x93
> > > [  774.676008]  [] ? wq_barrier_func+0x0/0x14
> > > [  774.676008]  [] schedule_on_each_cpu+0xb4/0xed
> > > [  774.676008]  [] lru_add_drain_all+0x15/0x17
> > > [  774.676008]  [] sys_mlock+0x2e/0xde
> > > [  774.676008]  [] system_call_fastpath+0x16/0x1b
> >
> > FWIW, something like the below (prone to explode since it's utterly
> > untested) should (mostly) fix that one case. Something similar needs to
> > be done for pretty much all machine-wide workqueue thingies, possibly
> > also flush_workqueue().
>
> Can you please explain how to reproduce this and the details of the
> problem?
>
> AFAIK, mlock() calls lru_add_drain_all() _before_ grabbing the
> semaphore, so it doesn't cause any deadlock.

Suppose you have 2 CPUs. cpu1 is busy running a SCHED_FIFO-99 while(1)
loop, and cpu0 does mlock()->lru_add_drain_all(), which does
schedule_on_each_cpu() and then waits for all CPUs to complete the
work. Except that cpu1, being busy with the RT task, will never run
keventd until the RT load goes away.

This is not so much an actual deadlock as a serious starvation case.