From: Austin Schuh
Date: Fri, 27 Jun 2014 18:18:27 -0700
Subject: Re: Filesystem lockup with CONFIG_PREEMPT_RT
To: Steven Rostedt
Cc: Mike Galbraith, Thomas Gleixner, Richard Weinberger, LKML, rt-users

On Fri, Jun 27, 2014 at 11:19 AM, Steven Rostedt wrote:
> On Fri, 27 Jun 2014 20:07:54 +0200
> Mike Galbraith wrote:
>
>> > Why do we need the wakeup? The owner of the lock should wake it up,
>> > shouldn't it?
>>
>> True, but that can take ages.
>
> Can it? If the workqueue is of some higher priority, it should boost
> the process that owns the lock. Otherwise it just waits like anything
> else does.
>
> I would much rather keep the paradigm of the mainline kernel than add
> a bunch of hacks that can cause more unforeseen side effects that may
> cause other issues.
>
> Remember, this would only be for spinlocks converted into a rtmutex,
> not for normal mutexes or other sleeps. In mainline, the wakeup still
> would not happen, so why are we waking it up here?
>
> This seems similar to the BKL crap we had to deal with as well.
> If we were going to sleep because we were blocked on a spinlock-converted
> rtmutex, we could not release and retake the BKL, because we would end up
> blocked on two locks. Instead, we made sure that the spinlock would not
> release or take the BKL. It kept with the paradigm of mainline and
> worked. Sucked, but it worked.
>
> -- Steve

Sounds like you are arguing that we should disable preemption (or whatever
the right mechanism is) while holding the pool lock?

Workqueues spin up more threads when work that they are executing blocks.
This is done through hooks in the scheduler, which means that we have to
acquire the pool lock when work blocks on a lock, in order to check whether
there is more work pending and whether or not we need to spin up a new
thread.

It would cost more context switches, but I wonder if we could kick the
workqueue logic completely out of the scheduler and into a thread: have the
scheduler increment/decrement an atomic pool counter and wake a monitoring
thread, which spawns new worker threads when needed. That would get rid of
the recursive pool-lock problem, and should reduce scheduler latency when
we do need to spawn a new thread.

Austin