Date: Thu, 16 Sep 2004 15:29:03 -0700
From: Bill Huey (hui)
To: Bill Davidsen
Cc: linux-kernel@vger.kernel.org, Ingo Molnar, "Bill Huey (hui)"
Subject: Re: [patch] remove the BKL (Big Kernel Lock), this time for real
Message-ID: <20040916222903.GA4089@nietzsche.lynx.com>

On Thu, Sep 16, 2004 at 10:28:08AM -0400, Bill Davidsen wrote:
> Is that (necessarily) a bad thing? If it results in less time waiting
> for BKL, and/or more time doing user work, that may drive throughput
> and responsiveness up. It depends if the time for two ctx is greater
> or less than the spin time on BKL.
>
> It would be nice to have the best of both worlds, use the semaphore if
> there is a process on the run queue, and spin if not. That sounds
> complex, and hopefully not worth the effort.

FreeBSD-current uses adaptive mutexes. A contending thread spins on the
mutex only if the thread owning it is running on another CPU at that
moment; otherwise it sleeps, possibly with priority inheritance
depending on the circumstances. It's kind of heavyweight, I know, but
it might be a good place to look for an example implementation of this
idea. I'm sure a simplified adaptive mutex could be built that's
compatible with the Linux SMP/preemption methodology if this becomes
relevant (rough sketch appended below).

> High ctx rates are not necessarily bad, when we implemented O_DIRECT
> for an application the rate went up 30%, the outbound bandwidth went
> up 10-15%, and waitio dropped by half at peak load. As long as
> something useful is being done with the time previously wasted in
> spinning, I would expect it to be a win.

It's critical for other devices as well, like keeping a GPU fully
engaged so that it doesn't stall computationally because its queue has
gone empty, etc...

Has anybody thought about using this to protect RCU-ed critical
sections yet, even though it doesn't guarantee quiescence? Or is the
recent work on softirq-ing them sufficient? Can't this also be used as
a kind of per-CPU data guard, much like {get,put}_cpu() and friends
(see the second sketch below)?

bill
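
P.S. Here is a minimal userspace sketch of the spin-then-sleep idea,
just to make the shape concrete. This is not FreeBSD's code and not an
existing Linux API: FreeBSD spins only while the owner is observed
running on another CPU, which plain userspace code can't cheaply
observe, so this sketch substitutes a bounded spin before falling back
to a blocking sleep. The names (adaptive_mutex, ADAPTIVE_SPIN_MAX) are
made up.

#include <pthread.h>
#include <sched.h>

#define ADAPTIVE_SPIN_MAX	1000	/* arbitrary tuning knob */

struct adaptive_mutex {
	pthread_mutex_t lock;		/* the real lock; blocking fallback */
};

#define ADAPTIVE_MUTEX_INITIALIZER	{ PTHREAD_MUTEX_INITIALIZER }

static void adaptive_mutex_lock(struct adaptive_mutex *m)
{
	int spins;

	/*
	 * Optimistic path: if the holder releases the lock soon,
	 * spinning here is cheaper than two context switches.
	 */
	for (spins = 0; spins < ADAPTIVE_SPIN_MAX; spins++) {
		if (pthread_mutex_trylock(&m->lock) == 0)
			return;
		sched_yield();	/* don't starve the holder on one CPU */
	}

	/* Pessimistic path: holder seems slow or asleep, so block. */
	pthread_mutex_lock(&m->lock);
}

static void adaptive_mutex_unlock(struct adaptive_mutex *m)
{
	pthread_mutex_unlock(&m->lock);
}

The real thing would replace the bounded spin with an "is the owner
on-CPU right now?" test, which the kernel can answer directly.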
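
P.P.S. For the per-CPU guard comparison, this is the existing pattern
I have in mind (2.6-era kernel code; my_counters is a made-up example
variable):

#include <linux/percpu.h>
#include <linux/smp.h>

static DEFINE_PER_CPU(long, my_counters);

static void bump_local_counter(void)
{
	int cpu = get_cpu();		/* disables preemption */

	per_cpu(my_counters, cpu)++;	/* safe: we can't migrate here */

	put_cpu();			/* re-enables preemption */
}

get_cpu()/put_cpu() pin the task to the current CPU's data by disabling
preemption; the question above is whether an adaptive mutex could play
a similar guarding role without shutting preemption off.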