Subject: Re: [PATCH -v7][RFC]: mutex: implement adaptive spinning
From: Chris Mason
To: Steven Rostedt
Cc: Linus Torvalds, Peter Zijlstra, Ingo Molnar, paulmck@linux.vnet.ibm.com,
	Gregory Haskins, Matthew Wilcox, Andi Kleen, Andrew Morton,
	Linux Kernel Mailing List, linux-fsdevel, linux-btrfs,
	Thomas Gleixner, Nick Piggin, Peter Morreale, Sven Dietrich
Date: Thu, 08 Jan 2009 14:17:53 -0500
Message-Id: <1231442273.14304.49.camel@think.oraclecorp.com>

On Thu, 2009-01-08 at 13:14 -0500, Steven Rostedt wrote:
> On Thu, 8 Jan 2009, Steven Rostedt wrote:
> > > In fact, you might not even need a process C: all you need is for B to be
> > > on the same runqueue as A, and having enough load on the other CPU's that
> > > A never gets migrated away.  So "C" might be in user space.
> >
> > You're right about not needing process C.
> >
> > > I dunno.  There are probably variations on the above.
> >
> > Ouch!  I think you are on to something:
> >
> > 	for (;;) {
> > 		struct thread_info *owner;
> >
> > 		old_val = atomic_cmpxchg(&lock->count, 1, 0);
> > 		if (old_val == 1) {
> > 			lock_acquired(&lock->dep_map, ip);
> > 			mutex_set_owner(lock);
> > 			return 0;
> > 		}
> >
> > 		if (old_val < 0 && !list_empty(&lock->wait_list))
> > 			break;
> >
> > 		/* See who owns it, and spin on him if anybody */
> > 		owner = ACCESS_ONCE(lock->owner);
>
> The owner was preempted before assigning lock->owner (as you stated).
>
> If it was the current process that preempted the owner and these are RT
> tasks pinned to the same CPU and the owner is of lower priority than the
> spinner, we have a deadlock!
>
> Hmm, I do not think the need_sched here will even fix that :-/

RT tasks could go directly to sleeping.  The spinner would see them on the
list and break out.

-chris