Date: Tue, 6 Jan 2009 10:02:56 -0800 (PST)
From: Linus Torvalds
To: Peter Zijlstra
Cc: Matthew Wilcox, Andi Kleen, Chris Mason, Andrew Morton,
    linux-kernel@vger.kernel.org, linux-fsdevel, linux-btrfs,
    Ingo Molnar, Thomas Gleixner, Steven Rostedt, Gregory Haskins,
    Nick Piggin
Subject: Re: [PATCH][RFC]: mutex: adaptive spin

Ok, last comment, I promise.
On Tue, 6 Jan 2009, Peter Zijlstra wrote:

> @@ -175,11 +199,19 @@ __mutex_lock_common(struct mutex *lock,
> 			debug_mutex_free_waiter(&waiter);
> 			return -EINTR;
> 		}
> -		__set_task_state(task, state);
>
> -		/* didnt get the lock, go to sleep: */
> +		owner = lock->owner;
> +		get_task_struct(owner);
> 		spin_unlock_mutex(&lock->wait_lock, flags);
> -		schedule();
> +
> +		if (adaptive_wait(&waiter, owner, state)) {
> +			put_task_struct(owner);
> +			__set_task_state(task, state);
> +			/* didnt get the lock, go to sleep: */
> +			schedule();
> +		} else
> +			put_task_struct(owner);
> +
> 		spin_lock_mutex(&lock->wait_lock, flags);

So I really dislike the whole get_task_struct/put_task_struct thing. It
seems very annoying. And as far as I can tell, it's there _only_ to
protect "task->rq" and nothing else (ie to make sure that the task
doesn't exit and get freed and the pointer now points to la-la-land).

Wouldn't it be much nicer to just cache the rq pointer (take it while
still holding the spinlock), and then pass it in to adaptive_wait()?
Then, adaptive_wait() can just do

	if (lock->owner != owner)
		return 0;
	if (rq->task != owner)
		return 1;

Sure - the owner may have rescheduled to another CPU, but if it did
that, then we really might as well sleep. So we really don't need to
dereference that (possibly stale) owner task_struct at all - because we
don't care. All we care about is whether the owner is still busy on that
other CPU that it was on.

Hmm? So it looks to me that we don't really need that annoying "try to
protect the task pointer" crud. We can do the sufficient (and limited)
sanity checking without the task even existing, as long as we originally
load the ->rq pointer at a point where it was stable (ie inside the
spinlock, when we know that the task must be still alive since it owns
the lock).
		Linus