From: Lai Jiangshan
Date: Wed, 07 Jan 2009 11:57:29 +0800
To: Peter Zijlstra
Cc: Linus Torvalds, paulmck@linux.vnet.ibm.com, Gregory Haskins, Ingo Molnar, Matthew Wilcox, Andi Kleen, Chris Mason, Andrew Morton, linux-kernel@vger.kernel.org, linux-fsdevel, linux-btrfs, Thomas Gleixner, Steven Rostedt, Nick Piggin, Peter Morreale, Sven Dietrich
Subject: Re: [PATCH][RFC]: mutex: adaptive spin
Message-ID: <49642829.20006@cn.fujitsu.com>
In-Reply-To: <1231281801.11687.125.camel@twins>

Peter Zijlstra wrote:
> +void mutex_spin_or_schedule(struct mutex_waiter *waiter, long state, unsigned long *flags)
> +{
> +        struct mutex *lock = waiter->lock;
> +        struct task_struct *task = waiter->task;
> +        struct task_struct *owner = lock->owner;
> +        struct rq *rq;
> +
> +        if (!owner)
> +                goto do_schedule;
> +
> +        rq = task_rq(owner);
> +
> +        if (rq->curr != owner) {
> +do_schedule:
> +                __set_task_state(task, state);
> +                spin_unlock_mutex(&lock->wait_lock, *flags);
> +                schedule();
> +        } else {
> +                spin_unlock_mutex(&lock->wait_lock, *flags);
> +                for (;;) {
> +                        /* Stop spinning when there's a pending signal. */
> +                        if (signal_pending_state(state, task))
> +                                break;
> +
> +                        /* Owner changed, bail to revalidate state */
> +                        if (lock->owner != owner)
> +                                break;
> +
> +                        /* Owner stopped running, bail to revalidate state */
> +                        if (rq->curr != owner)
> +                                break;
> +

Two questions, from my perhaps immature understanding:

1) Do we need to keep gcc from optimizing the accesses to lock->owner
   and rq->curr in the loop? As written, nothing forces a fresh load on
   each iteration, so the compiler is free to hoist those reads out of
   the loop and spin on stale values. (A sketch of what I mean is below.)

2) The "if (rq->curr != owner)" check needs to become smarter.
   Roughly, schedule() does:

        schedule()
        {
                next = select_next;     /* pick_next_task() */
                rq->curr = next;
                context_switch;
        }

   Since rq->curr is only updated after the next task has been selected,
   we also keep spinning while the owner is still select_next-ing inside
   schedule(), and task selection is not fast enough for that wasted
   spinning to be negligible. (One hypothetical refinement is below.)

Lai.
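
P.S. To make 1) concrete: a minimal sketch of the spin loop with the
loads forced through ACCESS_ONCE() (linux/compiler.h), so gcc must
re-read lock->owner and rq->curr on every pass. This is only my
illustration, not the patch as posted:

        /*
         * Sketch for question 1): ACCESS_ONCE() forces a fresh load of
         * lock->owner and rq->curr on every iteration, so gcc cannot
         * hoist the reads out of the loop and spin on stale register
         * copies.  cpu_relax() additionally acts as a compiler barrier
         * on most architectures.
         */
        for (;;) {
                /* Stop spinning when there's a pending signal. */
                if (signal_pending_state(state, task))
                        break;

                /* Owner changed, bail to revalidate state */
                if (ACCESS_ONCE(lock->owner) != owner)
                        break;

                /* Owner stopped running, bail to revalidate state */
                if (ACCESS_ONCE(rq->curr) != owner)
                        break;

                cpu_relax();
        }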
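
P.P.S. For 2), one hypothetical direction: a task that blocks sets its
->state before it calls schedule(), while rq->curr only changes after
pick_next_task(), so also watching owner->state would let the spinner
bail out during that window instead of spinning across the owner's pass
through schedule(). A sketch against the patch above, nothing more
(owner_still_running() is my made-up name):

        /*
         * Hypothetical refinement for question 2).  Watching
         * owner->state catches an owner that is on its way into
         * schedule() before rq->curr is updated.  It does not help for
         * involuntary preemption, where ->state stays TASK_RUNNING.
         * (struct rq and task_rq() are private to kernel/sched.c, so
         * this would have to live there, like the rest of the patch.)
         */
        static inline int owner_still_running(struct mutex *lock,
                                              struct task_struct *owner,
                                              struct rq *rq)
        {
                if (ACCESS_ONCE(lock->owner) != owner)
                        return 0;       /* ownership changed hands */

                if (owner->state != TASK_RUNNING)
                        return 0;       /* owner heading into schedule();
                                           ->state is a volatile long */

                if (ACCESS_ONCE(rq->curr) != owner)
                        return 0;       /* owner has been switched out */

                return 1;
        }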