Date: Mon, 12 Jul 2010 22:40:18 +0200 (CEST)
From: Thomas Gleixner
To: Darren Hart
Cc: Mike Galbraith, linux-kernel@vger.kernel.org, Peter Zijlstra, Ingo Molnar,
    Eric Dumazet, John Kacur, Steven Rostedt, linux-rt-users@vger.kernel.org
Subject: Re: [PATCH 4/4] futex: convert hash_bucket locks to raw_spinlock_t
In-Reply-To: <4C3B68B9.5060404@us.ibm.com>
References: <1278714780-788-1-git-send-email-dvhltc@us.ibm.com>
 <1278714780-788-5-git-send-email-dvhltc@us.ibm.com>
 <1278790882.7352.101.camel@marge.simson.net>
 <4C3B68B9.5060404@us.ibm.com>

On Mon, 12 Jul 2010, Darren Hart wrote:

> On 07/10/2010 12:41 PM, Mike Galbraith wrote:
> > On Fri, 2010-07-09 at 15:33 -0700, Darren Hart wrote:
> > > > Out of curiosity, what's wrong with holding his pi_lock across the
> > > > wakeup? He can _try_ to block, but can't until pi state is stable.
> > > >
> > > > I presume there's a big fat gotcha that's just not obvious to a
> > > > futex locking newbie :)
>
> Nor to some of us who have been engrossed in futexes for the last couple
> of years! I discussed the pi_lock-across-the-wakeup issue with Thomas.
> While it fixes the problem for this particular failure case, it doesn't
> protect against the following:
>
> Assume:
> t1 is on the condvar
> t2 does the requeue dance and t1 is now blocked on the outer futex
> t3 takes hb->lock for a futex in the same bucket
> t2 wakes due to signal/timeout
> t2 blocks on hb->lock
>
> You likely did not hit the above scenario because you only had one
> condvar, so the hash buckets were not heavily shared and you were
> unlikely to hit:
>
> t3 takes hb->lock for a futex in the same bucket
>
> I'm going to roll up a patchset with your (Mike's) spin_trylock patch and
> run it through some tests. I'd still prefer a way to detect early wakeup
> without having to grab hb->lock, but I haven't found one yet.
>
> +	while (!spin_trylock(&hb->lock))
> +		cpu_relax();
> 	ret = handle_early_requeue_pi_wakeup(hb, &q, &key2, to);
> 	spin_unlock(&hb->lock);

And this is nasty, as it will create unbounded priority inversion :(

We discussed another solution on IRC in the meantime:

In futex_wait_requeue_pi():

	futex_wait_queue_me(hb, &q, to);

	raw_spin_lock(&current->pi_lock);
	if (current->pi_blocked_on) {
		/*
		 * We know that we can only be blocked on the outer futex,
		 * so we can skip the early wakeup check.
		 */
		raw_spin_unlock(&current->pi_lock);
		ret = 0;
	} else {
		current->pi_blocked_on = PI_WAKEUP_INPROGRESS;
		raw_spin_unlock(&current->pi_lock);

		spin_lock(&hb->lock);
		ret = handle_early_requeue_pi_wakeup();
		....
		spin_unlock(&hb->lock);
	}

Now for the rtmutex magic we need in task_blocks_on_rt_mutex():

	raw_spin_lock(&task->pi_lock);

	/*
	 * Add big fat comment about why this is only relevant to futex
	 * requeue_pi.
	 */
	if (task != current && task->pi_blocked_on == PI_WAKEUP_INPROGRESS) {
		raw_spin_unlock(&task->pi_lock);
		/*
		 * Returning 0 here is fine.
		 * The requeue code is just going to move the futex_q to
		 * the other bucket, but that'll be fixed up in
		 * handle_early_requeue_pi_wakeup().
		 */
		return 0;
	}

Thanks,

	tglx
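[ Editor's note, not part of the original mail: the two fragments above
  cooperate through the waiter's pi_lock. The user-space sketch below models
  just that handshake with pthreads. The struct, the PI_WAKEUP_INPROGRESS
  sentinel and the lock names only mimic the kernel identifiers, and all of
  the real futex, futex_q and rt_mutex machinery is left out. ]

#include <pthread.h>
#include <stdio.h>

#define PI_WAKEUP_INPROGRESS ((void *)1)	/* placeholder sentinel, as in the sketch */

struct task {
	pthread_mutex_t pi_lock;	/* stands in for task->pi_lock */
	void *pi_blocked_on;		/* stands in for task->pi_blocked_on */
};

static pthread_mutex_t hb_lock = PTHREAD_MUTEX_INITIALIZER;	/* stands in for hb->lock */
static struct task waiter = { PTHREAD_MUTEX_INITIALIZER, NULL };

/* Waiter side: roughly what futex_wait_requeue_pi() does after waking. */
static void *waiter_fn(void *arg)
{
	(void)arg;

	pthread_mutex_lock(&waiter.pi_lock);
	if (waiter.pi_blocked_on) {
		/* The requeue already happened; skip the early wakeup handling. */
		pthread_mutex_unlock(&waiter.pi_lock);
		return NULL;
	}
	/* Mark the wakeup as in progress before touching the bucket lock. */
	waiter.pi_blocked_on = PI_WAKEUP_INPROGRESS;
	pthread_mutex_unlock(&waiter.pi_lock);

	pthread_mutex_lock(&hb_lock);
	printf("waiter: handling early wakeup under hb_lock\n");
	pthread_mutex_unlock(&hb_lock);
	return NULL;
}

/* Requeue side: roughly the check task_blocks_on_rt_mutex() would do. */
static void *requeuer_fn(void *arg)
{
	(void)arg;

	pthread_mutex_lock(&hb_lock);
	pthread_mutex_lock(&waiter.pi_lock);
	if (waiter.pi_blocked_on == PI_WAKEUP_INPROGRESS) {
		/* The waiter is waking up on its own: do not block it. */
		printf("requeuer: early wakeup detected, bailing out\n");
	} else {
		/* Block the waiter on the outer lock (stand-in value). */
		waiter.pi_blocked_on = (void *)2;
		printf("requeuer: waiter blocked on the outer lock\n");
	}
	pthread_mutex_unlock(&waiter.pi_lock);
	pthread_mutex_unlock(&hb_lock);
	return NULL;
}

int main(void)
{
	pthread_t w, r;

	pthread_create(&w, NULL, waiter_fn, NULL);
	pthread_create(&r, NULL, requeuer_fn, NULL);
	pthread_join(w, NULL);
	pthread_join(r, NULL);
	return 0;
}

Whichever side takes the waiter's pi_lock first decides the outcome: the
requeuer either sees the sentinel and bails out, or it blocks the waiter
first, in which case the waiter finds pi_blocked_on set and skips the early
wakeup handling. That is the invariant the sketch in the mail relies on.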