Date: Fri, 8 Apr 2016 12:25:10 -0400
From: Steven Rostedt
To: Peter Zijlstra
Cc: xlpang@redhat.com, linux-kernel@vger.kernel.org, Juri Lelli, Ingo Molnar, Thomas Gleixner
Subject: Re: [PATCH] sched/deadline/rtmutex: Fix a PI crash for deadline tasks
Message-ID: <20160408122510.15978179@gandalf.local.home>
In-Reply-To: <20160405092954.GC24771@twins.programming.kicks-ass.net>

On Tue, 5 Apr 2016 11:29:54 +0200
Peter Zijlstra wrote:

> --
>  kernel/locking/rtmutex.c | 16 +++++++++++++---
>  1 file changed, 13 insertions(+), 3 deletions(-)
> 
> diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c
> index 3e746607abe5..1896baf28e9c 100644
> --- a/kernel/locking/rtmutex.c
> +++ b/kernel/locking/rtmutex.c
> @@ -1390,11 +1390,21 @@ rt_mutex_fastunlock(struct rt_mutex *lock,
>  	} else {
>  		bool deboost = slowfn(lock, &wake_q);
>  
> -		wake_up_q(&wake_q);
> -
> -		/* Undo pi boosting if necessary: */
> +		/*
> +		 * Undo pi boosting (if necessary) and wake top waiter.
> +		 *
> +		 * We should deboost before waking the high-prio task such that
> +		 * we don't run two tasks with the 'same' state. This however
> +		 * can lead to prio-inversion if we would get preempted after
> +		 * the deboost but before waking our high-prio task, hence the
> +		 * preempt_disable.
> +		 */
> +		preempt_disable();
>  		if (deboost)
>  			rt_mutex_adjust_prio(current);
> +
> +		wake_up_q(&wake_q);
> +		preempt_enable();
>  	}
>  }

So the preempt_disable() is to allow us to set current back to its normal
priority first, before waking up the other task, because we don't want two
tasks at the same priority?
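If I'm reading the comment right, the window the preempt_disable() closes
is the one in the deboost-first ordering without it, something like this
(just my sketch of the interleaving, using the names from the patch):

	rt_mutex_adjust_prio(current);	/* current drops its boost */

		--> preempted here by some medium-prio task M

	/*
	 * M now runs while the top waiter W (high prio) is still asleep,
	 * because we never reached the wakeup: priority inversion.
	 */
	wake_up_q(&wake_q);		/* W finally woken, too late */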
Just remember, calling preempt_disable() is semantically the same as
making yourself the highest-priority task on the CPU. Thus the above fix
raises current to that highest "priority" (preempt_disable()), so that the
deboost only really takes effect when that ceiling is removed
(preempt_enable()), after we have woken the task that will be lower in
priority than current until current calls preempt_enable(). This will of
course keep a task that is higher in priority than both current and the
woken task from running until this is all completed.

What's the point of swapping the deboost and the wake-up again? Maybe I'm
missing something; what exactly do you mean by "same state"?
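For reference, here's how I'm picturing the fixed path, with the effective
priority of current annotated at each step (again just my sketch around
the code in the patch):

	preempt_disable();	/* current: effectively highest prio on CPU */
	if (deboost)
		rt_mutex_adjust_prio(current);	/* prio dropped, but we
						 * still can't be preempted */
	wake_up_q(&wake_q);	/* waiter is runnable, but stays behind
				 * "highest" current until... */
	preempt_enable();	/* ...here, where the scheduler picks the
				 * truly highest-prio task to run */

-- Steve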