From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, "Peter Zijlstra (Intel)",
    juri.lelli@arm.com, bigeasy@linutronix.de, xlpang@redhat.com,
    rostedt@goodmis.org, mathieu.desnoyers@efficios.com,
    jdesfossez@efficios.com, bristot@redhat.com, Thomas Gleixner,
    Sasha Levin
Subject: [PATCH 4.9 101/241] rtmutex: Fix PI chain order integrity
Date: Mon, 19 Mar 2018 19:06:06 +0100
Message-Id: <20180319180755.385200641@linuxfoundation.org>
X-Mailer: git-send-email 2.16.2
In-Reply-To: <20180319180751.172155436@linuxfoundation.org>
References: <20180319180751.172155436@linuxfoundation.org>
User-Agent: quilt/0.65
X-stable: review
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Sender: linux-kernel-owner@vger.kernel.org
Precedence: bulk
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org

4.9-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Peter Zijlstra

[ Upstream commit e0aad5b44ff5d28ac1d6ae70cdf84ca228e889dc ]

rt_mutex_waiter::prio is a copy of task_struct::prio which is updated
during the PI chain walk, such that the PI chain order isn't messed up
by (asynchronous) task state updates.

Currently rt_mutex_waiter_less() uses task state for deadline tasks;
this is broken, since the task state can, as said above, change
asynchronously, causing the RB tree order to change without actual
tree update -> FAIL.

Fix this by also copying the deadline into the rt_mutex_waiter state
and updating it along with its prio field.

Ideally we would also force PI chain updates whenever DL tasks update
their deadline parameter, but for first approximation this is less
broken than it was.

Signed-off-by: Peter Zijlstra (Intel)
Cc: juri.lelli@arm.com
Cc: bigeasy@linutronix.de
Cc: xlpang@redhat.com
Cc: rostedt@goodmis.org
Cc: mathieu.desnoyers@efficios.com
Cc: jdesfossez@efficios.com
Cc: bristot@redhat.com
Link: http://lkml.kernel.org/r/20170323150216.403992539@infradead.org
Signed-off-by: Thomas Gleixner
Signed-off-by: Sasha Levin
Signed-off-by: Greg Kroah-Hartman
---
 kernel/locking/rtmutex.c        |   29 +++++++++++++++++++++++++++--
 kernel/locking/rtmutex_common.h |    1 +
 2 files changed, 28 insertions(+), 2 deletions(-)

--- a/kernel/locking/rtmutex.c
+++ b/kernel/locking/rtmutex.c
@@ -236,8 +236,7 @@ rt_mutex_waiter_less(struct rt_mutex_wai
 	 * then right waiter has a dl_prio() too.
 	 */
 	if (dl_prio(left->prio))
-		return dl_time_before(left->task->dl.deadline,
-				      right->task->dl.deadline);
+		return dl_time_before(left->deadline, right->deadline);
 
 	return 0;
 }
@@ -704,7 +703,26 @@ static int rt_mutex_adjust_prio_chain(st
 
 	/* [7] Requeue the waiter in the lock waiter tree. */
 	rt_mutex_dequeue(lock, waiter);
+
+	/*
+	 * Update the waiter prio fields now that we're dequeued.
+	 *
+	 * These values can have changed through either:
+	 *
+	 *   sys_sched_set_scheduler() / sys_sched_setattr()
+	 *
+	 * or
+	 *
+	 *   DL CBS enforcement advancing the effective deadline.
+	 *
+	 * Even though pi_waiters also uses these fields, and that tree is only
+	 * updated in [11], we can do this here, since we hold [L], which
+	 * serializes all pi_waiters access and rb_erase() does not care about
+	 * the values of the node being removed.
+	 */
 	waiter->prio = task->prio;
+	waiter->deadline = task->dl.deadline;
+
 	rt_mutex_enqueue(lock, waiter);
 
 	/* [8] Release the task */
@@ -831,6 +849,8 @@ static int rt_mutex_adjust_prio_chain(st
 static int try_to_take_rt_mutex(struct rt_mutex *lock, struct task_struct *task,
 				struct rt_mutex_waiter *waiter)
 {
+	lockdep_assert_held(&lock->wait_lock);
+
 	/*
 	 * Before testing whether we can acquire @lock, we set the
 	 * RT_MUTEX_HAS_WAITERS bit in @lock->owner. This forces all
@@ -958,6 +978,8 @@ static int task_blocks_on_rt_mutex(struc
 	struct rt_mutex *next_lock;
 	int chain_walk = 0, res;
 
+	lockdep_assert_held(&lock->wait_lock);
+
 	/*
 	 * Early deadlock detection. We really don't want the task to
 	 * enqueue on itself just to untangle the mess later. It's not
@@ -975,6 +997,7 @@ static int task_blocks_on_rt_mutex(struc
 	waiter->task = task;
 	waiter->lock = lock;
 	waiter->prio = task->prio;
+	waiter->deadline = task->dl.deadline;
 
 	/* Get the top priority waiter on the lock */
 	if (rt_mutex_has_waiters(lock))
@@ -1080,6 +1103,8 @@ static void remove_waiter(struct rt_mute
 	struct task_struct *owner = rt_mutex_owner(lock);
 	struct rt_mutex *next_lock;
 
+	lockdep_assert_held(&lock->wait_lock);
+
 	raw_spin_lock(&current->pi_lock);
 	rt_mutex_dequeue(lock, waiter);
 	current->pi_blocked_on = NULL;
--- a/kernel/locking/rtmutex_common.h
+++ b/kernel/locking/rtmutex_common.h
@@ -33,6 +33,7 @@ struct rt_mutex_waiter {
 	struct rt_mutex		*deadlock_lock;
 #endif
 	int prio;
+	u64 deadline;
 };
 
 /*
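
For readers following the changelog reasoning above, here is a minimal,
self-contained userspace sketch of the pattern the fix applies. It is not
part of the patch and the names (struct fake_task, struct fake_waiter,
waiter_snapshot_keys, waiter_less) are made up for illustration: the sort
keys are copied into the waiter while holding the lock that serializes
tree updates, so the comparator never reads fields that another context
may change asynchronously.

/*
 * Illustrative sketch only -- not kernel code.  Shows why comparing a
 * snapshot of (prio, deadline) keeps an ordered tree consistent even
 * when the underlying task state changes asynchronously.
 */
#include <stdbool.h>
#include <stdint.h>

struct fake_task {                 /* stand-in for task_struct */
	int prio;                  /* may change at any time (setscheduler) */
	uint64_t dl_deadline;      /* may change at any time (CBS)          */
};

struct fake_waiter {               /* stand-in for rt_mutex_waiter */
	struct fake_task *task;
	int prio;                  /* snapshot, updated only while requeueing */
	uint64_t deadline;         /* snapshot of task->dl_deadline           */
};

/* Take the snapshot; in the kernel this happens with wait_lock held. */
static void waiter_snapshot_keys(struct fake_waiter *w)
{
	w->prio = w->task->prio;
	w->deadline = w->task->dl_deadline;
}

/*
 * Tree comparator: reads only the snapshots, so the relative order of
 * already-enqueued waiters cannot change underneath the tree -- the
 * property the upstream fix restores.
 */
static bool waiter_less(const struct fake_waiter *left,
			const struct fake_waiter *right)
{
	if (left->prio < right->prio)
		return true;
	if (left->prio == right->prio)     /* e.g. deadline class */
		return left->deadline < right->deadline;
	return false;
}

int main(void)
{
	struct fake_task a = { .prio = -1, .dl_deadline = 100 };
	struct fake_task b = { .prio = -1, .dl_deadline = 200 };
	struct fake_waiter wa = { .task = &a }, wb = { .task = &b };

	waiter_snapshot_keys(&wa);
	waiter_snapshot_keys(&wb);

	/* Even if a's deadline advances now, the cached order stays valid. */
	a.dl_deadline = 300;
	return waiter_less(&wa, &wb) ? 0 : 1;  /* returns 0: wa still first */
}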