From: Wander Lairson Costa <wander@redhat.com>
To: Peter Zijlstra, Ingo Molnar, Will Deacon, Waiman Long, Boqun Feng,
 linux-kernel@vger.kernel.org (open list:LOCKING PRIMITIVES)
Cc: Wander Lairson Costa
Subject: [PATCH] rtmutex: ensure we wake up the top waiter
Date: Tue, 17 Jan 2023 14:26:49 -0300
Message-Id: <20230117172649.52465-1-wander@redhat.com>

In task_blocked_on_lock() we save the owner, release the wait_lock and
call rt_mutex_adjust_prio_chain(), which re-acquires the wait_lock. In
the window before the wait_lock is taken again, the owner may release
the lock and deboost.
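Roughly, the window looks like this (a condensed sketch of the relevant
locking steps in task_blocks_on_rt_mutex() in kernel/locking/rtmutex.c,
not the literal kernel code; most of the function is elided and the
variable names follow the function's own locals):

	/*
	 * Blocking task: the waiter was enqueued and the owner was
	 * recorded while holding wait_lock ...
	 */
	raw_spin_unlock_irq(&lock->wait_lock);	/* wait_lock dropped */

	/*
	 * WINDOW: the owner can release the lock and deboost right
	 * here, before the chain walk re-takes wait_lock.
	 */
	res = rt_mutex_adjust_prio_chain(owner, chwalk, lock,
					 next_lock, waiter, task);

	raw_spin_lock_irq(&lock->wait_lock);	/* re-taken afterwards */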
In the requeue phase, the waiter may initially be at the top of the
queue, but after it is dequeued and requeued that may no longer be
true. This scenario ends up waking the wrong task, which will see it
is not the top waiter and go back to sleep. We are then left in a
livelock: no task is holding the lock, but no task ever acquires it.

We can reproduce the bug in PREEMPT_RT with stress-ng:

while true; do
    stress-ng --sched deadline --sched-period 1000000000 \
        --sched-runtime 800000000 --sched-deadline \
        1000000000 --mmapfork 23 -t 20
done

Signed-off-by: Wander Lairson Costa <wander@redhat.com>
---
 kernel/locking/rtmutex.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c
index 010cf4e6d0b8..728f434de2bb 100644
--- a/kernel/locking/rtmutex.c
+++ b/kernel/locking/rtmutex.c
@@ -901,8 +901,9 @@ static int __sched rt_mutex_adjust_prio_chain(struct task_struct *task,
 	 * then we need to wake the new top waiter up to try
 	 * to get the lock.
 	 */
-	if (prerequeue_top_waiter != rt_mutex_top_waiter(lock))
-		wake_up_state(waiter->task, waiter->wake_state);
+	top_waiter = rt_mutex_top_waiter(lock);
+	if (prerequeue_top_waiter != top_waiter)
+		wake_up_state(top_waiter->task, top_waiter->wake_state);
 	raw_spin_unlock_irq(&lock->wait_lock);
 	return 0;
 }
-- 
2.39.0
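For reference, the woken task's side of this, which sends a wrongly
woken non-top waiter straight back to sleep, looks roughly like the
following (a condensed sketch of the rt_mutex_slowlock_block() wait
loop; error handling and most details are elided):

	for (;;) {
		/*
		 * try_to_take_rt_mutex() only succeeds when the task
		 * is entitled to the lock (e.g. it is the top waiter),
		 * so a wrongly woken non-top waiter fails here ...
		 */
		if (try_to_take_rt_mutex(lock, current, waiter))
			break;

		/*
		 * ... and simply schedules out again. If the real top
		 * waiter was never woken, the lock stays free with no
		 * task acquiring it: the livelock described above.
		 */
		schedule();
	}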