Date: Wed, 4 Aug 2021 17:47:43 +0200
From: Sebastian Andrzej Siewior
To: Jens Axboe
Cc: Peter Zijlstra, Daniel Wagner, Thomas Gleixner, LKML,
 linux-rt-users@vger.kernel.org, Steven Rostedt
Subject: Re: [ANNOUNCE] v5.14-rc4-rt4
Message-ID: <20210804154743.niogqvnladdkfgi2@linutronix.de>
References: <20210802162750.santic4y6lzcet5c@linutronix.de>
 <20210804082418.fbibprcwtzyt5qax@beryllium.lan>
 <20210804104340.fhdjwn3hruymu3ml@linutronix.de>
 <20210804104803.4nwxi74sa2vwiujd@linutronix.de>
 <20210804110057.chsvt7l5xpw7bo5r@linutronix.de>
 <20210804131731.GG8057@worktop.programming.kicks-ass.net>
 <4f549344-1040-c677-6a6a-53e243c5f364@kernel.dk>
 <20210804153308.oasahcxjmcw7vivo@linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
X-Mailing-List: linux-kernel@vger.kernel.org

On 2021-08-04 09:39:30 [-0600], Jens Axboe wrote:
> I'm confused, the waitqueue locks are always IRQ disabling.

spin_lock_irq() does not disable interrupts on -RT.
The patch above produces:

| BUG: sleeping function called from invalid context at kernel/locking/spinlock_rt.c:35
| in_atomic(): 0, irqs_disabled(): 1, non_block: 0, pid: 2020, name: iou-wrk-2018
| 1 lock held by iou-wrk-2018/2020:
|  #0: ffff888111a47de8 (&hash->wait){+.+.}-{0:0}, at: io_worker_handle_work+0x443/0x630
| irq event stamp: 10
| hardirqs last enabled at (9): [] _raw_spin_unlock_irqrestore+0x28/0x70
| hardirqs last disabled at (10): [] _raw_spin_lock_irq+0x3e/0x40
| softirqs last enabled at (0): [] copy_process+0x8f8/0x2020
| softirqs last disabled at (0): [<0000000000000000>] 0x0
| CPU: 5 PID: 2020 Comm: iou-wrk-2018 Tainted: G W 5.14.0-rc4-rt4+ #97
| Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.14.0-2 04/01/2014
| Call Trace:
|  dump_stack_lvl+0x45/0x59
|  ___might_sleep.cold+0xa6/0xb6
|  rt_spin_lock+0x35/0xc0
|  ? io_worker_handle_work+0x443/0x630
|  io_worker_handle_work+0x443/0x630
|  io_wqe_worker+0xb4/0x340
|  ? lockdep_hardirqs_on_prepare+0xd4/0x170
|  ? _raw_spin_unlock_irqrestore+0x28/0x70
|  ? _raw_spin_unlock_irqrestore+0x28/0x70
|  ? io_worker_handle_work+0x630/0x630
|  ? rt_mutex_slowunlock+0x2ba/0x310
|  ? io_worker_handle_work+0x630/0x630
|  ret_from_fork+0x22/0x30

But indeed, you are right, my snippet breaks non-RT.
So this then maybe:

diff --git a/fs/io-wq.c b/fs/io-wq.c
index 57d3cdddcdb3e..0b931ac3c83e6 100644
--- a/fs/io-wq.c
+++ b/fs/io-wq.c
@@ -384,7 +384,7 @@ static void io_wait_on_hash(struct io_wqe *wqe, unsigned int hash)
 {
 	struct io_wq *wq = wqe->wq;
 
-	spin_lock(&wq->hash->wait.lock);
+	spin_lock_irq(&wq->hash->wait.lock);
 	if (list_empty(&wqe->wait.entry)) {
 		__add_wait_queue(&wq->hash->wait, &wqe->wait);
 		if (!test_bit(hash, &wq->hash->map)) {
@@ -392,7 +392,7 @@ static void io_wait_on_hash(struct io_wqe *wqe, unsigned int hash)
 			list_del_init(&wqe->wait.entry);
 		}
 	}
-	spin_unlock(&wq->hash->wait.lock);
+	spin_unlock_irq(&wq->hash->wait.lock);
 }
 
 static struct io_wq_work *io_get_next_work(struct io_wqe *wqe)
@@ -430,9 +430,9 @@ static struct io_wq_work *io_get_next_work(struct io_wqe *wqe)
 	}
 
 	if (stall_hash != -1U) {
-		raw_spin_unlock(&wqe->lock);
+		raw_spin_unlock_irq(&wqe->lock);
 		io_wait_on_hash(wqe, stall_hash);
-		raw_spin_lock(&wqe->lock);
+		raw_spin_lock_irq(&wqe->lock);
 	}
 
 	return NULL;

(This is on top of the patch you sent earlier and that Daniel Cc'd me on, after I checked that the problem/warning still exists.)

Sebastian