Date: Thu, 5 Aug 2021 11:17:41 +0200
From: Peter Zijlstra
To: Jens Axboe
Cc: io-uring, LKML, Daniel Wagner
Subject: Re: [PATCH] io-wq: remove GFP_ATOMIC allocation off schedule out path
Message-ID: <20210805091741.GB22037@worktop.programming.kicks-ass.net>
X-Mailing-List: linux-kernel@vger.kernel.org

On Wed, Aug 04, 2021 at 08:43:43AM -0600, Jens Axboe wrote:
> Daniel reports that the v5.14-rc4-rt4 kernel throws a BUG when running
> stress-ng:
>
> | [   90.202543] BUG: sleeping function called from invalid context at kernel/locking/spinlock_rt.c:35
> | [   90.202549] in_atomic(): 1, irqs_disabled(): 1, non_block: 0, pid: 2047, name: iou-wrk-2041
> | [   90.202555] CPU: 5 PID: 2047 Comm: iou-wrk-2041 Tainted: G        W 5.14.0-rc4-rt4+ #89
> | [   90.202559] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.14.0-2 04/01/2014
> | [   90.202561] Call Trace:
> | [   90.202577]  dump_stack_lvl+0x34/0x44
> | [   90.202584]  ___might_sleep.cold+0x87/0x94
> | [   90.202588]  rt_spin_lock+0x19/0x70
> | [   90.202593]  ___slab_alloc+0xcb/0x7d0
> | [   90.202598]  ? newidle_balance.constprop.0+0xf5/0x3b0
> | [   90.202603]  ? dequeue_entity+0xc3/0x290
> | [   90.202605]  ? io_wqe_dec_running.isra.0+0x98/0xe0
> | [   90.202610]  ? pick_next_task_fair+0xb9/0x330
> | [   90.202612]  ? __schedule+0x670/0x1410
> | [   90.202615]  ? io_wqe_dec_running.isra.0+0x98/0xe0
> | [   90.202618]  kmem_cache_alloc_trace+0x79/0x1f0
> | [   90.202621]  io_wqe_dec_running.isra.0+0x98/0xe0
> | [   90.202625]  io_wq_worker_sleeping+0x37/0x50
> | [   90.202628]  schedule+0x30/0xd0
> | [   90.202630]  schedule_timeout+0x8f/0x1a0
> | [   90.202634]  ? __bpf_trace_tick_stop+0x10/0x10
> | [   90.202637]  io_wqe_worker+0xfd/0x320
> | [   90.202641]  ? finish_task_switch.isra.0+0xd3/0x290
> | [   90.202644]  ? io_worker_handle_work+0x670/0x670
> | [   90.202646]  ? io_worker_handle_work+0x670/0x670
> | [   90.202649]  ret_from_fork+0x22/0x30
>
> which is due to the RT kernel not liking a GFP_ATOMIC allocation inside
> a raw spinlock. Besides that not working on RT, doing any kind of
> allocation from inside schedule() is kind of nasty and should be avoided
> if at all possible.
>
> This particular path happens when an io-wq worker goes to sleep, and we
> need a new worker to handle pending work. We currently allocate a small
> data item to hold the information we need to create a new worker, but we
> can instead include this data in the io_worker struct itself and just
> protect it with a single bit lock. We only really need one per worker
> anyway, as we will have run pending work between two sleep cycles.
>
> https://lore.kernel.org/lkml/20210804082418.fbibprcwtzyt5qax@beryllium.lan/
> Reported-by: Daniel Wagner
> Signed-off-by: Jens Axboe

Thanks!

Acked-by: Peter Zijlstra (Intel)