Message-ID: <8b21ad64-ea9c-84f2-c798-222c9383e3de@rasmusvillemoes.dk>
Date: Tue, 29 Mar 2022 10:33:19 +0200
Subject: Re: [RFC PATCH 0/2] RT scheduling policies for workqueues
From: Rasmus Villemoes
To: Sebastian Andrzej Siewior, Tejun Heo
Cc: Marc Kleine-Budde, Peter Hurley, Lai Jiangshan, Esben Haabendal,
 Steven Walter, linux-kernel@vger.kernel.org, Oleksij Rempel,
 Pengutronix Kernel Team, André Pribil, Jiri Slaby,
 linux-rt-users@vger.kernel.org
References: <20220323145600.2156689-1-linux@rasmusvillemoes.dk>
 <20220328100927.5ax34nea7sp7jdsy@pengutronix.de>
X-Mailing-List: linux-kernel@vger.kernel.org

On 29/03/2022 08.30, Sebastian Andrzej Siewior wrote:
> On 2022-03-28 07:39:25 [-1000], Tejun Heo wrote:
>> Hello,
> Hi,
>
>> I wonder whether it'd be useful to provide a set of wrappers which
>> can make switching between workqueue and kworker easy.
>> Semantics-wise, they're already mostly aligned and it shouldn't be
>> too difficult to e.g. make an unbounded workqueue be backed by a
>> dedicated kthread_worker instead of shared pool depending on a flag,
>> or even allow switching dynamically.

Well, that would certainly not make it any easier for userspace to
discover the thread it needs to chrt().

> This could work. For the tty layer it could use 'lowlatency' attribute
> to decide which implementation makes sense.

I have patches that merely touch the tty layer, but tying it to the
lowlatency attribute is quite painful (which has also come up in
previous discussions on this) - because the lowlatency flag can be
flipped from userspace, but synchronizing which variant is used and
switching dynamically is at least beyond my skills to make work
robustly. So in my patches, the choice is made at open() time.

However, I'm still not convinced code like

struct tty_bufhead {
	struct tty_buffer *head;	/* Queue head */
	struct work_struct work;
+	struct kthread_work kwork;
+	struct kthread_worker *kworker;

bool tty_buffer_restart_work(struct tty_port *port)
{
-	return queue_work(system_unbound_wq, &port->buf.work);
+	struct tty_bufhead *buf = &port->buf;
+
+	if (buf->kworker)
+		return kthread_queue_work(buf->kworker, &buf->kwork);
+	else
+		return queue_work(system_unbound_wq, &buf->work);
}

etc. is the way to go.

===

Here's another idea: In an ideal world, the irq thread itself [people
caring about latency use threaded interrupts] could just do the work
immediately - then the admin only has one kernel thread to properly
configure.
However, as Sebastian pointed out, doing that leads to a lockdep splat
[1], and it also means that there's no work item involved, so some
other thread calling tty_buffer_flush_work() might not actually wait
for a concurrent flush_to_ldisc() to finish.

So could we create a

struct hybrid_work { };

which, when enqueued, does something like

bool current_is_irqthread(void)
{
	return in_task() && kthread_func(current) == irq_thread;
}

hwork_queue(struct hybrid_work *hwork, struct workqueue_struct *wq)
{
	if (current_is_irqthread())
		task_work_add(current, &hwork->twork);
	else
		queue_work(wq, &hwork->work);
}

(with extra bookkeeping so _flush and _cancel_sync methods can also be
created). It would require irqthread to learn to run its queued
task_works in its main loop, which in turn would require finding some
other way to do the irq_thread_dtor() cleanup, but that should be
doable.

While the implementation of hybrid_work might be a bit complex, I
think this would have potential for being used in other situations,
and for the users, the API would be as simple as the current
workqueue/struct kwork APIs. By letting the irq thread do more/all of
the work, we'd probably also win some latency due to fewer threads
involved and better cache locality. And the admin/BSP is already
setting the rt priorities of the [irq/...] threads.

Rasmus

[1] https://lore.kernel.org/linux-rt-users/20180711080957.f6txdmzrrrrdm7ig@linutronix.de/
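
PS: for concreteness, the hybrid_work sketch above might be fleshed
out roughly as below. This is untested pseudocode: everything except
task_work_add(), queue_work(), kthread_func() and in_task() is a
made-up name, it ignores the flush/cancel bookkeeping mentioned above,
and it assumes irqthread has learned to run its task_works.

struct hybrid_work {
	struct work_struct work;	/* used when queued from ordinary context */
	struct callback_head twork;	/* used when queued from an irq thread */
	void (*func)(struct hybrid_work *hwork);
};

static bool current_is_irqthread(void)
{
	/* irq_thread() is static in kernel/irq/manage.c, so this would
	 * need a small helper exported from there. */
	return in_task() && kthread_func(current) == irq_thread;
}

bool hwork_queue(struct hybrid_work *hwork, struct workqueue_struct *wq)
{
	if (current_is_irqthread())
		return !task_work_add(current, &hwork->twork, TWA_NONE);
	return queue_work(wq, &hwork->work);
}

Both branches end up invoking hwork->func(), either from the irq
thread's own main loop or from a workqueue worker, so callers see one
queueing API regardless of context.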