Date: Mon, 2 Nov 2020 18:12:38 +0000
From: Christoph Hellwig
To: Sebastian Andrzej Siewior
Cc: Jens Axboe, Sagi Grimberg, Christoph Hellwig, linux-block@vger.kernel.org,
    Thomas Gleixner, David Runge, linux-rt-users@vger.kernel.org,
    linux-kernel@vger.kernel.org, Peter Zijlstra, Daniel Wagner, Mike Galbraith
Subject: Re: [PATCH 3/3] blk-mq: Use llist_head for blk_cpu_done
Message-ID: <20201102181238.GA17806@infradead.org>
In-Reply-To: <20201102095533.fxc2xpauzsoju7cm@linutronix.de>
References: <20201029131212.dsulzvsb6pahahbs@linutronix.de>
 <20201029140536.GA6376@infradead.org>
 <20201029145623.3zry7o6nh6ks5tjj@linutronix.de>
 <20201029145743.GA19379@infradead.org>
 <20201029210103.ocufuvj6i4idf5hj@linutronix.de>
 <20201031104108.wjjdiklqrgyqmj54@linutronix.de>
 <3bbfb5e1-c5d7-8f3b-4b96-6dc02be0550d@kernel.dk>
 <20201102095533.fxc2xpauzsoju7cm@linutronix.de>

On Mon, Nov 02, 2020 at 10:55:33AM +0100, Sebastian Andrzej Siewior wrote:
> On 2020-10-31 09:00:49 [-0600], Jens Axboe wrote:
> > There really aren't any rules for this, and it's perfectly legit to
> > complete from process context. Maybe you're a kthread driven driver and
> > that's how you handle completions. The block completion path has always
> > been hard IRQ safe, but possible to call from anywhere.
>
> I'm not trying to put restrictions on or forbid completions from a
> kthread. I'm trying to avoid the pointless softirq dance for no added
> value. We could, to not break the assumption you just mentioned, provide
>
> |static inline void blk_mq_complete_request_local(struct request *rq)
> |{
> |	rq->q->mq_ops->complete(rq);
> |}
>
> so that completions issued from process context (like those from
> usb-storage) don't end up waking `ksoftirqd' (running at SCHED_OTHER)
> to complete the requests, but rather perform the completion right away.
> The softirq dance makes no sense here.

Agreed.  But I don't think your blk_mq_complete_request_local above is
all that useful either, as ->complete is defined by the caller, so we
could just do a direct call.  Basically we should just return false from
blk_mq_complete_request_remote after updating the state when called from
process context.  But given that IIRC we are not supposed to check what
context we are called from, we'll need a helper just for updating the
state instead and ensure the driver uses the right helper.

Now of course we might have process context callers that still want to
bounce to the submitting CPU, but in that case we should go directly to
a workqueue or similar.

Either way, doing this properly will probably involve an audit of all
drivers, but I think that is worth it.
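
[Editor's note: purely for illustration, a minimal sketch of the split being
suggested above. The helper name blk_mq_set_request_complete() and the
driver-side function are assumptions for this sketch, not code from the
patch series under discussion.]

#include <linux/blk-mq.h>

/*
 * Sketch only: a helper that does nothing but mark the request as
 * complete, so a process-context driver can update the request state
 * and then invoke ->complete() directly instead of bouncing through
 * the softirq/IPI machinery.
 */
static inline void blk_mq_set_request_complete(struct request *rq)
{
	WRITE_ONCE(rq->state, MQ_RQ_COMPLETE);
}

/* Hypothetical kthread-driven driver completing in process context. */
static void foo_complete_rq_from_task(struct request *rq)
{
	blk_mq_set_request_complete(rq);
	rq->q->mq_ops->complete(rq);	/* direct call, no softirq dance */
}

[Whether a driver then calls ->complete() directly as above, or defers to a
workqueue when it still needs to run on the submitting CPU, is exactly what
the per-driver audit mentioned in the reply would have to decide.]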