Subject: Re: [PATCH 3/3] blk-mq: Use llist_head for blk_cpu_done
From: Sagi Grimberg
To: Christoph Hellwig, Sebastian Andrzej Siewior
Cc: linux-block@vger.kernel.org, Thomas Gleixner, David Runge,
    linux-rt-users@vger.kernel.org, Jens Axboe, linux-kernel@vger.kernel.org,
    Peter Zijlstra, Daniel Wagner, Mike Galbraith
Date: Thu, 29 Oct 2020 13:03:26 -0700
In-Reply-To: <20201029145743.GA19379@infradead.org>

>>> Well, usb-storage obviously seems to do it, and the block layer
>>> does not prohibit it.
>>
>> Also loop, nvme-tcp and then I stopped looking.
>> Any objections about adding local_bh_disable() around it?
>
> To me it seems like the whole IPI plus potentially softirq dance is
> a little pointless when completing from process context.

I agree.

> Sagi, any opinion on that from the nvme-tcp POV?

nvme-tcp should (almost) always complete from the context that matches
rq->mq_ctx->cpu, as the thread that processes incoming completions (per
hctx) should be affinitized to match it (unless CPUs come and go). So
for nvme-tcp I don't expect blk_mq_complete_need_ipi() to return true
in normal operation. That leaves teardowns+aborts, which aren't very
interesting here. I would note that nvme-tcp does not go to sleep after
completing every I/O the way Sebastian indicated usb-storage does.

Having said that, today the network stack calls nvme_tcp_data_ready()
in napi context (softirq), which in turn triggers the queue thread to
handle network rx (and complete the I/O). It has been measured recently
that running the rx context directly in softirq saves some latency
(possible because the nvme-tcp rx context is non-blocking).

So I'd think that patch #2 is unnecessary and would just add overhead
for nvme-tcp. Do note that the napi softirq cpu mapping depends on the
RSS steering, which is unlikely to match rq->mq_ctx->cpu, hence if
completed from napi context, nvme-tcp will probably always take the
IPI path.
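
For readers following the thread, the check in question looks roughly
like the sketch below. This is paraphrased from the v5.9-era
block/blk-mq.c rather than quoted verbatim, so treat the exact flag
handling as approximate and check your own tree:

/*
 * Sketch of blk_mq_complete_need_ipi() as discussed above,
 * paraphrased from the v5.9-era block/blk-mq.c (not verbatim).
 * Returns true only when the completion should be bounced to the
 * submitting CPU via IPI.
 */
static inline bool blk_mq_complete_need_ipi(struct request *rq)
{
	int cpu = raw_smp_processor_id();

	if (!IS_ENABLED(CONFIG_SMP) ||
	    !test_bit(QUEUE_FLAG_SAME_COMP, &rq->q->queue_flags))
		return false;

	/*
	 * Complete locally if we are already on the submitting CPU,
	 * or if we share a cache with it and strict same-CPU
	 * completion was not requested.
	 */
	if (cpu == rq->mq_ctx->cpu ||
	    (!test_bit(QUEUE_FLAG_SAME_FORCE, &rq->q->queue_flags) &&
	     cpus_share_cache(cpu, rq->mq_ctx->cpu)))
		return false;

	/* Don't try to IPI an offline CPU. */
	return cpu_online(rq->mq_ctx->cpu);
}

With an affinitized per-queue nvme-tcp worker, cpu == rq->mq_ctx->cpu
holds and the function returns false, which is the "normal operation"
case above. A napi-context completion instead runs on whatever CPU RSS
steered the packet to, so the two CPUs rarely match and the IPI branch
is taken.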