Date: Mon, 19 Sep 2022 16:35:56 +0200
From: Christoph Hellwig
To: Jens Axboe
Cc: Liu Song, kbusch@kernel.org, hch@lst.de, sagi@grimberg.me, linux-nvme@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: Re: [RFC PATCH] nvme: request remote is usually not involved for nvme devices
Message-ID: <20220919143556.GA28122@lst.de>
References: <1663432858-99743-1-git-send-email-liusong@linux.alibaba.com> <7b28925a-cbee-620f-fde7-d16f256836cc@linux.alibaba.com> <894e18a4-4504-df48-6429-a04c222ca064@kernel.dk>
In-Reply-To: <894e18a4-4504-df48-6429-a04c222ca064@kernel.dk>
X-Mailing-List: linux-kernel@vger.kernel.org

On Mon, Sep 19, 2022 at 08:10:31AM -0600, Jens Axboe wrote:
> I'm not disagreeing with any of that; my point is just that you're
> hacking around this in the nvme driver. That is problematic whenever
> core changes happen, because then we have to touch individual drivers.
> While the expectation is that there are no remote IPI completions for
> NVMe, queue-starved devices do exist, and those do see remote
> completions.
>
> This optimization belongs in the blk-mq core, not in nvme. I do think it
> makes sense; you just need to solve it in blk-mq rather than in the nvme
> driver. I'd also really like to see solid numbers to justify it.

And btw, having more than one core per queue is quite common in nvme.
Even many enterprise SSDs have only 64 queues, and some low-end consumer
ones have so few queues that there are not enough for the core count of
even a mid-range desktop CPU.
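The queue-sharing situation Hellwig describes can be illustrated with a toy calculation (not kernel code; the simple modulo mapping below only mimics the spirit of the kernel's fallback CPU-to-queue assignment, and the queue/core counts are hypothetical examples):

```python
def map_queues(nr_cpus, nr_queues):
    """Toy round-robin CPU -> hardware-queue mapping (illustrative only).

    When nr_queues < nr_cpus, several CPUs must share one queue, so a
    completion interrupt can land on a different CPU than the submitter,
    i.e. a remote completion.
    """
    return {cpu: cpu % nr_queues for cpu in range(nr_cpus)}

# Hypothetical low-end consumer NVMe SSD with 8 I/O queues on a 16-core desktop:
mapping = map_queues(nr_cpus=16, nr_queues=8)

# Group CPUs by the queue they were assigned to.
sharing = {}
for cpu, queue in mapping.items():
    sharing.setdefault(queue, []).append(cpu)

# Every queue is shared by two cores here, e.g. CPUs 1 and 9 both use
# queue 1, so a request submitted on CPU 9 may be completed on CPU 1.
print(sharing[1])  # -> [1, 9]
```

With 64 queues and 16 cores the mapping is one-to-one and no sharing occurs, which is the common expectation for NVMe; the point of the quoted discussion is that the no-sharing case cannot be assumed in driver code.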