Date: Thu, 1 Mar 2018 08:15:44 -0700
From: Keith Busch
To: "jianchao.wang"
Cc: Sagi Grimberg, Christoph Hellwig, axboe@fb.com, linux-kernel@vger.kernel.org, linux-nvme@lists.infradead.org
Subject: Re: [PATCH V2] nvme-pci: assign separate irq vectors for adminq and ioq0
Message-ID: <20180301151544.GA17676@localhost.localdomain>
References: <1519832921-13915-1-git-send-email-jianchao.w.wang@oracle.com> <20180228164726.GB16536@lst.de> <66e4ad3e-4019-13ec-94c0-e168cc1d95b4@oracle.com>
In-Reply-To: <66e4ad3e-4019-13ec-94c0-e168cc1d95b4@oracle.com>

On Thu, Mar 01, 2018 at 06:05:53PM +0800, jianchao.wang wrote:
> When the adminq is free, the ioq0 irq completion path has to invoke
> nvme_irq twice: once for itself, and once for the adminq completion
> irq action.

Let's be a little more careful with the terminology when referring to
spec-defined features: there is no such thing as "ioq0". The IO queues
start at 1; the admin queue is the queue at index 0.

> We are trying to save every cpu cycle across the nvme host path, so
> why waste nvme_irq cycles here? If we have enough vectors, we could
> allocate another irq vector for the adminq to avoid this.

Please understand that the _overwhelming_ majority of the time spent on
IRQ handling goes to the context switches. There's a reason you're not
able to measure a perf difference between IOQ1 and IOQ2: the number of
CPU cycles needed to chain a second action is negligible.
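For reference, here is a rough sketch of what the shared-vector case
boils down to (hypothetical names, not the actual nvme-pci code): both
queues register on vector 0 with IRQF_SHARED, so the kernel chains the
two irqactions, and the action with no pending completion bails out
after a single phase-bit check.

    /*
     * Illustrative only.  The "no work" path of the chained action is
     * one load and one compare before returning IRQ_NONE.
     */
    #include <linux/interrupt.h>
    #include <linux/pci.h>
    #include <linux/types.h>

    struct demo_cqe {
            __le16 status;                  /* phase tag in bit 0, NVMe-style */
    };

    struct demo_queue {
            struct demo_cqe *cqes;          /* completion queue ring */
            u16 cq_head;
            u8 cq_phase;
    };

    static bool demo_cqe_pending(struct demo_queue *q)
    {
            /* One load and a compare: the entire "nothing to do" cost. */
            return (le16_to_cpu(q->cqes[q->cq_head].status) & 1) == q->cq_phase;
    }

    static irqreturn_t demo_queue_irq(int irq, void *data)
    {
            struct demo_queue *q = data;

            if (!demo_cqe_pending(q))
                    return IRQ_NONE;        /* chained action with no work */

            /* ... reap completions, advance cq_head, ring the CQ doorbell ... */
            return IRQ_HANDLED;
    }

    static int demo_setup_shared_vector(struct pci_dev *pdev,
                                        struct demo_queue *adminq,
                                        struct demo_queue *ioq1)
    {
            int irq = pci_irq_vector(pdev, 0);      /* both queues on vector 0 */
            int ret;

            ret = request_irq(irq, demo_queue_irq, IRQF_SHARED,
                              "demo-adminq", adminq);
            if (ret)
                    return ret;

            return request_irq(irq, demo_queue_irq, IRQF_SHARED,
                               "demo-ioq1", ioq1);
    }

The expensive part is taking the interrupt and getting into the handler
at all; once there, the extra chained call is in the noise.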