Date: Wed, 7 Dec 2016 17:49:42 -0500
From: Keith Busch
To: Dan Streetman
Cc: Jens Axboe, linux-nvme@lists.infradead.org, linux-kernel, Dan Streetman
Subject: Re: [PATCH] nvme: use the correct msix vector for each queue
Message-ID: <20161207224941.GA25959@localhost.localdomain>
References: <20161207220348.8572-1-ddstreet@ieee.org> <20161207224414.GE22478@localhost.localdomain>

On Wed, Dec 07, 2016 at 05:36:00PM -0500, Dan Streetman wrote:
> On Wed, Dec 7, 2016 at 5:44 PM, Keith Busch wrote:
> > pci_alloc_irq_vectors doesn't know you intend to make the first
> > vector special, so it's going to come up with a CPU affinity from
> > blk_mq_pci_map_queues that clashes with what you've programmed in the
> > IO completion queues.
>
> I don't follow. You're saying you mean to share cq_vector 0 between
> the admin queue and io queue 1?

I'm just saying that blk-mq's hctx mapping will end up choosing a queue
whose vector is mapped to a different CPU, and we don't want that.

We are currently sharing the first IO queue's interrupt vector with the
admin queue's on purpose. Are you saying there's something wrong with
that?