Date: Wed, 15 Jul 2015 14:48:00 -0400
From: Matthew Wilcox
To: Jens Axboe
Cc: Keith Busch, Bart Van Assche, ksummit-discuss@lists.linuxfoundation.org,
	linux-rdma@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-nvme@lists.infradead.org, Christoph Hellwig, Thomas Gleixner
Subject: Re: [Ksummit-discuss] [TECH TOPIC] IRQ affinity
Message-ID: <20150715184800.GL13681@linux.intel.com>
References: <20150715120708.GA24534@infradead.org> <55A67F11.1030709@sandisk.com> <55A697A3.3090305@kernel.dk>
In-Reply-To: <55A697A3.3090305@kernel.dk>

On Wed, Jul 15, 2015 at 11:25:55AM -0600, Jens Axboe wrote:
> On 07/15/2015 11:19 AM, Keith Busch wrote:
> >On Wed, 15 Jul 2015, Bart Van Assche wrote:
> >>* With blk-mq and scsi-mq optimal performance can only be achieved if
> >>  the relationship between MSI-X vector and NUMA node does not change
> >>  over time. This is necessary to allow a blk-mq/scsi-mq driver to
> >>  ensure that interrupts are processed on the same NUMA node as the
> >>  node on which the data structures for a communication channel have
> >>  been allocated. However, today there is no API that allows
> >>  blk-mq/scsi-mq drivers and irqbalanced to exchange information
> >>  about the relationship between MSI-X vector ranges and NUMA nodes.
> >
> >We could have low-level drivers provide blk-mq the controller's irq
> >associated with a particular h/w context, and the block layer can provide
> >the context's cpumask to irqbalance with the smp affinity hint.
> >
> >The nvme driver already uses the hwctx cpumask to set hints, but this
> >doesn't seem like it should be a driver responsibility. It currently
> >doesn't work correctly anyway with hot-cpu, since blk-mq could rebalance
> >the h/w contexts without syncing with the low-level driver.
> >
> >If we can add this to blk-mq, one additional case to consider is if the
> >same interrupt vector is used with multiple h/w contexts. Blk-mq's cpu
> >assignment needs to be aware of this to prevent sharing a vector across
> >NUMA nodes.
> 
> Exactly. I may have promised to do just that at the last LSF/MM conference,
> just haven't done it yet. The point is to share the mask; ideally I'd like
> to take it all the way, so that the driver just asks for a number of vectors
> through a nice API that takes care of all this. There's a lot of duplicated
> code in drivers for this these days, and it's a mess.

Yes. I think the fundamental problem is that our MSI-X API is so funky.
We have this incredibly flexible scheme where each MSI-X vector could
have its own interrupt handler, but that's not what drivers want. They
want to say "Give me eight MSI-X vectors spread across the CPUs, and use
this interrupt handler for all of them".

That is, instead of the current scheme where each MSI-X vector gets its
own Linux interrupt, we should have one interrupt handler (of the per-cpu
interrupt type), which shows up with N bits set in its CPU mask.
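A rough sketch of what such an API might look like from the driver's
side. To be clear, request_msix_spread() and everything around it below
is invented for illustration; no such interface exists in the kernel
today, and the details (name, arguments, return value) are all
assumptions:

```c
/* Sketch only: request_msix_spread() is a hypothetical interface.
 * The driver supplies one handler and a desired vector count; the core
 * allocates the MSI-X vectors, spreads their affinity masks across the
 * online CPUs without letting any one vector span NUMA nodes, and
 * keeps those masks stable over time. */

static irqreturn_t my_queue_irq(int irq, void *data)
{
	struct my_queue *q = data;	/* per-node/per-cpu queue */

	/* process completions for whichever queue this cpu owns */
	return IRQ_HANDLED;
}

static int my_driver_setup_irqs(struct pci_dev *pdev)
{
	/* "Give me eight MSI-X vectors spread across the CPUs, and
	 * use this interrupt handler for all of them." */
	return request_msix_spread(pdev, 8, my_queue_irq,
				   "my_driver-queues");
}
```

This is the shape Jens describes: the duplicated per-driver affinity
code collapses into one call, and the driver never sees individual
vectors at all.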