Date: Thu, 16 Jul 2015 09:13:37 +0300
From: "Michael S. Tsirkin"
To: Matthew Wilcox
Cc: Jens Axboe, Christoph Hellwig, ksummit-discuss@lists.linuxfoundation.org,
    linux-rdma@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-nvme@lists.infradead.org, Keith Busch, Bart Van Assche
Subject: Re: [Ksummit-discuss] [TECH TOPIC] IRQ affinity
Message-ID: <20150716075454-mutt-send-email-mst@redhat.com>
References: <20150715120708.GA24534@infradead.org>
 <55A67F11.1030709@sandisk.com>
 <55A697A3.3090305@kernel.dk>
 <20150715184800.GL13681@linux.intel.com>
In-Reply-To: <20150715184800.GL13681@linux.intel.com>

On Wed, Jul 15, 2015 at 02:48:00PM -0400, Matthew Wilcox wrote:
> On Wed, Jul 15, 2015 at 11:25:55AM -0600, Jens Axboe wrote:
> > On 07/15/2015 11:19 AM, Keith Busch wrote:
> > >On Wed, 15 Jul 2015, Bart Van Assche wrote:
> > >>* With blk-mq and scsi-mq optimal performance can only be achieved if
> > >>  the relationship between MSI-X vector and NUMA node does not change
> > >>  over time. This is necessary to allow a blk-mq/scsi-mq driver to
> > >>  ensure that interrupts are processed on the same NUMA node as the
> > >>  node on which the data structures for a communication channel have
> > >>  been allocated. However, today there is no API that allows
> > >>  blk-mq/scsi-mq drivers and irqbalanced to exchange information
> > >>  about the relationship between MSI-X vector ranges and NUMA nodes.
> > >
> > >We could have low-level drivers provide blk-mq the controller's irq
> > >associated with a particular h/w context, and the block layer can
> > >provide the context's cpumask to irqbalance with the smp affinity hint.
> > >
> > >The nvme driver already uses the hwctx cpumask to set hints, but this
> > >doesn't seem like it should be a driver responsibility. It currently
> > >doesn't work correctly anyway with hot-cpu since blk-mq could rebalance
> > >the h/w contexts without syncing with the low-level driver.
> > >
> > >If we can add this to blk-mq, one additional case to consider is if the
> > >same interrupt vector is used with multiple h/w contexts. Blk-mq's cpu
> > >assignment needs to be aware of this to prevent sharing a vector across
> > >NUMA nodes.
> >
> > Exactly. I may have promised to do just that at the last LSF/MM
> > conference, just haven't done it yet. The point is to share the mask;
> > I'd ideally like to take it all the way, where the driver just asks for
> > a number of vecs through a nice API that takes care of all this. There
> > is a lot of duplicated code in drivers for this these days, and it's a
> > mess.
>
> Yes. I think the fundamental problem is that our MSI-X API is so funky.
> We have this incredibly flexible scheme where each MSI-X vector could
> have its own interrupt handler, but that's not what drivers want.
> They want to say "Give me eight MSI-X vectors spread across the CPUs,
> and use this interrupt handler for all of them".
> That is, instead of
> the current scheme where each MSI-X vector gets its own Linux interrupt,
> we should have one interrupt handler (of the per-cpu interrupt type),
> which shows up with N bits set in its CPU mask.

It would definitely be nice to have a way to express that. But it's also
pretty common for drivers to have e.g. RX and TX use separate vectors,
and these need separate handlers.
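
To make the "duplicated code in drivers" point concrete, the per-vector
boilerplate being discussed looks roughly like the sketch below. This is
purely illustrative and not taken from any driver in this thread: my_dev,
my_queue, my_irq_handler and my_setup_msix are made-up names, and the CPU
spreading is a naive round-robin rather than anything NUMA-aware.

/*
 * Illustrative sketch only: allocate one MSI-X vector per queue, point
 * every vector at the same handler, and hint each vector's affinity at
 * one CPU so irqbalance can honour it.
 */
#include <linux/pci.h>
#include <linux/interrupt.h>
#include <linux/slab.h>
#include <linux/cpumask.h>

struct my_queue {
	int index;			/* per-queue completion state */
};

struct my_dev {
	struct msix_entry *entries;
	struct my_queue *queues;	/* allocated by the caller */
};

static irqreturn_t my_irq_handler(int irq, void *data)
{
	/* per-queue completion processing would go here */
	return IRQ_HANDLED;
}

static int my_setup_msix(struct pci_dev *pdev, struct my_dev *d, int nr_queues)
{
	int i, ret;

	d->entries = kcalloc(nr_queues, sizeof(*d->entries), GFP_KERNEL);
	if (!d->entries)
		return -ENOMEM;
	for (i = 0; i < nr_queues; i++)
		d->entries[i].entry = i;

	/* may be granted fewer vectors than requested */
	ret = pci_enable_msix_range(pdev, d->entries, 1, nr_queues);
	if (ret < 0)
		goto free_entries;
	nr_queues = ret;

	for (i = 0; i < nr_queues; i++) {
		ret = request_irq(d->entries[i].vector, my_irq_handler, 0,
				  "my_dev", &d->queues[i]);
		if (ret)
			goto undo;
		/* naive spreading: steer each vector at one CPU */
		irq_set_affinity_hint(d->entries[i].vector,
				      cpumask_of(i % num_online_cpus()));
	}
	return 0;

undo:
	while (--i >= 0) {
		irq_set_affinity_hint(d->entries[i].vector, NULL);
		free_irq(d->entries[i].vector, &d->queues[i]);
	}
	pci_disable_msix(pdev);
free_entries:
	kfree(d->entries);
	return ret;
}

A hypothetical interface along the lines of "give me N vectors for this
handler, spread sensibly over CPUs" would replace most of the above, which
is the sort of thing being asked for. A network driver, though, would
typically do the request_irq() step twice per queue pair, with distinct RX
and TX handlers on distinct vectors, which is why a single shared handler
cannot cover every case.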