Date: Wed, 15 Jul 2015 05:07:08 -0700
From: Christoph Hellwig
To: ksummit-discuss@lists.linuxfoundation.org
Cc: linux-rdma@vger.kernel.org, linux-nvme@lists.infradead.org, linux-kernel@vger.kernel.org
Subject: [TECH TOPIC] IRQ affinity
Message-ID: <20150715120708.GA24534@infradead.org>

Many years ago we decided to move the setting of IRQ-to-core affinities
to userspace with the irqbalance daemon.

These days we have systems with lots of MSI-X vectors, and we have
hardware and subsystem support for per-CPU I/O queues in the block
layer, the RDMA subsystem and probably the network stack (I'm not too
familiar with the recent developments there).

It would really help out-of-the-box performance and the overall
experience if we could allow such subsystems to bind interrupt vectors
to the node that the queue is configured on.

I'd like to discuss whether the rationale for moving the IRQ affinity
setting fully to userspace is still correct in today's world, and any
pitfalls we'll have to learn from in irqbalance and the old in-kernel
affinity code.
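
To make the idea concrete, here is a rough sketch of what driver-side
binding could look like.  The queue structure and helper name below are
made up purely for illustration; cpumask_of_node() and
irq_set_affinity_hint() are existing kernel interfaces, but how (or
whether) a core facility should do this is exactly what I'd like to
discuss:

#include <linux/interrupt.h>
#include <linux/topology.h>
#include <linux/cpumask.h>

/* illustrative per-queue state for a driver with per-node I/O queues */
struct example_queue {
	unsigned int	irq;	/* MSI-X vector assigned to this queue */
	int		node;	/* NUMA node the queue's memory lives on */
};

static int example_bind_queue_irq(struct example_queue *q)
{
	const struct cpumask *mask = cpumask_of_node(q->node);

	/*
	 * Today this only hints the preferred mask; irqbalance (or any
	 * other userspace agent) can still move the vector elsewhere.
	 */
	return irq_set_affinity_hint(q->irq, mask);
}

The userspace equivalent of the same policy is what irqbalance does by
writing a CPU list into /proc/irq/<N>/smp_affinity_list; the question is
whether subsystems with per-node queues should be able to do this
directly from the kernel instead of relying on the daemon to guess.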