Date: Wed, 15 Jul 2015 19:05:41 +0300
From: "Michael S. Tsirkin"
To: Christoph Hellwig
Cc: ksummit-discuss@lists.linuxfoundation.org, linux-rdma@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-nvme@lists.infradead.org
Subject: Re: [Ksummit-discuss] [TECH TOPIC] IRQ affinity
Message-ID: <20150715160540.GA757@redhat.com>
In-Reply-To: <20150715120708.GA24534@infradead.org>
References: <20150715120708.GA24534@infradead.org>

On Wed, Jul 15, 2015 at 05:07:08AM -0700, Christoph Hellwig wrote:
> Many years ago we decided to move the setting of IRQ-to-core affinities to
> userspace with the irqbalance daemon.
>
> These days we have systems with lots of MSI-X vectors, and we have
> hardware and subsystem support for per-CPU I/O queues in the block
> layer, the RDMA subsystem, and probably the network stack (I'm not too
> familiar with the recent developments there). It would really help
> out-of-the-box performance and experience if we could allow such
> subsystems to bind interrupt vectors to the node that the queue is
> configured on.

I think you are right; it's certainly true for networking. Whenever someone
benchmarks networking, the first thing done is to disable irqbalance and pin
IRQs manually, away from wherever the benchmark is running but on the same
NUMA node. Without that pinning, interrupts don't let the benchmark make
progress.
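For illustration, the manual pinning described above usually comes down to writing a CPU bitmask into `/proc/irq/<n>/smp_affinity` (a real procfs interface). A minimal sketch follows; the IRQ number 42 and CPU 3 are made-up examples — in practice you would pick them from /proc/interrupts and from the device's NUMA node (e.g. /sys/class/net/<dev>/device/numa_node):

```shell
#!/bin/sh
# Sketch: pin one IRQ to a chosen CPU, the way benchmark runs do by hand.
# IRQ 42 and CPU 3 are hypothetical; adjust for your system.

cpu_to_mask() {
    # Convert a CPU number to the hex bitmask format that
    # /proc/irq/<n>/smp_affinity expects (bit N set for CPU N).
    printf '%x\n' $((1 << $1))
}

irq=42
cpu=3

# Stop irqbalance first so it does not undo the manual pinning:
# systemctl stop irqbalance

mask=$(cpu_to_mask "$cpu")
echo "would write mask $mask to /proc/irq/$irq/smp_affinity"
# Needs root; uncomment to actually apply:
# echo "$mask" > /proc/irq/$irq/smp_affinity
```

The point of the exercise being exactly what the thread discusses: this per-IRQ knob exists, but nothing ties it automatically to where the queue's consumers run.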
Alternatively, people give up on interrupts entirely and start polling the
hardware aggressively. Nice for a benchmark, not so nice for the environment.

> I'd like to discuss whether the rationale for moving the IRQ affinity
> setting fully to userspace is still correct in today's world, and any
> pitfalls we'll have to learn from in irqbalance and the old in-kernel
> affinity code.

IMHO there could be a benefit from better integration with the scheduler.
Maybe an interrupt handler can be viewed as a kind of thread, so the
scheduler can make decisions about where to run it next?
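The handler-as-a-thread idea already has a partial mechanism in the kernel today: booting with the `threadirqs` parameter forces most handlers into kthreads named `irq/<n>-<device>`, which the scheduler (and the admin) can then place like any other thread. A sketch, assuming such a boot; the IRQ number 42 and the name `irq/42-eth0` are hypothetical:

```shell
#!/bin/sh
# Sketch: locate the kthread servicing an IRQ and treat it as a normal
# schedulable thread. Requires booting with "threadirqs"; IRQ 42 is made up.

find_irq_thread() {
    # Read "pid comm" lines on stdin; print the PID of the kthread
    # serving IRQ $1 (threaded handlers are named "irq/<n>-<device>").
    awk -v pat="irq/$1-" 'index($2, pat) == 1 { print $1; exit }'
}

pid=$(ps -eo pid=,comm= | find_irq_thread 42)
if [ -n "$pid" ]; then
    echo "IRQ 42 is serviced by thread $pid"
    # Pin it to CPU 3 like any normal thread (needs root):
    # taskset -pc 3 "$pid"
else
    echo "no threaded handler for IRQ 42 (not booted with threadirqs?)"
fi
```

This only gives manual placement, of course; the open question in the thread is whether the scheduler itself should make these decisions.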