Date: Fri, 29 Aug 2008 09:52:03 -0700
From: Arjan van de Ven
To: Andi Kleen
Cc: Brice Goglin, LKML, netdev@vger.kernel.org
Subject: Re: [RFC] export irq_set/get_affinity() for multiqueue network drivers
Message-ID: <20080829095203.5732f331@infradead.org>
In-Reply-To: <8763pjpy2b.fsf@basil.nowhere.org>
References: <48B708E1.4070001@inria.fr> <20080829055023.07966b4a@infradead.org> <8763pjpy2b.fsf@basil.nowhere.org>

On Fri, 29 Aug 2008 18:48:12 +0200
Andi Kleen wrote:

> Arjan van de Ven writes:
>
> > On Thu, 28 Aug 2008 22:21:53 +0200
> > Brice Goglin wrote:
> >
> > > Hello,
> > >
> > > Is there any way to set up IRQ masks from within a driver?
> > > myri10ge currently relies on an external script (writing to
> > > /proc/irq/*/smp_affinity) to bind each queue/MSI-X vector to a
> > > different processor. By default, Linux will either:
> > > * round-robin the interrupts (killing the benefit of DCA, for
> > >   instance)
> > > * put all IRQs on the same CPU (killing much of th
> >
> > * do the right thing with the userspace irq balancer
>
> It probably also needs to be hooked up to sched_mc_power_savings.
> When that switch is on, the interrupts shouldn't be spread out over
> that many sockets.

That's what irqbalance already does today.

> Also I suspect handling SMT explicitly is a good idea. E.g. I would
> always set the affinity to all thread siblings in a core, not just a
> single one, because context switching between them is very cheap.

That is also what irqbalance already does today, at least for the
interrupts it considers somewhat slower. For networking it still
sucks, though: the packet reordering logic is per logical CPU, so you
still don't want to receive packets from the same "stream" on
multiple logical CPUs.

-- 
If you want to reach me at my work email, use arjan@linux.intel.com
For development, discussion and tips for power savings,
visit http://www.lesswatts.org
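
For reference, the external-script approach Brice describes boils
down to writing a hex CPU bitmask into /proc/irq/<n>/smp_affinity,
one vector per CPU. A minimal userspace sketch follows; the IRQ
numbers and queue count are made up for illustration, and a real
script would parse them out of /proc/interrupts instead. Run as root.

/* Bind each of a NIC's MSI-X vectors to its own CPU by writing a
 * hexadecimal CPU bitmask to /proc/irq/<n>/smp_affinity.
 * Hypothetical numbers: the queue vectors got IRQs 50..53.
 */
#include <stdio.h>

static int set_irq_affinity(int irq, int cpu)
{
	char path[64];
	FILE *f;

	snprintf(path, sizeof(path), "/proc/irq/%d/smp_affinity", irq);
	f = fopen(path, "w");
	if (!f) {
		perror(path);
		return -1;
	}
	/* smp_affinity takes a hex CPU bitmask; bit N selects CPU N. */
	fprintf(f, "%x\n", 1 << cpu);
	fclose(f);
	return 0;
}

int main(void)
{
	int base_irq = 50, nqueues = 4, q;	/* hypothetical */

	for (q = 0; q < nqueues; q++)
		set_irq_affinity(base_irq + q, q);
	return 0;
}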
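
Andi's sibling-mask suggestion can be sketched the same way: instead
of a single-CPU mask, take the hex cpumask that sysfs exports in
cpuN/topology/thread_siblings, which uses the same bitmap format
smp_affinity accepts, and pass it through. The IRQ and CPU numbers
below are again just examples.

/* Set an IRQ's affinity to all SMT thread siblings of a given CPU's
 * core, per Andi's suggestion, by copying the sysfs thread_siblings
 * mask into /proc/irq/<n>/smp_affinity.
 */
#include <stdio.h>
#include <string.h>

static int set_irq_affinity_to_core(int irq, int cpu)
{
	char path[96], mask[256];
	FILE *f;

	snprintf(path, sizeof(path),
		 "/sys/devices/system/cpu/cpu%d/topology/thread_siblings",
		 cpu);
	f = fopen(path, "r");
	if (!f)
		return -1;
	if (!fgets(mask, sizeof(mask), f)) {
		fclose(f);
		return -1;
	}
	fclose(f);
	mask[strcspn(mask, "\n")] = '\0';

	snprintf(path, sizeof(path), "/proc/irq/%d/smp_affinity", irq);
	f = fopen(path, "w");
	if (!f)
		return -1;
	/* thread_siblings is already a hex cpumask in the format
	 * smp_affinity expects, so write it through unchanged. */
	fprintf(f, "%s\n", mask);
	fclose(f);
	return 0;
}

int main(void)
{
	return set_irq_affinity_to_core(50, 2);	/* hypothetical */
}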