Date: Sat, 04 Jul 2009 05:19:28 -0400
From: Jeff Garzik
To: Andi Kleen
CC: Arjan van de Ven, Matthew Wilcox, Jens Axboe, linux-kernel@vger.kernel.org,
    "Styner, Douglas W", Chinang Ma, "Prickett, Terry O", Matthew Wilcox,
    Eric.Moore@lsi.com, DL-MPTFusionLinux@lsi.com, NetDev
Subject: Re: >10% performance degradation since 2.6.18

Andi Kleen wrote:
>> for networking, especially for incoming data such as new connections,
>> that isn't the case.. that's more or less randomly (well hash based)
>> distributed.
>
> Ok. Still binding them all to a single CPU is quite dumb. It
> makes MSI-X quite useless and probably even harmful.
>
> We don't default to socket power saving for normal scheduling either,
> but only when you specify a special knob. I don't see why interrupts
> should be different.

In the pre-MSI-X days, you'd have cachelines bouncing all over the
place if you distributed networking interrupts across CPUs, particularly
given that NAPI would run some things on a single CPU anyway.

Today, machines are faster, we have multiple interrupts per device, and
we have multiple RX/TX queues.

I would be interested to see hard numbers (as opposed to guesses) for
the various new ways to distribute interrupts across CPUs. What's the
best setup for power usage? What's the best setup for performance? Are
they the same?

Is it optimal to have the interrupt for socket $X occur on the same CPU
where the app is running? If so, how do we best handle it when the
scheduler moves the app to another CPU? Should we reprogram the NIC's
hardware flow steering mechanism at that point?

Interesting questions, and I hope we'd see some hard number comparisons
before solutions start flowing into the kernel.

	Jeff
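[Editor's note: the experiment Jeff asks for amounts to steering a NIC's
per-queue MSI-X vectors onto chosen CPUs and then measuring throughput and
power. Below is a minimal C sketch of that steering step, assuming the
standard /proc/irq/<n>/smp_affinity interface; the IRQ number and CPU index
used in main() are hypothetical example values, not anything from this
thread. On a real system they would be read from /proc/interrupts for the
NIC's RX/TX queue vectors.]

    /*
     * Sketch only: pin an IRQ to a single CPU by writing a hexadecimal
     * CPU bitmask to /proc/irq/<irq>/smp_affinity.  Assumes cpu < 32 so
     * the mask fits in one word; wider masks need the comma-separated
     * multi-word format.
     */
    #include <stdio.h>
    #include <stdlib.h>

    static int pin_irq_to_cpu(unsigned int irq, unsigned int cpu)
    {
            char path[64];
            FILE *f;

            snprintf(path, sizeof(path), "/proc/irq/%u/smp_affinity", irq);
            f = fopen(path, "w");
            if (!f) {
                    perror(path);
                    return -1;
            }
            /* smp_affinity takes a hex bitmask of allowed CPUs. */
            fprintf(f, "%x\n", 1u << cpu);
            fclose(f);
            return 0;
    }

    int main(void)
    {
            /* Hypothetical values: IRQ 98 as one RX queue vector, CPU 2. */
            return pin_irq_to_cpu(98, 2) ? EXIT_FAILURE : EXIT_SUCCESS;
    }

Repeating this for each queue vector with different CPU assignments, then
comparing the benchmark and power numbers, is one way to get the hard data
the mail asks for before any policy goes into the kernel.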