Date: Sun, 15 Sep 2002 07:51:39 -0400 (EDT)
From: Ingo Molnar
To: David Howells
cc: arjanv@redhat.com
Subject: Re: [PATCH] per-interrupt stacks - try 2
In-Reply-To: <15885.1031830472@warthog.cambridge.redhat.com>

On Thu, 12 Sep 2002, David Howells wrote:

> > per-CPU per-IRQ i mean, of course. It's a basic performance issue, on
> > SMP we do not want dirty IRQ stacks to bounce between CPUs ...
>
> Do you have benchmarks or something to show that this is actually a
> _significant_ problem?

you need benchmarks to tell that pure per-IRQ stacks are bad for SMP
performance? per-IRQ+per-CPU and pure per-CPU IRQ stacks should perform
roughly equally well on SMP - with per-CPU IRQ stacks having lower
runtime setup cost.

> After all, unless you bind the interrupts to particular CPUs, loads of
> data - including the irq_desc[] table - are going to be bouncing too.

there's a difference between bouncing 1-2 cachelines and bouncing a
*full, dirtied stack*. The irq_desc[] bouncing is pretty much
unavoidable (IRQs do need some global state) - the stack bouncing is
just plain stupid and perfectly avoidable.

	Ingo
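
[A minimal sketch of the per-CPU IRQ stack layout argued for above -
hypothetical names and constants (irq_stacks, IRQ_STACK_SIZE, CACHELINE),
not the actual patch under discussion:]

/*
 * Illustrative only: one IRQ stack per CPU, cacheline-aligned so that
 * one CPU's dirtied stack never shares cachelines with another CPU's.
 * All names and sizes here are assumptions for the sketch.
 */
#define NR_CPUS        32
#define IRQ_STACK_SIZE 8192          /* e.g. two 4K pages per CPU */
#define CACHELINE      128

union irq_stack {
	unsigned char stack[IRQ_STACK_SIZE];
} __attribute__((aligned(CACHELINE)));

/*
 * Allocated once, always reused by the same CPU, so its dirty
 * cachelines stay local instead of bouncing across CPUs on SMP.
 */
static union irq_stack irq_stacks[NR_CPUS];

static inline void *irq_stack_top(int cpu)
{
	/* stacks grow down on x86: hand the handler the high end */
	return irq_stacks[cpu].stack + IRQ_STACK_SIZE;
}

[Because each CPU only ever dirties its own aligned stack, those lines
stay in that CPU's cache; a pure per-IRQ stack, by contrast, gets dirtied
by whichever CPU last serviced the interrupt and must migrate wholesale.]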