Date: Fri, 20 Jun 2008 18:33:06 -0400 (EDT)
From: Mikulas Patocka
To: David Miller
Cc: sparclinux@vger.kernel.org, linux-kernel@vger.kernel.org, agk@redhat.com
Subject: Re: stack overflow on Sparc64
In-Reply-To: <20080620.144424.168785883.davem@davemloft.net>
References: <20080620.142034.30884440.davem@davemloft.net> <20080620.144424.168785883.davem@davemloft.net>

On Fri, 20 Jun 2008, David Miller wrote:

> From: Mikulas Patocka
> Date: Fri, 20 Jun 2008 17:25:26 -0400 (EDT)
>
>> On Fri, 20 Jun 2008, David Miller wrote:
>>
>>> From: Mikulas Patocka
>>> Date: Fri, 20 Jun 2008 17:14:41 -0400 (EDT)
>>>
>>> It means i386 and every other platform potentially has the same exact
>>> problem.
>>>
>>> What point wrt. sparc64 are you trying to make here? :-)
>>
>> The difference is that i386 takes a minimum of 4 bytes per stack frame
>> and sparc64 takes 192 bytes per stack frame, so this problem will kill
>> sparc64 sooner.
>>
>> But yes, it is a general problem and should be solved in
>> arch-independent code.
>
> I agree on both counts. Although I'm curious what the average stack
> frame sizes look like on x86_64 and i386, and also how this area
> appears on powerpc.

If I look at an old oops that I have in my log on i386: it's 1104 stack
bytes across ~38 functions, i.e. roughly 29 bytes per frame on average.
> One mitigating factor on sparc64 is that typically when there are lots
> of devices with interrupts there are also lots of cpus, and we evenly
> distribute the IRQ targeting amongst the available cpus on sparc64.
>
> This is probably why, in practice, these problems tend not to surface
> often.
>
> In any event, with the work you've accomplished and my implementation
> of IRQ stacks for sparc64 we should be able to get things in much
> better shape.

I created this to help with nested irqs: it only re-enables interrupts
inside a hardirq handler when we are not already nested inside another
hardirq, so an interrupt storm cannot stack handlers arbitrarily deep:

--- linux-2.6.26-rc5-devel.orig/include/linux/interrupt.h	2008-06-20 23:34:04.000000000 +0200
+++ linux-2.6.26-rc5-devel/include/linux/interrupt.h	2008-06-20 23:36:03.000000000 +0200
@@ -95,7 +95,7 @@
 #ifdef CONFIG_LOCKDEP
 # define local_irq_enable_in_hardirq()	do { } while (0)
 #else
-# define local_irq_enable_in_hardirq()	local_irq_enable()
+# define local_irq_enable_in_hardirq()	do { if (hardirq_count() <= (1 << HARDIRQ_SHIFT)) local_irq_enable(); } while (0)
 #endif
 
 extern void disable_irq_nosync(unsigned int irq);

Mikulas
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/