Date: Tue, 14 Sep 2004 13:02:20 -0700
From: William Lee Irwin III
To: Rusty Russell
Cc: Andrea Arcangeli, Andrew Morton, Ray Bryant, hawkes@sgi.com,
    linux-kernel@vger.kernel.org, Jesse Barnes
Subject: Re: [profile] amortize atomic hit count increments
Message-ID: <20040914200220.GH9106@holomorphy.com>
References: <20040913015003.5406abae.akpm@osdl.org>
 <20040914155103.GR9106@holomorphy.com>
 <20040914160531.GP4180@dualathlon.random>
 <200409140916.48786.jbarnes@engr.sgi.com>
 <20040914190030.GZ9106@holomorphy.com>
In-Reply-To: <20040914190030.GZ9106@holomorphy.com>
Organization: The Domain of Holomorphy
User-Agent: Mutt/1.5.6+20040722i

On Tue, Sep 14, 2004 at 09:16:48AM -0700, Jesse Barnes wrote:
>> It looks like the 512p we have here is pretty heavily reserved this
>> week, so I'm not sure if I'll be able to test this (someone else
>> might, John?). I think the balance we're looking for is between
>> simplicity and non-brokenness. Builtin profiling is *supposed* to be
>> simple and dumb, and were it not for the readprofile times, I'd say
>> per-cpu would be the way to go just because it retains the simplicity
>> of the current approach while allowing it to work on large machines
>> (as well as limiting the performance impact of builtin profiling in
>> general).
>> wli's approach seems like a reasonable tradeoff though,
>> assuming what you suggest doesn't work.

On Tue, Sep 14, 2004 at 12:00:30PM -0700, William Lee Irwin III wrote:
> Goddamn fscking short-format VHPT crap. Rusty, how the hell do I
> hotplug-ize this?

Okay, here's an attempt to hotplug-ize it. I have no clue whether this
actually works, compiles, or follows whatever rules there are about
dynamically allocated data referenced by per_cpu areas.

-- wli

Index: mm5-2.6.9-rc1/kernel/profile.c
===================================================================
--- mm5-2.6.9-rc1.orig/kernel/profile.c	2004-09-14 10:20:43.000000000 -0700
+++ mm5-2.6.9-rc1/kernel/profile.c	2004-09-14 12:52:16.064352624 -0700
@@ -20,6 +20,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -297,6 +298,44 @@
 	local_irq_restore(flags);
 	put_cpu();
 }
+
+#ifdef CONFIG_HOTPLUG_CPU
+static int __devinit profile_cpu_callback(struct notifier_block *info,
+					unsigned long action, void *__cpu)
+{
+	int cpu = (unsigned long)__cpu;
+
+	switch (action) {
+	case CPU_UP_PREPARE:
+		per_cpu(cpu_profile_flip, cpu) = 0;
+		if (!per_cpu(cpu_profile_hits, cpu)[1])
+			per_cpu(cpu_profile_hits, cpu)[1]
+				= (void *)get_zeroed_page(GFP_KERNEL);
+		if (!per_cpu(cpu_profile_hits, cpu)[1])
+			return NOTIFY_BAD;
+		if (!per_cpu(cpu_profile_hits, cpu)[0])
+			per_cpu(cpu_profile_hits, cpu)[0]
+				= (void *)get_zeroed_page(GFP_KERNEL);
+		if (per_cpu(cpu_profile_hits, cpu)[0])
+			break;
+		free_page((unsigned long)per_cpu(cpu_profile_hits, cpu)[1]);
+		per_cpu(cpu_profile_hits, cpu)[1] = NULL;
+		return NOTIFY_BAD;
+	case CPU_ONLINE:
+		cpu_set(cpu, prof_cpu_mask);
+		break;
+	case CPU_UP_CANCELED:
+	case CPU_DEAD:
+		cpu_clear(cpu, prof_cpu_mask);
+		free_page((unsigned long)per_cpu(cpu_profile_hits, cpu)[0]);
+		per_cpu(cpu_profile_hits, cpu)[0] = NULL;
+		free_page((unsigned long)per_cpu(cpu_profile_hits, cpu)[1]);
+		per_cpu(cpu_profile_hits, cpu)[1] = NULL;
+		break;
+	}
+	return NOTIFY_OK;
+}
+#endif /* CONFIG_HOTPLUG_CPU */
 #else /* !CONFIG_SMP */
 #define profile_flip_buffers()	do { } while (0)