Hi,
Very nice patch.
Andrew, would you consider adding this one?
--Shai
-----Original Message-----
From: [email protected]
[mailto:[email protected]] On Behalf Of Erich Focht
Sent: Friday, March 26, 2004 00:31
To: Paul Jackson; Xavier Bru
Cc: [email protected]; [email protected]; [email protected];
Erik Jacobson
Subject: Re: [Lse-tech] Re: NUMA scheduler issue
Hi Paul,
On Wednesday 24 March 2004 20:03, Paul Jackson wrote:
> Where are you getting the printouts that look like:
>
> initial CPU = 2
> cpu 18491 16
> cpu0 17125 2
> cpu1 441 0
> cpu2 700 14
> cpu3 225 0
> ...
> current_cpu 0
>
> We have something in our SGI 2.4 kernels (/proc/<pid>/cpu) that
> displays this sort of per-cpu usage, but I don't see anything
> in the 2.6 kernels that seems to do this.
It's probably the attached patch. Sorry, I'm travelling and couldn't
rediff against a current version...
<[email protected]> wrote:
>
> Hi,
>
> Very nice patch.
> Andrew, would you consider adding this one?
Not really.
--- 2.6.0-test1-ia64-0/include/linux/sched.h 2003-07-14 05:30:40.000000000 +0200
+++ 2.6.0-test1-ia64-na/include/linux/sched.h 2003-07-18 13:38:02.000000000 +0200
@@ -390,6 +390,9 @@
struct list_head posix_timers; /* POSIX.1b Interval Timers */
unsigned long utime, stime, cutime, cstime;
u64 start_time;
+#ifdef CONFIG_SMP
+ long per_cpu_utime[NR_CPUS], per_cpu_stime[NR_CPUS];
+#endif
On 512p 64-bit that's 8 kilobytes added to the task_struct.