2004-06-24 09:16:03

by Yusuf Goolamabbas

Subject: finish_task_switch high in profiles in 2.6.7

Hi, I have a fairly busy mailserver which also has a simple iptables
ruleset (blocking some IPs), running 2.6.7 with the deadline I/O
scheduler. vmstat was reporting that system time was around 80%. I did
the following:

readprofile -r ; sleep 240 ; readprofile -n -m /boot/System.map-`uname -r` | sort -rn -k 1,1 | head -22

48036 total 0.2274
9684 finish_task_switch 80.0331
6992 default_idle 158.9091
6335 __wake_up 87.9861
5671 remove_wait_queue 76.6351
4150 add_wait_queue 59.2857
2459 sysenter_past_esp 21.7611
1304 find_get_page 21.0323
1160 __mod_timer 4.1281
938 do_gettimeofday 5.0430
756 del_timer_sync 2.4787
753 del_timer 7.1714
599 handle_IRQ_event 5.9900
570 do_page_fault 0.4612
485 __do_softirq 2.8529
476 do_sigaction 0.9136
469 do_getitimer 1.7055
464 do_setitimer 0.9768
392 get_offset_tsc 17.0435
340 current_kernel_time 4.9275
293 in_group_p 2.4417
272 eligible_child 1.4093

On a 2.6.5 box with a similar workload, the profile is as follows

131202 total 0.3704
42080 fget 667.9365
12868 schedule 8.1084
11231 default_idle 255.2500
9157 fput 436.0476
6642 remove_wait_queue 89.7568
6076 __wake_up 93.4769
5056 page_remove_rmap 13.4111
4144 add_wait_queue 59.2000
2255 fget_light 17.4806
2176 sysenter_past_esp 19.2566
1271 kfree 11.6606
1148 kmem_cache_alloc 14.9091
934 __kmalloc 6.6714
890 __mod_timer 3.1673
768 del_timer_sync 2.9538
717 buffered_rmqueue 1.5621
715 kmem_cache_free 8.2184
613 free_hot_cold_page 2.4618
612 del_timer 5.8286
588 __find_get_block 3.0466
588 do_page_fault 0.4757

I am trying to determine where the system time is going and don't have
much zen to begin with. Any assistance would be appreciated.


2004-06-24 09:27:44

by Nick Piggin

Subject: Re: finish_task_switch high in profiles in 2.6.7

Yusuf Goolamabbas wrote:
> Hi, I have a fairly busy mailserver which also has a simple iptables
> ruleset (blocking some IPs), running 2.6.7 with the deadline I/O
> scheduler. vmstat was reporting that system time was around 80%. I did
> the following:
>
> readprofile -r ; sleep 240 ; readprofile -n -m /boot/System.map-`uname -r` | sort -rn -k 1,1 | head -22
>
> 48036 total 0.2274
> 9684 finish_task_switch 80.0331
> 6992 default_idle 158.9091
> 6335 __wake_up 87.9861
> 5671 remove_wait_queue 76.6351
> 4150 add_wait_queue 59.2857
> 2459 sysenter_past_esp 21.7611
> 1304 find_get_page 21.0323
> 1160 __mod_timer 4.1281
> 938 do_gettimeofday 5.0430
> 756 del_timer_sync 2.4787
> 753 del_timer 7.1714
> 599 handle_IRQ_event 5.9900
> 570 do_page_fault 0.4612
> 485 __do_softirq 2.8529
> 476 do_sigaction 0.9136
> 469 do_getitimer 1.7055
> 464 do_setitimer 0.9768
> 392 get_offset_tsc 17.0435
> 340 current_kernel_time 4.9275
> 293 in_group_p 2.4417
> 272 eligible_child 1.4093
>
> On a 2.6.5 box with a similar workload, the profile is as follows
>
> 131202 total 0.3704
> 42080 fget 667.9365
> 12868 schedule 8.1084
> 11231 default_idle 255.2500
> 9157 fput 436.0476
> 6642 remove_wait_queue 89.7568
> 6076 __wake_up 93.4769
> 5056 page_remove_rmap 13.4111
> 4144 add_wait_queue 59.2000
> 2255 fget_light 17.4806
> 2176 sysenter_past_esp 19.2566
> 1271 kfree 11.6606
> 1148 kmem_cache_alloc 14.9091
> 934 __kmalloc 6.6714
> 890 __mod_timer 3.1673
> 768 del_timer_sync 2.9538
> 717 buffered_rmqueue 1.5621
> 715 kmem_cache_free 8.2184
> 613 free_hot_cold_page 2.4618
> 612 del_timer 5.8286
> 588 __find_get_block 3.0466
> 588 do_page_fault 0.4757
>
> I am trying to determine where the system time is going and don't have
> much zen to begin with. Any assistance would be appreciated.
>

Is it an SMP system? What sort of workload is it? Does it use threads?
Check vmstat to see how much context switching is going on with each
kernel.

Thanks.

2004-06-24 09:34:46

by Yusuf Goolamabbas

Subject: Re: finish_task_switch high in profiles in 2.6.7

> Is it an SMP system? What sort of workload is it? Does it use threads?
> Check vmstat to see how much context switching is going on with each
> kernel.

Yes, it's an SMP box (dual P3-800). The workload is a busy mailserver
(gets lots of SMTP traffic, validates users against a remote database,
rejects a truckload of connections). CONFIG_4K_STACKS=y on the 2.6.7
box, e100 driver with NAPI turned off. No threads.

The 2.6.7 box shows this wrt context switches

procs memory swap io system cpu
r b swpd free buff cache si so bi bo in cs us sy wa id
3 0 0 73096 63000 78100 0 0 0 524 5571 14772 25 73 0 2
6 0 0 70932 63000 78100 0 0 0 448 5861 11368 34 65 0 1
7 0 0 72916 63008 78092 0 0 0 12 5838 14956 30 70 0 1
7 0 0 70852 63016 78084 0 0 0 1008 5551 13951 30 69 0 1
22 0 0 65300 63016 78084 0 0 0 0 5989 16043 34 66 0 1
19 0 0 66516 63020 78148 0 0 0 1252 6100 14653 31 69 0 0
29 1 0 67620 63024 78212 0 0 0 992 6314 14747 31 69 0 0

The 2.6.5 box shows this

procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
r b swpd free buff cache si so bi bo in cs us sy id wa
166 0 0 137752 38148 188836 0 0 0 2424 5973 21695 30 70 0 0
34 0 0 135928 38160 189028 0 0 0 2160 5923 22532 32 68 0 0
28 1 0 136928 38184 189344 0 0 0 2904 6393 22098 31 69 0 0
32 1 0 136672 38204 189324 0 0 0 3240 6412 21362 32 68 0 0
33 0 0 135456 38216 189380 0 0 0 1708 6044 24735 28 72 0 0
17 0 0 135372 38264 189536 0 0 0 3044 6305 22326 35 64 0 0
229 0 0 135060 38272 189732 0 0 0 2340 6416 23697 33 67 0 0
32 0 0 134100 38288 189852 0 0 0 3068 6342 24016 33 67 0 0
16 0 0 134292 38300 190044 0 0 0 2408 6451 24727 31 69 0 0

2004-06-24 09:46:06

by Nick Piggin

Subject: Re: finish_task_switch high in profiles in 2.6.7

Yusuf Goolamabbas wrote:
>>Is it an SMP system? What sort of workload is it? Does it use threads?
>>Check vmstat to see how much context switching is going on with each
>>kernel.
>
>
> Yes, it's an SMP box (dual P3-800). The workload is a busy mailserver
> (gets lots of SMTP traffic, validates users against a remote database,
> rejects a truckload of connections). CONFIG_4K_STACKS=y on the 2.6.7
> box, e100 driver with NAPI turned off. No threads.
>

OK

> The 2.6.7 box shows this wrt context switches
>
> procs memory swap io system cpu
> r b swpd free buff cache si so bi bo in cs us sy wa id
> 3 0 0 73096 63000 78100 0 0 0 524 5571 14772 25 73 0 2
> 6 0 0 70932 63000 78100 0 0 0 448 5861 11368 34 65 0 1
> 7 0 0 72916 63008 78092 0 0 0 12 5838 14956 30 70 0 1
> 7 0 0 70852 63016 78084 0 0 0 1008 5551 13951 30 69 0 1
> 22 0 0 65300 63016 78084 0 0 0 0 5989 16043 34 66 0 1
> 19 0 0 66516 63020 78148 0 0 0 1252 6100 14653 31 69 0 0
> 29 1 0 67620 63024 78212 0 0 0 992 6314 14747 31 69 0 0
>
> The 2.6.5 box shows this
>
> procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
> r b swpd free buff cache si so bi bo in cs us sy id wa
> 166 0 0 137752 38148 188836 0 0 0 2424 5973 21695 30 70 0 0
> 34 0 0 135928 38160 189028 0 0 0 2160 5923 22532 32 68 0 0
> 28 1 0 136928 38184 189344 0 0 0 2904 6393 22098 31 69 0 0
> 32 1 0 136672 38204 189324 0 0 0 3240 6412 21362 32 68 0 0
> 33 0 0 135456 38216 189380 0 0 0 1708 6044 24735 28 72 0 0
> 17 0 0 135372 38264 189536 0 0 0 3044 6305 22326 35 64 0 0
> 229 0 0 135060 38272 189732 0 0 0 2340 6416 23697 33 67 0 0
> 32 0 0 134100 38288 189852 0 0 0 3068 6342 24016 33 67 0 0
> 16 0 0 134292 38300 190044 0 0 0 2408 6451 24727 31 69 0 0
>

OK. They're both using 100% CPU... is 2.6.5 getting more work done?

2004-06-24 10:05:49

by Yusuf Goolamabbas

Subject: Re: finish_task_switch high in profiles in 2.6.7

> OK. They're both using 100% CPU... is 2.6.5 getting more work done?

So far, it looks like both are doing a similar amount of work. I am
getting more measurements from other boxes.

My concern was the high system usage; I had suspected that it might
have been due to interrupts generated by the NIC, but this was not
evident in the profile I generated.

Regards, Yusuf

2004-06-24 10:10:54

by Nick Piggin

Subject: Re: finish_task_switch high in profiles in 2.6.7

Yusuf Goolamabbas wrote:
>>OK. They're both using 100% CPU... is 2.6.5 getting more work done?
>
>
> So far, it looks like both are doing a similar amount of work. I am
> getting more measurements from other boxes.
>
> My concern was the high system usage; I had suspected that it might
> have been due to interrupts generated by the NIC, but this was not
> evident in the profile I generated.
>

OK, keep me CCed on any further developments. In particular if
context switches are up or performance is down.

Thanks
Nick

2004-06-24 10:25:36

by Andi Kleen

Subject: Re: finish_task_switch high in profiles in 2.6.7

Yusuf Goolamabbas <[email protected]> writes:

> Hi, I have a fairly busy mailserver which also has a simple iptables
> ruleset (blocking some IPs), running 2.6.7 with the deadline I/O
> scheduler. vmstat was reporting that system time was around 80%. I did
> the following:

How many context switches do you get in vmstat?

Most likely you just have far too many of them. readprofile will attribute
most of the cost to finish_task_switch, because that is the function that
re-enables interrupts (and the profiling only works with interrupts on).
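
To put that in concrete terms, here is a minimal sketch of the
accounting (an approximation of the stock timer-tick profiler, not a
quote of it). The timer interrupt cannot fire while interrupts are
disabled, so every tick that elapses during the irq-off portion of a
context switch gets charged to the first code executed after
interrupts come back on, which lands in finish_task_switch():

/*
 * Sketch only; the names (prof_buffer, prof_len, prof_shift, _stext)
 * follow the kernel's profiling code, but this is an approximation,
 * not the actual implementation.
 */
static void profile_tick_sketch(unsigned long eip)
{
	unsigned long slot = (eip - (unsigned long)_stext) >> prof_shift;

	/* one sample per timer tick, charged to the interrupted EIP */
	if (slot < prof_len)
		prof_buffer[slot]++;
}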

Too many context switches are usually caused by user space.

-Andi

2004-06-24 10:44:29

by Yusuf Goolamabbas

Subject: Re: finish_task_switch high in profiles in 2.6.7

> How many context switches do you get in vmstat?

Mentioned in a subsequent message

http://marc.theaimsgroup.com/?l=linux-kernel&m=108806991921498&w=2

> Most likely you just have far too many of them. readprofile will attribute
> most of the cost to finish_task_switch, because that is the function that
> re-enables interrupts (and the profiling only works with interrupts on).
>
> Too many context switches are usually caused by user space.

Hmm, is there a way to determine which syscall is the culprit? I guess
this is where something like DTrace would be invaluable:

http://www.sun.com/bigadmin/content/dtrace/

2004-06-24 11:36:11

by Andi Kleen

Subject: Re: finish_task_switch high in profiles in 2.6.7

> Hmm, is there a way to determine which syscall is the culprit? I guess
> this is where something like DTrace would be invaluable:

Find out which program does it (most likely those with the most
system time) and then strace it.
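
One rough way to do that, assuming procps ps and strace are available
(<pid> below stands for whichever process looks busiest):

$ ps -eo pid,comm,time --sort=-time | head   # candidates by accumulated CPU time
$ strace -c -p <pid>                         # ^C detaches and prints a summary

strace -c tallies time, call counts, and errors per syscall, which
should show what the busy process is hammering.
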
>
> http://www.sun.com/bigadmin/content/dtrace/

Sounds like an inferior clone of dprobes to me. But I doubt it
would help in tracking this down.

-Andi

2004-06-24 14:06:09

by William Lee Irwin III

Subject: Re: finish_task_switch high in profiles in 2.6.7

On Thu, Jun 24, 2004 at 01:36:08PM +0200, Andi Kleen wrote:
> Sounds like a inferior clone of dprobes to me. But I doubt it
> would help tracking this down.

The schedprof thing I wrote to track down the source of context
switches during database creation may prove useful, since it has
properly demonstrated where thundering herds came from at least once
before and is damn near idiotproof -- it requires no more than
readprofile(1) from userspace. I'll dredge that up again and maybe
we'll see if it helps here. It will also properly point to
sys_sched_yield() and the like in the event of badly-behaved userspace.


-- wli

2004-06-24 14:30:18

by William Lee Irwin III

Subject: Re: finish_task_switch high in profiles in 2.6.7

On Thu, Jun 24, 2004 at 05:34:40PM +0800, Yusuf Goolamabbas wrote:
> Yes, it's an SMP box (dual P3-800). The workload is a busy mailserver
> (gets lots of SMTP traffic, validates users against a remote database,
> rejects a truckload of connections). CONFIG_4K_STACKS=y on the 2.6.7
> box, e100 driver with NAPI turned off. No threads.
> The 2.6.7 box shows this wrt context switches
> procs memory swap io system cpu
> r b swpd free buff cache si so bi bo in cs us sy wa id
> 19 0 0 66516 63020 78148 0 0 0 1252 6100 14653 31 69 0 0
> 22 0 0 65300 63016 78084 0 0 0 0 5989 16043 34 66 0 1
> 29 1 0 67620 63024 78212 0 0 0 992 6314 14747 31 69 0 0
> The 2.6.5 box shows this
> procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
> r b swpd free buff cache si so bi bo in cs us sy id wa
> 34 0 0 135928 38160 189028 0 0 0 2160 5923 22532 32 68 0 0
> 166 0 0 137752 38148 188836 0 0 0 2424 5973 21695 30 70 0 0
> 229 0 0 135060 38272 189732 0 0 0 2340 6416 23697 33 67 0 0
> 32 0 0 134100 38288 189852 0 0 0 3068 6342 24016 33 67 0 0

This doesn't look like very intense context switching in either case. 2.6.7
appears to be doing less context switching. I don't see a significant
difference in system time, either.

Could you please send me complete profiles?


-- wli

2004-06-24 15:34:15

by William Lee Irwin III

Subject: Re: finish_task_switch high in profiles in 2.6.7

On Thu, Jun 24, 2004 at 07:30:00AM -0700, William Lee Irwin III wrote:
> This doesn't look like very intense context switching in either case. 2.6.7
> appears to be doing less context switching. I don't see a significant
> difference in system time, either.
> Could you please send me complete profiles?

$ awk '{ A[$2] = $1 } END { tot = A["total"]; for (x in A) { if (x != "total") { printf "%4.4f\t%s\n", 100.0*A[x]/tot, x } } }' < /tmp/yusuf-2.6.5 |sort -k 2,2 > /tmp/scaled-2.6.5
$ awk '{ A[$2] = $1 } END { tot = A["total"]; for (x in A) { if (x != "total") { printf "%4.4f\t%s\n", 100.0*A[x]/tot, x } } }' < /tmp/yusuf-2.6.7 |sort -k 2,2 > /tmp/scaled-2.6.7
$ join -1 2 -2 2 -e 0 -a 1 -a 2 /tmp/scaled-2.6.5 /tmp/scaled-2.6.7 | awk 'BEGIN { printf "%%before %%after %%diff func\n" } { A[$1] = $3 ; B[$1] = $2 ; D[$1] = $3 - $2 } END { n = 0; for (x in D) { ++n }; while (n != 0) { z = -1000.0; for (x in D) { if (D[x] > z) { y = x; z = A[x] } }; printf "%8.4f %8.4f %8.4f %s\n", B[y], A[y], D[y], y; delete D[y]; delete A[y]; delete B[y]; --n }; }'
%before %after %diff func
0.0000 20.1599 20.1599 finish_task_switch
5.0624 11.8057 6.7433 remove_wait_queue
3.1585 8.6394 5.4809 add_wait_queue
4.6310 13.1880 8.5570 __wake_up
8.5601 14.5557 5.9956 default_idle
1.6585 5.1191 3.4606 sysenter_past_esp
0.0000 2.7146 2.7146 find_get_page
0.6783 2.4149 1.7366 __mod_timer
0.0000 1.9527 1.9527 do_gettimeofday
0.0000 1.2470 1.2470 handle_IRQ_event
0.0000 1.1866 1.1866 do_page_fault
0.4665 1.5676 1.1011 del_timer
0.0000 1.0097 1.0097 __do_softirq
0.0000 0.9909 0.9909 do_sigaction
0.5854 1.5738 0.9884 del_timer_sync
0.0000 0.9764 0.9764 do_getitimer
0.0000 0.9659 0.9659 do_setitimer
0.0000 0.8161 0.8161 get_offset_tsc
0.0000 0.7078 0.7078 current_kernel_time
0.0000 0.6100 0.6100 in_group_p
9.8078 0.0000 -9.8078 schedule
3.8536 0.0000 -3.8536 page_remove_rmap
0.4672 0.0000 -0.4672 free_hot_cold_page
0.4482 0.0000 -0.4482 __find_get_block
0.7119 0.0000 -0.7119 __kmalloc
6.9793 0.0000 -6.9793 fput
32.0727 0.0000 -32.0727 fget
0.5465 0.0000 -0.5465 buffered_rmqueue
0.5450 0.0000 -0.5450 kmem_cache_free
0.8750 0.0000 -0.8750 kmem_cache_alloc
0.9687 0.0000 -0.9687 kfree
1.7187 0.0000 -1.7187 fget_light

2004-06-24 21:24:47

by William Lee Irwin III

Subject: Re: finish_task_switch high in profiles in 2.6.7

On Thu, Jun 24, 2004 at 07:05:39AM -0700, William Lee Irwin III wrote:
> The schedprof thing I wrote to track down the source of context
> switches during database creation may prove useful, since it has
> properly demonstrated where thundering herds came from at least once
> before and is damn near idiotproof -- it requires no more than
> readprofile(1) from userspace. I'll dredge that up again and maybe
> we'll see if it helps here. It will also properly point to
> sys_sched_yield() and the like in the event of badly-behaved userspace.

Brute-force port of schedprof to 2.6.7-final. Compile-tested on sparc64
only. No runtime testing.

Given that the context switch rate is actually *reduced* in 2.6.7 vs.
2.6.5, I expect that this will not, in fact, reveal anything useful.
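
Usage is meant to mirror /proc/profile (inferred from the patch below,
and just as untested): boot with schedprof=1, then read the counters
with the stock readprofile(1) pointed at /proc/schedprof; any write to
the file clears the buffer.

$ readprofile -p /proc/schedprof -m /boot/System.map-`uname -r` | sort -rn -k 1,1 | head
$ echo 0 > /proc/schedprof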


-- wli

Index: schedprof-2.6.7/include/linux/sched.h
===================================================================
--- schedprof-2.6.7.orig/include/linux/sched.h 2004-06-15 22:18:57.000000000 -0700
+++ schedprof-2.6.7/include/linux/sched.h 2004-06-24 14:02:48.273041152 -0700
@@ -180,7 +180,11 @@

#define MAX_SCHEDULE_TIMEOUT LONG_MAX
extern signed long FASTCALL(schedule_timeout(signed long timeout));
+extern signed long FASTCALL(__schedule_timeout(signed long timeout));
asmlinkage void schedule(void);
+asmlinkage void __schedule(void);
+void __sched_profile(void *);
+#define sched_profile() __sched_profile(__builtin_return_address(0))

struct namespace;

Index: schedprof-2.6.7/include/linux/profile.h
===================================================================
--- schedprof-2.6.7.orig/include/linux/profile.h 2004-06-15 22:19:22.000000000 -0700
+++ schedprof-2.6.7/include/linux/profile.h 2004-06-24 14:02:48.275040848 -0700
@@ -13,6 +13,7 @@

/* init basic kernel profiler */
void __init profile_init(void);
+void schedprof_init(void);

extern unsigned int * prof_buffer;
extern unsigned long prof_len;
Index: schedprof-2.6.7/kernel/sched.c
===================================================================
--- schedprof-2.6.7.orig/kernel/sched.c 2004-06-15 22:19:51.000000000 -0700
+++ schedprof-2.6.7/kernel/sched.c 2004-06-24 14:02:48.292038264 -0700
@@ -2181,7 +2181,7 @@
/*
* schedule() is the main scheduler function.
*/
-asmlinkage void __sched schedule(void)
+asmlinkage void __sched __schedule(void)
{
long *switch_count;
task_t *prev, *next;
@@ -2317,6 +2317,11 @@
goto need_resched;
}

+asmlinkage void __sched schedule(void)
+{
+ sched_profile();
+ __schedule();
+}
EXPORT_SYMBOL(schedule);

#ifdef CONFIG_PREEMPT
@@ -2338,7 +2343,8 @@

need_resched:
ti->preempt_count = PREEMPT_ACTIVE;
- schedule();
+ sched_profile();
+ __schedule();
ti->preempt_count = 0;

/* we could miss a preemption opportunity between schedule and now */
@@ -2476,7 +2482,8 @@
do {
__set_current_state(TASK_UNINTERRUPTIBLE);
spin_unlock_irq(&x->wait.lock);
- schedule();
+ sched_profile();
+ __schedule();
spin_lock_irq(&x->wait.lock);
} while (!x->done);
__remove_wait_queue(&x->wait, &wait);
@@ -2508,7 +2515,8 @@
current->state = TASK_INTERRUPTIBLE;

SLEEP_ON_HEAD
- schedule();
+ sched_profile();
+ __schedule();
SLEEP_ON_TAIL
}

@@ -2521,7 +2529,8 @@
current->state = TASK_INTERRUPTIBLE;

SLEEP_ON_HEAD
- timeout = schedule_timeout(timeout);
+ sched_profile();
+ timeout = __schedule_timeout(timeout);
SLEEP_ON_TAIL

return timeout;
@@ -2536,7 +2545,8 @@
current->state = TASK_UNINTERRUPTIBLE;

SLEEP_ON_HEAD
- schedule();
+ sched_profile();
+ __schedule();
SLEEP_ON_TAIL
}

@@ -2549,7 +2559,8 @@
current->state = TASK_UNINTERRUPTIBLE;

SLEEP_ON_HEAD
- timeout = schedule_timeout(timeout);
+ sched_profile();
+ timeout = __schedule_timeout(timeout);
SLEEP_ON_TAIL

return timeout;
@@ -2987,7 +2998,7 @@
* to the expired array. If there are no other threads running on this
* CPU then this function will return.
*/
-asmlinkage long sys_sched_yield(void)
+static long sched_yield(void)
{
runqueue_t *rq = this_rq_lock();
prio_array_t *array = current->array;
@@ -3013,15 +3024,22 @@
_raw_spin_unlock(&rq->lock);
preempt_enable_no_resched();

- schedule();
+ __schedule();

return 0;
}

+asmlinkage long sys_sched_yield(void)
+{
+ __sched_profile(sys_sched_yield);
+ return sched_yield();
+}
+
void __sched __cond_resched(void)
{
set_current_state(TASK_RUNNING);
- schedule();
+ sched_profile();
+ __schedule();
}

EXPORT_SYMBOL(__cond_resched);
@@ -3035,7 +3053,8 @@
void __sched yield(void)
{
set_current_state(TASK_RUNNING);
- sys_sched_yield();
+ sched_profile();
+ sched_yield();
}

EXPORT_SYMBOL(yield);
@@ -3052,7 +3071,8 @@
struct runqueue *rq = this_rq();

atomic_inc(&rq->nr_iowait);
- schedule();
+ sched_profile();
+ __schedule();
atomic_dec(&rq->nr_iowait);
}

@@ -3064,7 +3084,8 @@
long ret;

atomic_inc(&rq->nr_iowait);
- ret = schedule_timeout(timeout);
+ sched_profile();
+ ret = __schedule_timeout(timeout);
atomic_dec(&rq->nr_iowait);
return ret;
}
@@ -4029,3 +4050,93 @@

EXPORT_SYMBOL(__preempt_write_lock);
#endif /* defined(CONFIG_SMP) && defined(CONFIG_PREEMPT) */
+
+static atomic_t *schedprof_buf;
+static int sched_profiling;
+static unsigned long schedprof_len;
+
+#include <linux/bootmem.h>
+#include <asm/sections.h>
+
+void __sched_profile(void *__pc)
+{
+ if (schedprof_buf) {
+ unsigned long pc = (unsigned long)__pc;
+ pc -= min(pc, (unsigned long)_stext);
+ atomic_inc(&schedprof_buf[min(pc, schedprof_len)]);
+ }
+}
+
+static int __init schedprof_setup(char *s)
+{
+ int n;
+ if (get_option(&s, &n))
+ sched_profiling = 1;
+ return 1;
+}
+__setup("schedprof=", schedprof_setup);
+
+void __init schedprof_init(void)
+{
+ if (!sched_profiling)
+ return;
+ schedprof_len = (unsigned long)(_etext - _stext) + 1;
+ schedprof_buf = alloc_bootmem(schedprof_len*sizeof(atomic_t));
+ printk(KERN_INFO "Scheduler call profiling enabled\n");
+}
+
+#ifdef CONFIG_PROC_FS
+#include <linux/proc_fs.h>
+
+static ssize_t
+read_sched_profile(struct file *file, char __user *buf, size_t count, loff_t *ppos)
+{
+ unsigned long p = *ppos;
+ ssize_t read;
+ char * pnt;
+ unsigned int sample_step = 1;
+
+ if (p >= (schedprof_len+1)*sizeof(atomic_t))
+ return 0;
+ if (count > (schedprof_len+1)*sizeof(atomic_t) - p)
+ count = (schedprof_len+1)*sizeof(atomic_t) - p;
+ read = 0;
+
+ while (p < sizeof(atomic_t) && count > 0) {
+ put_user(*((char *)(&sample_step)+p),buf);
+ buf++; p++; count--; read++;
+ }
+ pnt = (char *)schedprof_buf + p - sizeof(atomic_t);
+ if (copy_to_user(buf,(void *)pnt,count))
+ return -EFAULT;
+ read += count;
+ *ppos += read;
+ return read;
+}
+
+static ssize_t write_sched_profile(struct file *file, const char __user *buf,
+ size_t count, loff_t *ppos)
+{
+ memset(schedprof_buf, 0, sizeof(atomic_t)*schedprof_len);
+ return count;
+}
+
+static struct file_operations sched_profile_operations = {
+ .read = read_sched_profile,
+ .write = write_sched_profile,
+};
+
+static int proc_schedprof_init(void)
+{
+ struct proc_dir_entry *entry;
+ if (!sched_profiling)
+ return 1;
+ entry = create_proc_entry("schedprof", S_IWUSR | S_IRUGO, NULL);
+ if (entry) {
+ entry->proc_fops = &sched_profile_operations;
+ entry->size = sizeof(atomic_t)*(schedprof_len + 1);
+ }
+ return !!entry;
+}
+module_init(proc_schedprof_init);
+#endif
Index: schedprof-2.6.7/kernel/timer.c
===================================================================
--- schedprof-2.6.7.orig/kernel/timer.c 2004-06-15 22:19:52.000000000 -0700
+++ schedprof-2.6.7/kernel/timer.c 2004-06-24 14:03:30.242660800 -0700
@@ -1100,7 +1100,7 @@
*
* In all cases the return value is guaranteed to be non-negative.
*/
-fastcall signed long __sched schedule_timeout(signed long timeout)
+fastcall signed long __sched __schedule_timeout(signed long timeout)
{
struct timer_list timer;
unsigned long expire;
@@ -1115,7 +1115,7 @@
* but I' d like to return a valid offset (>=0) to allow
* the caller to do everything it want with the retval.
*/
- schedule();
+ __schedule();
goto out;
default:
/*
@@ -1143,7 +1143,7 @@
timer.function = process_timeout;

add_timer(&timer);
- schedule();
+ __schedule();
del_singleshot_timer_sync(&timer);

timeout = expire - jiffies;
@@ -1152,6 +1152,11 @@
return timeout < 0 ? 0 : timeout;
}

+fastcall signed long __sched schedule_timeout(signed long timeout)
+{
+ sched_profile();
+ return __schedule_timeout(timeout);
+}
EXPORT_SYMBOL(schedule_timeout);

/* Thread ID - the internal kernel "pid" */
Index: schedprof-2.6.7/arch/alpha/kernel/semaphore.c
===================================================================
--- schedprof-2.6.7.orig/arch/alpha/kernel/semaphore.c 2004-06-15 22:19:23.000000000 -0700
+++ schedprof-2.6.7/arch/alpha/kernel/semaphore.c 2004-06-24 14:02:48.302036744 -0700
@@ -66,7 +66,6 @@
{
struct task_struct *tsk = current;
DECLARE_WAITQUEUE(wait, tsk);
-
#ifdef CONFIG_DEBUG_SEMAPHORE
printk("%s(%d): down failed(%p)\n",
tsk->comm, tsk->pid, sem);
@@ -83,7 +82,8 @@
* that we are asleep, and then sleep.
*/
while (__sem_update_count(sem, -1) <= 0) {
- schedule();
+ sched_profile();
+ __schedule();
set_task_state(tsk, TASK_UNINTERRUPTIBLE);
}
remove_wait_queue(&sem->wait, &wait);
@@ -108,7 +108,6 @@
struct task_struct *tsk = current;
DECLARE_WAITQUEUE(wait, tsk);
long ret = 0;
-
#ifdef CONFIG_DEBUG_SEMAPHORE
printk("%s(%d): down failed(%p)\n",
tsk->comm, tsk->pid, sem);
@@ -129,7 +128,8 @@
ret = -EINTR;
break;
}
- schedule();
+ sched_profile();
+ __schedule();
set_task_state(tsk, TASK_INTERRUPTIBLE);
}

Index: schedprof-2.6.7/arch/arm/kernel/semaphore.c
===================================================================
--- schedprof-2.6.7.orig/arch/arm/kernel/semaphore.c 2004-06-15 22:19:17.000000000 -0700
+++ schedprof-2.6.7/arch/arm/kernel/semaphore.c 2004-06-24 14:02:48.308035832 -0700
@@ -78,8 +78,8 @@
}
sem->sleepers = 1; /* us - see -1 above */
spin_unlock_irq(&semaphore_lock);
-
- schedule();
+ sched_profile();
+ __schedule();
tsk->state = TASK_UNINTERRUPTIBLE;
spin_lock_irq(&semaphore_lock);
}
@@ -128,8 +128,8 @@
}
sem->sleepers = 1; /* us - see -1 above */
spin_unlock_irq(&semaphore_lock);
-
- schedule();
+ sched_profile();
+ __schedule();
tsk->state = TASK_INTERRUPTIBLE;
spin_lock_irq(&semaphore_lock);
}
Index: schedprof-2.6.7/arch/arm26/kernel/semaphore.c
===================================================================
--- schedprof-2.6.7.orig/arch/arm26/kernel/semaphore.c 2004-06-15 22:19:02.000000000 -0700
+++ schedprof-2.6.7/arch/arm26/kernel/semaphore.c 2004-06-24 14:02:48.310035528 -0700
@@ -79,8 +79,8 @@
}
sem->sleepers = 1; /* us - see -1 above */
spin_unlock_irq(&semaphore_lock);
-
- schedule();
+ sched_profile();
+ __schedule();
tsk->state = TASK_UNINTERRUPTIBLE;
spin_lock_irq(&semaphore_lock);
}
@@ -129,8 +129,8 @@
}
sem->sleepers = 1; /* us - see -1 above */
spin_unlock_irq(&semaphore_lock);
-
- schedule();
+ sched_profile();
+ __schedule();
tsk->state = TASK_INTERRUPTIBLE;
spin_lock_irq(&semaphore_lock);
}
Index: schedprof-2.6.7/arch/cris/kernel/semaphore.c
===================================================================
--- schedprof-2.6.7.orig/arch/cris/kernel/semaphore.c 2004-06-15 22:19:11.000000000 -0700
+++ schedprof-2.6.7/arch/cris/kernel/semaphore.c 2004-06-24 14:02:48.312035224 -0700
@@ -101,7 +101,8 @@
DOWN_HEAD(TASK_UNINTERRUPTIBLE)
if (waking_non_zero(sem))
break;
- schedule();
+ sched_profile();
+ __schedule();
DOWN_TAIL(TASK_UNINTERRUPTIBLE)
}

@@ -119,7 +120,8 @@
ret = 0;
break;
}
- schedule();
+ sched_profile();
+ __schedule();
DOWN_TAIL(TASK_INTERRUPTIBLE)
return ret;
}
Index: schedprof-2.6.7/arch/h8300/kernel/semaphore.c
===================================================================
--- schedprof-2.6.7.orig/arch/h8300/kernel/semaphore.c 2004-06-15 22:19:36.000000000 -0700
+++ schedprof-2.6.7/arch/h8300/kernel/semaphore.c 2004-06-24 14:02:48.314034920 -0700
@@ -103,7 +103,8 @@
DOWN_HEAD(TASK_UNINTERRUPTIBLE)
if (waking_non_zero(sem))
break;
- schedule();
+ sched_profile();
+ __schedule();
DOWN_TAIL(TASK_UNINTERRUPTIBLE)
}

@@ -122,7 +123,8 @@
ret = 0;
break;
}
- schedule();
+ sched_profile();
+ __schedule();
DOWN_TAIL(TASK_INTERRUPTIBLE)
return ret;
}
Index: schedprof-2.6.7/arch/i386/kernel/semaphore.c
===================================================================
--- schedprof-2.6.7.orig/arch/i386/kernel/semaphore.c 2004-06-15 22:19:17.000000000 -0700
+++ schedprof-2.6.7/arch/i386/kernel/semaphore.c 2004-06-24 14:02:48.316034616 -0700
@@ -79,8 +79,8 @@
}
sem->sleepers = 1; /* us - see -1 above */
spin_unlock_irqrestore(&sem->wait.lock, flags);
-
- schedule();
+ sched_profile();
+ __schedule();

spin_lock_irqsave(&sem->wait.lock, flags);
tsk->state = TASK_UNINTERRUPTIBLE;
@@ -132,8 +132,8 @@
}
sem->sleepers = 1; /* us - see -1 above */
spin_unlock_irqrestore(&sem->wait.lock, flags);
-
- schedule();
+ sched_profile();
+ __schedule();

spin_lock_irqsave(&sem->wait.lock, flags);
tsk->state = TASK_INTERRUPTIBLE;
Index: schedprof-2.6.7/arch/ia64/kernel/semaphore.c
===================================================================
--- schedprof-2.6.7.orig/arch/ia64/kernel/semaphore.c 2004-06-15 22:19:17.000000000 -0700
+++ schedprof-2.6.7/arch/ia64/kernel/semaphore.c 2004-06-24 14:02:48.318034312 -0700
@@ -70,8 +70,8 @@
}
sem->sleepers = 1; /* us - see -1 above */
spin_unlock_irqrestore(&sem->wait.lock, flags);
-
- schedule();
+ sched_profile();
+ __schedule();

spin_lock_irqsave(&sem->wait.lock, flags);
tsk->state = TASK_UNINTERRUPTIBLE;
@@ -123,8 +123,8 @@
}
sem->sleepers = 1; /* us - see -1 above */
spin_unlock_irqrestore(&sem->wait.lock, flags);
-
- schedule();
+ sched_profile();
+ __schedule();

spin_lock_irqsave(&sem->wait.lock, flags);
tsk->state = TASK_INTERRUPTIBLE;
Index: schedprof-2.6.7/arch/m68k/kernel/semaphore.c
===================================================================
--- schedprof-2.6.7.orig/arch/m68k/kernel/semaphore.c 2004-06-15 22:19:43.000000000 -0700
+++ schedprof-2.6.7/arch/m68k/kernel/semaphore.c 2004-06-24 14:02:48.320034008 -0700
@@ -103,7 +103,8 @@
DOWN_HEAD(TASK_UNINTERRUPTIBLE)
if (waking_non_zero(sem))
break;
- schedule();
+ sched_profile();
+ __schedule();
DOWN_TAIL(TASK_UNINTERRUPTIBLE)
}

@@ -122,7 +123,8 @@
ret = 0;
break;
}
- schedule();
+ sched_profile();
+ __schedule();
DOWN_TAIL(TASK_INTERRUPTIBLE)
return ret;
}
Index: schedprof-2.6.7/arch/m68knommu/kernel/semaphore.c
===================================================================
--- schedprof-2.6.7.orig/arch/m68knommu/kernel/semaphore.c 2004-06-15 22:20:26.000000000 -0700
+++ schedprof-2.6.7/arch/m68knommu/kernel/semaphore.c 2004-06-24 14:02:48.321033856 -0700
@@ -104,7 +104,8 @@
DOWN_HEAD(TASK_UNINTERRUPTIBLE)
if (waking_non_zero(sem))
break;
- schedule();
+ sched_profile();
+ __schedule();
DOWN_TAIL(TASK_UNINTERRUPTIBLE)
}

@@ -123,7 +124,8 @@
ret = 0;
break;
}
- schedule();
+ sched_profile();
+ __schedule();
DOWN_TAIL(TASK_INTERRUPTIBLE)
return ret;
}
Index: schedprof-2.6.7/arch/mips/kernel/semaphore.c
===================================================================
--- schedprof-2.6.7.orig/arch/mips/kernel/semaphore.c 2004-06-15 22:19:37.000000000 -0700
+++ schedprof-2.6.7/arch/mips/kernel/semaphore.c 2004-06-24 14:02:48.323033552 -0700
@@ -132,7 +132,8 @@
for (;;) {
if (waking_non_zero(sem))
break;
- schedule();
+ sched_profile();
+ __schedule();
__set_current_state(TASK_UNINTERRUPTIBLE);
}
__set_current_state(TASK_RUNNING);
@@ -261,7 +262,8 @@
ret = 0;
break;
}
- schedule();
+ sched_profile();
+ __schedule();
__set_current_state(TASK_INTERRUPTIBLE);
}
__set_current_state(TASK_RUNNING);
Index: schedprof-2.6.7/arch/parisc/kernel/semaphore.c
===================================================================
--- schedprof-2.6.7.orig/arch/parisc/kernel/semaphore.c 2004-06-15 22:19:43.000000000 -0700
+++ schedprof-2.6.7/arch/parisc/kernel/semaphore.c 2004-06-24 14:02:48.325033248 -0700
@@ -68,7 +68,8 @@
/* we can _read_ this without the sentry */
if (sem->count != -1)
break;
- schedule();
+ sched_profile();
+ __schedule();
}

DOWN_TAIL
@@ -89,7 +90,8 @@
ret = -EINTR;
break;
}
- schedule();
+ sched_profile();
+ __schedule();
}

DOWN_TAIL
Index: schedprof-2.6.7/arch/ppc/kernel/semaphore.c
===================================================================
--- schedprof-2.6.7.orig/arch/ppc/kernel/semaphore.c 2004-06-15 22:19:44.000000000 -0700
+++ schedprof-2.6.7/arch/ppc/kernel/semaphore.c 2004-06-24 14:02:48.327032944 -0700
@@ -86,7 +86,8 @@
* that we are asleep, and then sleep.
*/
while (__sem_update_count(sem, -1) <= 0) {
- schedule();
+ sched_profile();
+ __schedule();
tsk->state = TASK_UNINTERRUPTIBLE;
}
remove_wait_queue(&sem->wait, &wait);
@@ -121,7 +122,8 @@
retval = -EINTR;
break;
}
- schedule();
+ sched_profile();
+ __schedule();
tsk->state = TASK_INTERRUPTIBLE;
}
tsk->state = TASK_RUNNING;
Index: schedprof-2.6.7/arch/ppc64/kernel/semaphore.c
===================================================================
--- schedprof-2.6.7.orig/arch/ppc64/kernel/semaphore.c 2004-06-15 22:18:57.000000000 -0700
+++ schedprof-2.6.7/arch/ppc64/kernel/semaphore.c 2004-06-24 14:02:48.329032640 -0700
@@ -86,7 +86,8 @@
* that we are asleep, and then sleep.
*/
while (__sem_update_count(sem, -1) <= 0) {
- schedule();
+ sched_profile();
+ __schedule();
set_task_state(tsk, TASK_UNINTERRUPTIBLE);
}
remove_wait_queue(&sem->wait, &wait);
@@ -120,7 +121,8 @@
retval = -EINTR;
break;
}
- schedule();
+ sched_profile();
+ __schedule();
set_task_state(tsk, TASK_INTERRUPTIBLE);
}
remove_wait_queue(&sem->wait, &wait);
Index: schedprof-2.6.7/arch/s390/kernel/semaphore.c
===================================================================
--- schedprof-2.6.7.orig/arch/s390/kernel/semaphore.c 2004-06-15 22:20:03.000000000 -0700
+++ schedprof-2.6.7/arch/s390/kernel/semaphore.c 2004-06-24 14:02:48.331032336 -0700
@@ -69,7 +69,8 @@
__set_task_state(tsk, TASK_UNINTERRUPTIBLE);
add_wait_queue_exclusive(&sem->wait, &wait);
while (__sem_update_count(sem, -1) <= 0) {
- schedule();
+ sched_profile();
+ __schedule();
set_task_state(tsk, TASK_UNINTERRUPTIBLE);
}
remove_wait_queue(&sem->wait, &wait);
@@ -97,7 +98,8 @@
retval = -EINTR;
break;
}
- schedule();
+ sched_profile();
+ __schedule();
set_task_state(tsk, TASK_INTERRUPTIBLE);
}
remove_wait_queue(&sem->wait, &wait);
Index: schedprof-2.6.7/arch/sh/kernel/semaphore.c
===================================================================
--- schedprof-2.6.7.orig/arch/sh/kernel/semaphore.c 2004-06-15 22:19:44.000000000 -0700
+++ schedprof-2.6.7/arch/sh/kernel/semaphore.c 2004-06-24 14:02:48.332032184 -0700
@@ -110,7 +110,8 @@
DOWN_HEAD(TASK_UNINTERRUPTIBLE)
if (waking_non_zero(sem))
break;
- schedule();
+ sched_profile();
+ __schedule();
DOWN_TAIL(TASK_UNINTERRUPTIBLE)
}

@@ -128,7 +129,8 @@
ret = 0;
break;
}
- schedule();
+ sched_profile();
+ __schedule();
DOWN_TAIL(TASK_INTERRUPTIBLE)
return ret;
}
Index: schedprof-2.6.7/arch/sparc/kernel/semaphore.c
===================================================================
--- schedprof-2.6.7.orig/arch/sparc/kernel/semaphore.c 2004-06-15 22:20:04.000000000 -0700
+++ schedprof-2.6.7/arch/sparc/kernel/semaphore.c 2004-06-24 14:02:48.334031880 -0700
@@ -68,8 +68,8 @@
}
sem->sleepers = 1; /* us - see -1 above */
spin_unlock_irq(&semaphore_lock);
-
- schedule();
+ sched_profile();
+ __schedule();
tsk->state = TASK_UNINTERRUPTIBLE;
spin_lock_irq(&semaphore_lock);
}
@@ -118,8 +118,8 @@
}
sem->sleepers = 1; /* us - see -1 above */
spin_unlock_irq(&semaphore_lock);
-
- schedule();
+ sched_profile();
+ __schedule();
tsk->state = TASK_INTERRUPTIBLE;
spin_lock_irq(&semaphore_lock);
}
Index: schedprof-2.6.7/arch/sparc64/kernel/semaphore.c
===================================================================
--- schedprof-2.6.7.orig/arch/sparc64/kernel/semaphore.c 2004-06-15 22:19:44.000000000 -0700
+++ schedprof-2.6.7/arch/sparc64/kernel/semaphore.c 2004-06-24 14:02:48.336031576 -0700
@@ -100,7 +100,8 @@
add_wait_queue_exclusive(&sem->wait, &wait);

while (__sem_update_count(sem, -1) <= 0) {
- schedule();
+ sched_profile();
+ __schedule();
tsk->state = TASK_UNINTERRUPTIBLE;
}
remove_wait_queue(&sem->wait, &wait);
@@ -208,7 +209,8 @@
retval = -EINTR;
break;
}
- schedule();
+ sched_profile();
+ __schedule();
tsk->state = TASK_INTERRUPTIBLE;
}
tsk->state = TASK_RUNNING;
Index: schedprof-2.6.7/arch/v850/kernel/semaphore.c
===================================================================
--- schedprof-2.6.7.orig/arch/v850/kernel/semaphore.c 2004-06-15 22:19:44.000000000 -0700
+++ schedprof-2.6.7/arch/v850/kernel/semaphore.c 2004-06-24 14:02:48.338031272 -0700
@@ -79,8 +79,8 @@
}
sem->sleepers = 1; /* us - see -1 above */
spin_unlock_irq(&semaphore_lock);
-
- schedule();
+ sched_profile();
+ __schedule();
tsk->state = TASK_UNINTERRUPTIBLE;
spin_lock_irq(&semaphore_lock);
}
@@ -129,8 +129,8 @@
}
sem->sleepers = 1; /* us - see -1 above */
spin_unlock_irq(&semaphore_lock);
-
- schedule();
+ sched_profile();
+ __schedule();
tsk->state = TASK_INTERRUPTIBLE;
spin_lock_irq(&semaphore_lock);
}
Index: schedprof-2.6.7/arch/x86_64/kernel/semaphore.c
===================================================================
--- schedprof-2.6.7.orig/arch/x86_64/kernel/semaphore.c 2004-06-15 22:20:26.000000000 -0700
+++ schedprof-2.6.7/arch/x86_64/kernel/semaphore.c 2004-06-24 14:02:48.340030968 -0700
@@ -80,8 +80,8 @@
}
sem->sleepers = 1; /* us - see -1 above */
spin_unlock_irqrestore(&sem->wait.lock, flags);
-
- schedule();
+ sched_profile();
+ __schedule();

spin_lock_irqsave(&sem->wait.lock, flags);
tsk->state = TASK_UNINTERRUPTIBLE;
@@ -133,8 +133,8 @@
}
sem->sleepers = 1; /* us - see -1 above */
spin_unlock_irqrestore(&sem->wait.lock, flags);
-
- schedule();
+ sched_profile();
+ __schedule();

spin_lock_irqsave(&sem->wait.lock, flags);
tsk->state = TASK_INTERRUPTIBLE;
Index: schedprof-2.6.7/lib/rwsem.c
===================================================================
--- schedprof-2.6.7.orig/lib/rwsem.c 2004-06-15 22:18:54.000000000 -0700
+++ schedprof-2.6.7/lib/rwsem.c 2004-06-24 14:02:48.343030512 -0700
@@ -163,7 +163,7 @@
for (;;) {
if (!waiter->task)
break;
- schedule();
+ __schedule();
set_task_state(tsk, TASK_UNINTERRUPTIBLE);
}

@@ -178,7 +178,7 @@
struct rw_semaphore fastcall __sched *rwsem_down_read_failed(struct rw_semaphore *sem)
{
struct rwsem_waiter waiter;
-
+ sched_profile();
rwsemtrace(sem,"Entering rwsem_down_read_failed");

waiter.flags = RWSEM_WAITING_FOR_READ;
@@ -194,7 +194,7 @@
struct rw_semaphore fastcall __sched *rwsem_down_write_failed(struct rw_semaphore *sem)
{
struct rwsem_waiter waiter;
-
+ sched_profile();
rwsemtrace(sem,"Entering rwsem_down_write_failed");

waiter.flags = RWSEM_WAITING_FOR_WRITE;
Index: schedprof-2.6.7/init/main.c
===================================================================
--- schedprof-2.6.7.orig/init/main.c 2004-06-15 22:19:01.000000000 -0700
+++ schedprof-2.6.7/init/main.c 2004-06-24 14:02:48.346030056 -0700
@@ -445,6 +445,7 @@
if (panic_later)
panic(panic_later, panic_param);
profile_init();
+ schedprof_init();
local_irq_enable();
#ifdef CONFIG_BLK_DEV_INITRD
if (initrd_start && !initrd_below_start_ok &&

2004-06-24 22:04:47

by William Lee Irwin III

Subject: Re: finish_task_switch high in profiles in 2.6.7

On Thu, Jun 24, 2004 at 02:22:48PM -0700, William Lee Irwin III wrote:
> Brute-force port of schedprof to 2.6.7-final. Compile-tested on sparc64
> only. No runtime testing.
> Given that the context switch rate is actually *reduced* in 2.6.7 vs.
> 2.6.5, I expect that this will not, in fact, reveal anything useful.

While I'm spraying out untested code, I might as well do these, which
I've not even compiled. =)


-- wli


Attachments:
schedprof_mmap-2.6.7 (3.35 kB)
schedprof_proc_init-2.6.7 (627.00 B)
schedprof_shift-2.6.7 (1.43 kB)

2004-06-25 06:52:53

by William Lee Irwin III

Subject: Re: finish_task_switch high in profiles in 2.6.7

On Thu, Jun 24, 2004 at 02:56:45PM -0700, William Lee Irwin III wrote:
> While I'm spraying out untested code, I might as well do these, which
> I've not even compiled. =)

Okay, these compile -- ship it! =)


-- wli


Attachments:
schedprof_mmap-2.6.7 (3.67 kB)
schedprof_proc_init-2.6.7 (627.00 B)
schedprof_shift-2.6.7 (1.55 kB)