Hi,
I'd like to propose the following for 2.6.1-mm/2.6.2. On systems with a
large number of CPUs, the printk's flowing by as each CPU boots start
to become a real console hog.
The following patch eliminates a couple of them (I already sent a patch to
David for the ia64-specific ones) and changes the per-node
"Building zonelist for node : X" message into a single "Built Y zonelists".
IMHO it doesn't make sense to print a line for each zonelist, since they are
built in a for loop running from 0 to Y-1 anyway.
The patch also nukes a few new printk's that were introduced with the
scheduler changes to the NUMA code in -mm3; if those are still needed,
I won't fight for that part of the patch.
Cheers,
Jes
diff -urN -X /usr/people/jes/exclude-linux --exclude=acpi --exclude=ia64 orig/linux-2.6.1-mm3-boot/init/main.c linux-2.6.1-mm3/init/main.c
--- orig/linux-2.6.1-mm3-boot/init/main.c Wed Jan 14 02:59:53 2004
+++ linux-2.6.1-mm3/init/main.c Wed Jan 14 05:34:20 2004
@@ -346,13 +346,11 @@
if (num_online_cpus() >= max_cpus)
break;
if (cpu_possible(i) && !cpu_online(i)) {
- printk("Bringing up %i\n", i);
cpu_up(i);
}
}
/* Any cleanup work */
- printk("CPUS done %u\n", max_cpus);
smp_cpus_done(max_cpus);
#if 0
/* Get other processors into their bootup holding patterns. */
diff -urN -X /usr/people/jes/exclude-linux --exclude=acpi --exclude=ia64 orig/linux-2.6.1-mm3-boot/kernel/cpu.c linux-2.6.1-mm3/kernel/cpu.c
--- orig/linux-2.6.1-mm3-boot/kernel/cpu.c Wed Dec 17 18:59:35 2003
+++ linux-2.6.1-mm3/kernel/cpu.c Wed Jan 14 05:34:45 2004
@@ -55,7 +55,6 @@
BUG();
/* Now call notifier in preparation. */
- printk("CPU %u IS NOW UP!\n", cpu);
notifier_call_chain(&cpu_chain, CPU_ONLINE, hcpu);
out_notify:
diff -urN -X /usr/people/jes/exclude-linux --exclude=acpi --exclude=ia64 orig/linux-2.6.1-mm3-boot/kernel/sched.c linux-2.6.1-mm3/kernel/sched.c
--- orig/linux-2.6.1-mm3-boot/kernel/sched.c Wed Jan 14 03:18:28 2004
+++ linux-2.6.1-mm3/kernel/sched.c Wed Jan 14 05:35:03 2004
@@ -3242,8 +3242,6 @@
if (cpus_empty(nodemask))
continue;
- printk(KERN_INFO "NODE%d\n", i);
-
node->cpumask = nodemask;
for_each_cpu_mask(j, node->cpumask) {
@@ -3252,8 +3250,6 @@
cpus_clear(cpu->cpumask);
cpu_set(j, cpu->cpumask);
- printk(KERN_INFO "CPU%d\n", j);
-
if (!first_cpu)
first_cpu = cpu;
if (last_cpu)
diff -urN -X /usr/people/jes/exclude-linux --exclude=acpi --exclude=ia64 orig/linux-2.6.1-mm3-boot/mm/page_alloc.c linux-2.6.1-mm3/mm/page_alloc.c
--- orig/linux-2.6.1-mm3-boot/mm/page_alloc.c Wed Jan 14 02:59:53 2004
+++ linux-2.6.1-mm3/mm/page_alloc.c Wed Jan 14 05:54:32 2004
@@ -1080,7 +1080,6 @@
int i, j, k, node, local_node;
local_node = pgdat->node_id;
- printk("Building zonelist for node : %d\n", local_node);
for (i = 0; i < MAX_NR_ZONES; i++) {
struct zonelist *zonelist;
@@ -1118,6 +1117,7 @@
for(i = 0 ; i < numnodes ; i++)
build_zonelists(NODE_DATA(i));
+ printk("Built %i zonelists\n", numnodes);
}
/*
On Wed, Jan 14, 2004 at 09:30:42AM -0500, Jes Sorensen wrote:
> @@ -1118,6 +1117,7 @@
>
> for(i = 0 ; i < numnodes ; i++)
> build_zonelists(NODE_DATA(i));
> + printk("Built %i zonelists\n", numnodes);
> }
How many of these for (i = 0; i < numnodes; i++) loops do we have?
Should we have a for_each_node() function like we do for CPUs? Isn't
there a node_online() thing that many loops are missing?
Thanks,
Jesse
Jes Sorensen wrote:
>Hi,
>
>I'd like to propose the following for 2.6.1-mm/2.6.2. On systems with a
>large number of CPUs the number of printk's flowing by for each CPU
>booting starts becoming a real console hog.
>
>The following patch eliminates a couple of them (I already sent a patch to
>David for the ia64-specific ones) and changes the per-node
>"Building zonelist for node : X" message into a single "Built Y zonelists".
>IMHO it doesn't make sense to print a line for each zonelist, since they are
>built in a for loop running from 0 to Y-1 anyway.
>
>The patch also nukes a few new printk's that were introduced with the
>scheduler changes to the NUMA code in -mm3; if those are still needed,
>I won't fight for that part of the patch.
>
Thanks, I forgot to remove those printks because I don't have a NUMA
box handy, I guess. They were just there to make sure the sched domains
were being initialized properly. They can go.
I like the rest of the patch too.