This patch simply switches over to per-CPU runqueues as defined by the new
per-CPU API.
Index: linux-2.5.59/kernel/sched.c
===================================================================
RCS file: /build/cvsroot/linux-2.5.59/kernel/sched.c,v
retrieving revision 1.1.1.1
diff -u -r1.1.1.1 sched.c
--- linux-2.5.59/kernel/sched.c 17 Jan 2003 02:46:29 -0000 1.1.1.1
+++ linux-2.5.59/kernel/sched.c 17 Jan 2003 08:33:12 -0000
@@ -160,9 +160,9 @@
 	atomic_t nr_iowait;
 } ____cacheline_aligned;
 
-static struct runqueue runqueues[NR_CPUS] __cacheline_aligned;
+static DEFINE_PER_CPU(struct runqueue, runqueues);
 
-#define cpu_rq(cpu)		(runqueues + (cpu))
+#define cpu_rq(cpu)		(&per_cpu(runqueues, cpu))
 #define this_rq()		cpu_rq(smp_processor_id())
 #define task_rq(p)		cpu_rq(task_cpu(p))
 #define cpu_curr(cpu)		(cpu_rq(cpu)->curr)
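For anyone who hasn't looked at the new interface yet, it boils down to
this (simplified from include/linux/percpu.h and asm-generic/percpu.h;
the real definitions differ slightly):

	/* Each DEFINE_PER_CPU variable goes into a special section which
	 * gets replicated once per CPU at boot: */
	#define DEFINE_PER_CPU(type, name) \
		__attribute__((__section__(".data.percpu"))) __typeof__(type) per_cpu__##name

	/* per_cpu() then finds CPU n's copy at a fixed per-CPU offset: */
	#define per_cpu(var, cpu) \
		(*RELOC_HIDE(&per_cpu__##var, __per_cpu_offset[cpu]))

So cpu_rq(cpu) ends up pointing into that CPU's private copy instead of
indexing a static NR_CPUS array.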
--
function.linuxpower.ca
Zwane Mwaikambo <[email protected]> wrote:
>
> This patch simply switches over to per-CPU runqueues as defined by the new
> per-CPU API.
> ...
> +static DEFINE_PER_CPU(struct runqueue, runqueues);
These must be initialised to something; gcc-2.91/92 bug. There is a
build-time check for this, but it only detects the mistake if you're using a
compiler which has the bug.
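The workaround is just an explicit initialiser, e.g. (a sketch of the
fix, not a requirement on its final form):

	static DEFINE_PER_CPU(struct runqueue, runqueues) = {{ 0 }};

As far as I know those compilers turn uninitialised globals into common
symbols and drop the section attribute, so the variable never lands in
.data.percpu.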
Your patch works here, but I was never able to get this working when the
per-CPU areas were allocated as the CPUs come online, which is something we
kinda should work towards. This patch would need to be reverted if we try to
do that again, which is a shame, because appreciable amounts of memory would
be saved if nr_cpus < NR_CPUS. Scheduler startup is fragile...
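For illustration, CPU-online allocation would be something along these
lines (percpu_prepare_cpu() is a made-up name and the kmalloc path is
only a sketch; __per_cpu_start/__per_cpu_end and __per_cpu_offset[] are
the real linker/arch symbols):

	/* Hypothetical sketch: set up a CPU's per-CPU area as it comes
	 * online, instead of reserving NR_CPUS copies at compile time. */
	static int percpu_prepare_cpu(int cpu)
	{
		size_t size = __per_cpu_end - __per_cpu_start;
		void *area = kmalloc(size, GFP_KERNEL);	/* sketch only */

		if (area == NULL)
			return -ENOMEM;
		/* Seed the area with the compiled-in initial values. */
		memcpy(area, __per_cpu_start, size);
		__per_cpu_offset[cpu] = (char *)area - __per_cpu_start;
		return 0;
	}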
I don't think it buys us a lot, really. These structures are so humongous
that we don't get much per-cpuness in accessing them.
On Fri, 17 Jan 2003, Andrew Morton wrote:
> Zwane Mwaikambo <[email protected]> wrote:
> >
> > This patch simply switches over to per-CPU runqueues as defined by the new
> > per-CPU API.
> > ...
> > +static DEFINE_PER_CPU(struct runqueue, runqueues);
>
> These must be initialised to something; gcc-2.91/92 bug. There is a
> build-time check for this, but it only detects the mistake if you're using a
> compiler which has the bug.
Thanks, I can work around that.
> Your patch works here, but I was never able to get this working when the
> per-CPU areas were allocated as the CPUs come online, which is something we
> kinda should work towards. This patch would need to be reverted if we try to
> do that again, which is a shame, because appreciable amounts of memory would
> be saved if nr_cpus < NR_CPUS. Scheduler startup is fragile...
I think I'll have a stab at that.
> I don't think it buys us a lot, really. These structures are so humongous
> that we don't get much per-cpuness in accessing them.
Point.
Index: linux-2.5.59/kernel/sched.c
===================================================================
RCS file: /build/cvsroot/linux-2.5.59/kernel/sched.c,v
retrieving revision 1.1.1.1
diff -u -r1.1.1.1 sched.c
--- linux-2.5.59/kernel/sched.c 17 Jan 2003 02:46:29 -0000 1.1.1.1
+++ linux-2.5.59/kernel/sched.c 17 Jan 2003 10:03:31 -0000
@@ -160,9 +160,9 @@
 	atomic_t nr_iowait;
 } ____cacheline_aligned;
 
-static struct runqueue runqueues[NR_CPUS] __cacheline_aligned;
+static DEFINE_PER_CPU(struct runqueue, runqueues) = {{ 0 }};
 
-#define cpu_rq(cpu)		(runqueues + (cpu))
+#define cpu_rq(cpu)		(&per_cpu(runqueues, cpu))
 #define this_rq()		cpu_rq(smp_processor_id())
 #define task_rq(p)		cpu_rq(task_cpu(p))
 #define cpu_curr(cpu)		(cpu_rq(cpu)->curr)
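The doubled braces are because struct runqueue starts with a
spinlock_t, which is itself a struct, so the zero initialiser needs an
inner brace level to stay warning-free. Roughly (illustrative
definitions, not the exact asm/spinlock.h ones):

	typedef struct {
		volatile unsigned int lock;
	} spinlock_t;

	struct runqueue {
		spinlock_t lock;		/* first member is a struct */
		unsigned long nr_running;	/* ... */
	};

	static struct runqueue rq = {{ 0 }};	/* inner braces init the lock */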
--
function.linuxpower.ca