2008-03-02 05:55:06

by Allan Menezes

[permalink] [raw]
Subject: HPL Benchmark performance degradation of kernel 2.6.24.3 vs 2.6.23.14

Hi,
I have a five-node Intel Q6600 quad-core cluster that I benchmarked with
open source Open MPI software on FC8, using both its supplied kernels
(recompiled) and kernel.org kernels 2.6.23.14 and 2.6.24.3.
With GotoBLAS v1.24 and the Open MPI beta (v1.3a) in both cases, kernel
2.6.23.14 with web100 gives me 158 GFlops.
But when I recompile kernel 2.6.24.3, with or without web100, and with
6 GB of DDR2-800 RAM on each node, I get only 22-28 GFlops for 5 nodes,
whereas kernel 2.6.23.14, with or without web100, gives 156-158 GFlops.
Why is there a performance drop with kernel 2.6.24.3? All the hardware
is the same!
For inter-node communication I use three PCI Express gigabit Ethernet
cards (two Intel and one SysKonnect) per node. Measured point to point
with NetPIPE's NPtcp, all three cards achieve approximately 880 Mbps
under both kernel 2.6.24.3 and 2.6.23.14. I am also using three gigabit
switches with high bisection bandwidth for these (copper) Ethernet
cards, on three different subnets.
Yet I am getting a substantial performance drop while keeping the
hardware, Open MPI, HPL, and GotoBLAS the same. Can someone help me
figure out why?
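For reference, a typical point-to-point NPtcp run looks roughly like this
(the hostname below is illustrative, not my actual node name):

# on the receiving node
NPtcp
# on the sending node, pointing at the receiver
NPtcp -h node01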
Please find attached my kernel's .config
Cheers,
Allan Menezes


Attachments:
.config (62.33 kB)

2008-03-02 18:48:46

by Eric Dumazet

[permalink] [raw]
Subject: Re: HPL Benchmark performance degradation of kernel 2.6.24.3 vs 2.6.23.14

Allan Menezes wrote:
> Hi,
> I have a five-node Intel Q6600 quad-core cluster that I benchmarked with
> open source Open MPI software on FC8, using both its supplied kernels
> (recompiled) and kernel.org kernels 2.6.23.14 and 2.6.24.3.
> With GotoBLAS v1.24 and the Open MPI beta (v1.3a) in both cases, kernel
> 2.6.23.14 with web100 gives me 158 GFlops.
> But when I recompile kernel 2.6.24.3, with or without web100, and with
> 6 GB of DDR2-800 RAM on each node, I get only 22-28 GFlops for 5 nodes,
> whereas kernel 2.6.23.14, with or without web100, gives 156-158 GFlops.
> Why is there a performance drop with kernel 2.6.24.3? All the hardware
> is the same!
> For inter-node communication I use three PCI Express gigabit Ethernet
> cards (two Intel and one SysKonnect) per node. Measured point to point
> with NetPIPE's NPtcp, all three cards achieve approximately 880 Mbps
> under both kernel 2.6.24.3 and 2.6.23.14. I am also using three gigabit
> switches with high bisection bandwidth for these (copper) Ethernet
> cards, on three different subnets.
> Yet I am getting a substantial performance drop while keeping the
> hardware, Open MPI, HPL, and GotoBLAS the same. Can someone help me
> figure out why?
> Please find attached my kernel's .config

Hi Allan

Your setup is quite complex, so you should give more information if you want
some help here.

Is this benchmark stressing disk I/O, the task scheduler, the network
stack, memory, swap...? Hard to tell, in fact.

Examining your .config, I would point out CONFIG_SLUB_DEBUG=y
You really should disable this expensive option.
(and possibly use CONFIG_SLAB instead of CONFIG_SLUB)
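
Roughly, that means either of the following in the .config (illustrative
fragments; option names as they appear in a 2.6.24 .config, and the
SLUB_DEBUG prompt may only be selectable with CONFIG_EMBEDDED=y):

# keep SLUB but leave out the debug code
CONFIG_SLUB=y
# CONFIG_SLUB_DEBUG is not set

# or switch allocators entirely
# CONFIG_SLUB is not set
CONFIG_SLAB=y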

You should probably try the oprofile tool; its results are a good way to
get hints about a bad configuration or a kernel regression.

opcontrol --vmlinux=/boot/vmlinux-2.6.24.3 --start
<benchmarking>
opreport -l /boot/vmlinux-2.6.24.3
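
If needed, the profile data can be flushed before reporting and the daemon
shut down afterwards (standard opcontrol options):

opcontrol --dump
opcontrol --shutdown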

2008-03-02 19:03:28

by Peter Zijlstra

[permalink] [raw]
Subject: Re: HPL Benchmark performance degradation of kernel 2.6.24.3 vs 2.6.23.14


On Sun, 2008-03-02 at 19:48 +0100, Eric Dumazet wrote:

> Examining your .config, I would point out CONFIG_SLUB_DEBUG=y
> You really should disable this expensive option.

CONFIG_SLUB_DEBUG_ON is the expensive one.

> (and possibly use CONFIG_SLAB instead of CONFIG_SLUB)

That is a good thing to test indeed.
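
In other words, a configuration like the following compiles the debug code
in but leaves it inactive unless it is enabled at boot (illustrative
fragment):

CONFIG_SLUB=y
CONFIG_SLUB_DEBUG=y
# CONFIG_SLUB_DEBUG_ON is not set
# runtime activation would need e.g. slub_debug=FZP on the boot command line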

2008-03-02 20:07:37

by Roger Heflin

[permalink] [raw]
Subject: Re: HPL Benchmark performance degradation of kernel 2.6.24.3 vs 2.6.23.14

Eric Dumazet wrote:
> Allan Menezes wrote:
>> Hi,
>> I have a five-node Intel Q6600 quad-core cluster that I benchmarked
>> with open source Open MPI software on FC8, using both its supplied
>> kernels (recompiled) and kernel.org kernels 2.6.23.14 and 2.6.24.3.
>> With GotoBLAS v1.24 and the Open MPI beta (v1.3a) in both cases,
>> kernel 2.6.23.14 with web100 gives me 158 GFlops.
>> But when I recompile kernel 2.6.24.3, with or without web100, and with
>> 6 GB of DDR2-800 RAM on each node, I get only 22-28 GFlops for 5
>> nodes, whereas kernel 2.6.23.14, with or without web100, gives 156-158
>> GFlops.
>> Why is there a performance drop with kernel 2.6.24.3? All the hardware
>> is the same!
>> For inter-node communication I use three PCI Express gigabit Ethernet
>> cards (two Intel and one SysKonnect) per node. Measured point to point
>> with NetPIPE's NPtcp, all three cards achieve approximately 880 Mbps
>> under both kernel 2.6.24.3 and 2.6.23.14. I am also using three
>> gigabit switches with high bisection bandwidth for these (copper)
>> Ethernet cards, on three different subnets.
>> Yet I am getting a substantial performance drop while keeping the
>> hardware, Open MPI, HPL, and GotoBLAS the same. Can someone help me
>> figure out why?
>> Please find attached my kernel's .config
>
> Hi Allan
>
> Your setup is quite complex, so you should give more information if you
> want some help here.
>
> Is this benchmark stressing disk I/O, the task scheduler, the network
> stack, memory, swap...? Hard to tell, in fact.
>
> Examining your .config, I would point out CONFIG_SLUB_DEBUG=y
> You really should disable this expensive option.
> (and possibly use CONFIG_SLAB instead of CONFIG_SLUB)
>
> You should probably try the oprofile tool; its results are a good way
> to get hints about a bad configuration or a kernel regression.
>
> opcontrol --vmlinux=/boot/vmlinux-2.6.24.3 --start
> <benchmarking>
> opreport -l /boot/vmlinux-2.6.24.3

I am not the original reporter, but for what it's worth: HPL tests CPU
and, when more than one machine is used, mostly networking speed; run on
a single machine, it tests whichever interprocess communication mechanism
is being used.

It is floating-point work with communication to keep the different
processes in sync.

Generally, if it is abnormally slow, you either have an errant process on
one machine, a problem with one machine, or a problem with network
latency (or possibly some other source of latency).

I have never seen the scheduler make a big difference (unless the
scheduler is really badly broken). Configured for speed, HPL does little
or no swapping (though I have seen machines tuned to page out early cause
slight slowdowns in the numbers when everything should have fit nicely in
memory), and it does little or no disk I/O in the timed portions of the
calculation.

It is pretty much all network latency and floating point speed.
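
For context, a run of this kind across five quad-core nodes is typically
launched with something like the following (hostnames, slot counts and the
20-process layout are illustrative; the P x Q grid in HPL.dat has to match):

# hostfile: one line per node, e.g. "node1 slots=4" ... "node5 slots=4"
mpirun -np 20 --hostfile hostfile ./xhpl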


Roger