Hi All,
Ok, we have a Dell PE4600 with dual P4 Xeon processors. It will not balance
interrupts between the two CPUs; CPU0 always receives the bulk of the
interrupts, although both processors are being utilized by processes on
the system. This system is not yet in production, so I have some time to
try things on it.
Current kernel is 2.4.18 with the LSE APIC Routing patch
(http://sourceforge.net/projects/lse). 2.4.18 stock (well ok, -rc4) has
much the same results.
I have tried Ingo's patch from early Feb, and it managed to hang the system
on CPU0 initialization...
This is a newer system and uses the ServerWorks GCHE chipset, I believe;
dmesg actually comes up with a few unknown devices.
I'm attaching dmesg output, .config, lspci -v output, and output
from /proc/cpuinfo and /proc/interrupts.
If anything else is needed, just let me know.
Thanks and Regards,
James Bourne
--
James Bourne, Supervisor Data Centre Operations
Mount Royal College, Calgary, AB, CA
http://www.mtroyal.ab.ca
******************************************************************************
This communication is intended for the use of the recipient to which it is
addressed, and may contain confidential, personal, and or privileged
information. Please contact the sender immediately if you are not the
intended recipient of this communication, and do not copy, distribute, or
take action relying on it. Any communication received in error, or
subsequent reply, should be deleted or destroyed.
******************************************************************************
> Current kernel is 2.4.18 with the LSE APIC Routing patch
> (http://sourceforge.net/projects/lse). 2.4.18 stock (well ok, -rc4)
> has much the same results.
I hate to ask trivial questions, but you did read the notes that
come with the APIC routing patch, and actually enable it, right?
IIRC, there was something about a command line parameter needed.
just checking ...
M.
On Tue, 16 Apr 2002, Martin J. Bligh wrote:
> > Current kernel is 2.4.18 with the LSE APIC Routing patch
> > (http://sourceforge.net/projects/lse). 2.4.18 stock (well ok, -rc4)
> > has much the same results.
>
> I hate to ask trivial questions, but you did read the notes that
> come with the APIC routing patch, and actually enable it, right?
> IIRC, there was something about a command line parameter needed.
>
> just checking ...
:) I knew I should have included that in the original message, but it's
also in the dmesg output. append="idle=poll" is there.
There may be other issues, and I need to get physical access to the system
to check (tomorrow)...
Regards
James Bourne
>
> M.
>
--
James Bourne, Supervisor Data Centre Operations
Mount Royal College, Calgary, AB, CA
http://www.mtroyal.ab.ca
At Tue, 16 Apr 2002 16:29:32 -0600 (MDT),
James Bourne <[email protected]> wrote:
> Hi All,
> Ok, we have a Dell PE4600 with dual P4 Xeon processors. It will not balance
> interrupts between the two CPUs; CPU0 always receives the bulk of the
> interrupts, although both processors are being utilized by processes on
> the system. This system is not yet in production, so I have some time to
> try things on it.
>
> Current kernel is 2.4.18 with the LSE APIC Routing patch
> (http://sourceforge.net/projects/lse). 2.4.18 stock (well ok, -rc4) has
> much the same results.
We observed a similar problem on our i860-based P4 SMP machine.
We ran several tests and found the following:
- Only the upper 4 bits of the task priority register are effective on
  the Pentium 4.
The LSE APIC Routing patch assigns values from 0x10 to 0x18 to the TPR
for setting task priority, which means the upper 4 bits are always '0001'.
Based on the observation above, these values are useless for proper
interrupt distribution.
To get the expected behaviour, the upper 4 bits must be changed.
You may refer to the following thread, which discusses this problem:
"P4 SMP load balancing"
http://marc.theaimsgroup.com/?t=100287923700006&r=1&w=2
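To make the effect of the upper nibble concrete, here is a small illustrative
C sketch (not the LSE patch itself; the helper name and the "fixed" values are
only examples) showing why TPR values in the 0x10-0x18 range all collapse to
the same priority class on a P4, while values that differ in the upper 4 bits
do not:

/*
 * Illustrative sketch only -- not the LSE patch.  On the P4 the local
 * APIC appears to honour just the upper nibble (bits 7:4) of the TPR,
 * so values 0x10..0x18 all map to the same priority class '0001',
 * while values differing in the upper 4 bits map to distinct classes.
 */
#include <stdio.h>

/* Priority class as (assumed to be) seen by a P4: upper nibble only. */
static unsigned int p4_tpr_class(unsigned int tpr)
{
        return (tpr >> 4) & 0xf;
}

int main(void)
{
        unsigned int lse_values[]   = { 0x10, 0x14, 0x18 }; /* from the 0x10..0x18 range above */
        unsigned int fixed_values[] = { 0x10, 0x40, 0x80 }; /* hypothetical: distinct upper nibbles */
        int i;

        for (i = 0; i < 3; i++)
                printf("LSE 0x%02x -> class %u;  fixed 0x%02x -> class %u\n",
                       lse_values[i], p4_tpr_class(lse_values[i]),
                       fixed_values[i], p4_tpr_class(fixed_values[i]));
        return 0;
}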
Appendix.
Our experiments were performed on two different systems.
The following table shows the relation between the TPR value of each
CPU and the interrupt destination.
Pentium4 Machine:
CPU: Pentium4 Xeon 2.0GHz x 2
Motherboard: Supermicro P4DCE
Chipset: Intel860
Pentium3 Machine:
CPU: Pentium3 500MHz x 2
Motherboard: ASUS CUR-DLS
Chipset: Serverworks ServerSet3 LE
Boot Processor: CPU#0
   TPR Value           Interrupt CPU
 CPU#0    CPU#1        Pen3     Pen4
=========================================
  01       02          0 or 1    0
  02       01          0 or 1    0
  0f       00          0 or 1    0
-----------------------------------------
  10       01            1       1
  10       11            0       0
  11       10            1       0
  1f       10            1       0
  11       11          0 or 1    0
-----------------------------------------
  20       20          0 or 1    0
  20       10            1       1
  20       21            0       0
  21       20            1       0
----
Shuji YAMAMURA
Grid Computing & Bioinformatics Laboratory, FUJITSU Laboratories, LTD.
E-mail: [email protected]
Ok,
After Ingo forwarded me his original patch (I found his patch via a web
based medium, which had converted all of the left shifts to compares, and
now I'm very glad it didn't boot...) and the system is booted and is
balancing most of the interrupts at least. Here's the current output
of /proc/interrupts
brynhild:bash$ cat /proc/interrupts
CPU0 CPU1
0: 171414 0 IO-APIC-edge timer
1: 3 2 IO-APIC-edge keyboard
2: 0 0 XT-PIC cascade
8: 1 0 IO-APIC-edge rtc
18: 8 7 IO-APIC-level aic7xxx
19: 13566 12799 IO-APIC-level eth0
20: 9 7 IO-APIC-level aic7xxx
21: 9 7 IO-APIC-level aic7xxx
27: 1572 5371 IO-APIC-level megaraid
NMI: 0 0
LOC: 171315 171251
ERR: 0
MIS: 0
So, the timer isn't being balanced still, others are (is there a
specific case in your patch for irq 0 (< 1)? I couldn't see it but
it almost looks as though it's being missed..)
Thanks to all that replied.
Regards
James
--
James Bourne, Supervisor Data Centre Operations
Mount Royal College, Calgary, AB, CA
http://www.mtroyal.ab.ca
On Wed, 17 Apr 2002, James Bourne wrote:
> After Ingo forwarded me his original patch (I found his patch via a web
> based medium, which had converted all of the left shifts to compares,
> and now I'm very glad it didn't boot...) and the system is booted and is
> balancing most of the interrupts at least. Here's the current output of
> /proc/interrupts
>
> brynhild:bash$ cat /proc/interrupts
> CPU0 CPU1
> 0: 171414 0 IO-APIC-edge timer
> 1: 3 2 IO-APIC-edge keyboard
> 2: 0 0 XT-PIC cascade
> 8: 1 0 IO-APIC-edge rtc
> 18: 8 7 IO-APIC-level aic7xxx
> 19: 13566 12799 IO-APIC-level eth0
> 20: 9 7 IO-APIC-level aic7xxx
> 21: 9 7 IO-APIC-level aic7xxx
> 27: 1572 5371 IO-APIC-level megaraid
> NMI: 0 0
> LOC: 171315 171251
> ERR: 0
> MIS: 0
it's looking good.
> So, the timer isn't being balanced still, others are (is there a
> specific case in your patch for irq 0 (< 1)? I couldn't see it but it
> almost looks as though it's being missed..)
it's a separate bug, solved by a separate patch.
Ingo
On Wed, 17 Apr 2002, Ingo Molnar wrote:
>
> On Wed, 17 Apr 2002, James Bourne wrote:
>
> > After Ingo forwarded me his original patch (I found his patch via a web
> > based medium, which had converted all of the left shifts to compares,
> > and now I'm very glad it didn't boot...) and the system is booted and is
> > balancing most of the interrupts at least. Here's the current output of
> > /proc/interrupts
> >
> > brynhild:bash$ cat /proc/interrupts
> > CPU0 CPU1
> > 0: 171414 0 IO-APIC-edge timer
> > 1: 3 2 IO-APIC-edge keyboard
> > 2: 0 0 XT-PIC cascade
> > 8: 1 0 IO-APIC-edge rtc
> > 18: 8 7 IO-APIC-level aic7xxx
> > 19: 13566 12799 IO-APIC-level eth0
> > 20: 9 7 IO-APIC-level aic7xxx
> > 21: 9 7 IO-APIC-level aic7xxx
> > 27: 1572 5371 IO-APIC-level megaraid
> > NMI: 0 0
> > LOC: 171315 171251
> > ERR: 0
> > MIS: 0
>
> it's looking good.
>
> > So, the timer isn't being balanced still, others are (is there a
> > specific case in your patch for irq 0 (< 1)? I couldn't see it but it
> > almost looks as though it's being missed..)
>
> it's a separate bug, solved by a separate patch.
>
> Ingo
>
Ingo,
Are any of these patches going into the mainline kernel soon ?
Regards,
--
Steffen Persvold | Scalable Linux Systems | Try out the world's best
mailto:[email protected] | http://www.scali.com | performing MPI implementation:
Tel: (+47) 2262 8950 | Olaf Helsets vei 6 | - ScaMPI 1.13.8 -
Fax: (+47) 2262 8951 | N0621 Oslo, NORWAY | >320MBytes/s and <4uS latency
On Wed, 17 Apr 2002, Steffen Persvold wrote:
> Are any of these patches going into the mainline kernel soon ?
my irqbalance patch is in Linus' tree already, it should show up in the
next 2.5.9-pre kernel.
Ingo
On Wed, 17 Apr 2002, Ingo Molnar wrote:
>
> On Wed, 17 Apr 2002, James Bourne wrote:
> > So, the timer isn't being balanced still, others are (is there a
> > specific case in your patch for irq 0 (< 1)? I couldn't see it but it
> > almost looks as though it's being missed..)
>
> it's a separate bug, solved by a separate patch.
>
Where would I find this separate patch? Is there something I could do
some testing on?
Thanks and regards
James
> Ingo
>
--
James Bourne, Supervisor Data Centre Operations
Mount Royal College, Calgary, AB, CA
http://www.mtroyal.ab.ca
On Wed, 17 Apr 2002, Ingo Molnar wrote:
>
> On Wed, 17 Apr 2002, James Bourne wrote:
>
> > After Ingo forwarded me his original patch (I found his patch via a web
> > based medium, which had converted all of the left shifts to compares,
> > and now I'm very glad it didn't boot...) and the system is booted and is
> > balancing most of the interrupts at least. Here's the current output of
> > /proc/interrupts
> >
> > brynhild:bash$ cat /proc/interrupts
> > CPU0 CPU1
> > 0: 171414 0 IO-APIC-edge timer
> > 1: 3 2 IO-APIC-edge keyboard
> > 2: 0 0 XT-PIC cascade
> > 8: 1 0 IO-APIC-edge rtc
> > 18: 8 7 IO-APIC-level aic7xxx
> > 19: 13566 12799 IO-APIC-level eth0
> > 20: 9 7 IO-APIC-level aic7xxx
> > 21: 9 7 IO-APIC-level aic7xxx
> > 27: 1572 5371 IO-APIC-level megaraid
> > NMI: 0 0
> > LOC: 171315 171251
> > ERR: 0
> > MIS: 0
>
> it's looking good.
>
> > So, the timer isn't being balanced still, others are (is there a
> > specific case in your patch for irq 0 (< 1)? I couldn't see it but it
> > almost looks as though it's being missed..)
>
> it's a separate bug, solved by a separate patch.
>
Hi again,
Hmm, is that something ServerWorks specific because on my Plumas chipset
the timer interrupt is balanced just fine :
(sp@puma2:~)> cat /proc/interrupts
CPU0 CPU1
0: 14358402 14297319 IO-APIC-edge timer
1: 2 1 IO-APIC-edge keyboard
2: 0 0 XT-PIC cascade
4: 336 325 IO-APIC-edge serial
8: 1 0 IO-APIC-edge rtc
9: 0 0 IO-APIC-edge acpi
15: 3 1 IO-APIC-edge ide1
16: 0 0 IO-APIC-level usb-uhci
17: 576744 574959 IO-APIC-level eth0
18: 0 0 IO-APIC-level usb-uhci
19: 0 0 IO-APIC-level usb-uhci
28: 72602 71619 IO-APIC-level aic7xxx
29: 8 8 IO-APIC-level aic7xxx
31: 0 0 IO-APIC-level e1000
48: 289545 269389 IO-APIC-level ssci
NMI: 0 0
LOC: 28654183 28654202
PMC: 0 0
ERR: 0
MIS: 0
Regards,
--
Steffen Persvold | Scalable Linux Systems | Try out the world's best
mailto:[email protected] | http://www.scali.com | performing MPI implementation:
Tel: (+47) 2262 8950 | Olaf Helsets vei 6 | - ScaMPI 1.13.8 -
Fax: (+47) 2262 8951 | N0621 Oslo, NORWAY | >320MBytes/s and <4uS latency
On Wed, 17 Apr 2002, Ingo Molnar wrote:
>
> On Wed, 17 Apr 2002, Steffen Persvold wrote:
>
> > Are any of these patches going into the mainline kernel soon ?
>
> my irqbalance patch is in Linus' tree already, it should show up in the
> next 2.5.9-pre kernel.
>
What about 2.4.x ?
Regards,
--
Steffen Persvold | Scalable Linux Systems | Try out the world's best
mailto:[email protected] | http://www.scali.com | performing MPI implementation:
Tel: (+47) 2262 8950 | Olaf Helsets vei 6 | - ScaMPI 1.13.8 -
Fax: (+47) 2262 8951 | N0621 Oslo, NORWAY | >320MBytes/s and <4uS latency
On Wed, 17 Apr 2002, James Bourne wrote:
> Where would I find this separate patch? Is there something I could do
> some testing on?
the timer irq imbalance problem should be solved by the attached patch.
Ingo
diff -up --recursive --new-file linux-2.4.18.macro/arch/i386/kernel/io_apic.c linux-2.4.18/arch/i386/kernel/io_apic.c
--- linux-2.4.18.macro/arch/i386/kernel/io_apic.c Fri Nov 23 15:32:04 2001
+++ linux-2.4.18/arch/i386/kernel/io_apic.c Fri Mar 1 14:58:20 2002
@@ -67,7 +67,7 @@ static struct irq_pin_list {
* shared ISA-space IRQs, so we have to support them. We are super
* fast in the common case, and fast for shared ISA-space IRQs.
*/
-static void add_pin_to_irq(unsigned int irq, int apic, int pin)
+static void __init add_pin_to_irq(unsigned int irq, int apic, int pin)
{
static int first_free_entry = NR_IRQS;
struct irq_pin_list *entry = irq_2_pin + irq;
@@ -85,6 +85,26 @@ static void add_pin_to_irq(unsigned int
entry->pin = pin;
}
+/*
+ * Reroute an IRQ to a different pin.
+ */
+static void __init replace_pin_at_irq(unsigned int irq,
+ int oldapic, int oldpin,
+ int newapic, int newpin)
+{
+ struct irq_pin_list *entry = irq_2_pin + irq;
+
+ while (1) {
+ if (entry->apic == oldapic && entry->pin == oldpin) {
+ entry->apic = newapic;
+ entry->pin = newpin;
+ }
+ if (!entry->next)
+ break;
+ entry = irq_2_pin + entry->next;
+ }
+}
+
#define __DO_ACTION(R, ACTION, FINAL) \
\
{ \
@@ -1533,6 +1553,10 @@ static inline void check_timer(void)
setup_ExtINT_IRQ0_pin(pin2, vector);
if (timer_irq_works()) {
printk("works.\n");
+ if (pin1 != -1)
+ replace_pin_at_irq(0, 0, pin1, 0, pin2);
+ else
+ add_pin_to_irq(0, 0, pin2);
if (nmi_watchdog == NMI_IO_APIC) {
setup_nmi();
check_nmi_watchdog();
On Wed, 17 Apr 2002, Steffen Persvold wrote:
> On Wed, 17 Apr 2002, Ingo Molnar wrote:
>
> >
> > On Wed, 17 Apr 2002, James Bourne wrote:
> >
> > > After Ingo forwarded me his original patch (I found his patch via a web
> > > based medium, which had converted all of the left shifts to compares,
> > > and now I'm very glad it didn't boot...) and the system is booted and is
> > > balancing most of the interrupts at least. Here's the current output of
> > > /proc/interrupts
> > >
> > > brynhild:bash$ cat /proc/interrupts
> > > CPU0 CPU1
> > > 0: 171414 0 IO-APIC-edge timer
> > > 1: 3 2 IO-APIC-edge keyboard
> > > 2: 0 0 XT-PIC cascade
> > > 8: 1 0 IO-APIC-edge rtc
> > > 18: 8 7 IO-APIC-level aic7xxx
> > > 19: 13566 12799 IO-APIC-level eth0
> > > 20: 9 7 IO-APIC-level aic7xxx
> > > 21: 9 7 IO-APIC-level aic7xxx
> > > 27: 1572 5371 IO-APIC-level megaraid
> > > NMI: 0 0
> > > LOC: 171315 171251
> > > ERR: 0
> > > MIS: 0
> >
> > it's looking good.
> >
> > > So, the timer isn't being balanced still, others are (is there a
> > > specific case in your patch for irq 0 (< 1)? I couldn't see it but it
> > > almost looks as though it's being missed..)
> >
> > it's a separate bug, solved by a separate patch.
> >
>
> Hi again,
>
> Hmm, is that something ServerWorks specific because on my Plumas chipset
> the timer interrupt is balanced just fine :
Hi,
This has a ServerWorks GCHE chipset if I'm reading the docs I've found
correctly.
http://www.serverworks.com/products/GCHE.html
Regards,
James
--
James Bourne, Supervisor Data Centre Operations
Mount Royal College, Calgary, AB, CA
http://www.mtroyal.ab.ca
On Wed, 17 Apr 2002, Ingo Molnar wrote:
>
> On Wed, 17 Apr 2002, James Bourne wrote:
>
> > Where would I find this separate patch? Is there something I could do
> > some testing on?
>
> the timer irq imbalance problem should be solved by the attached patch.
Thanks Ingo,
That has balanced the timer irqs. I've also enabled hyper threading
(append="acpismp=force").
Here's the output from /proc/interrupts:
brynhild:bash$ cat /proc/interrupts
CPU0 CPU1 CPU2 CPU3
0: 3033 2911 2871 2880 IO-APIC-edge timer
1: 1 0 2 0 IO-APIC-edge keyboard
2: 0 0 0 0 XT-PIC cascade
8: 1 0 0 0 IO-APIC-edge rtc
18: 5 3 3 4 IO-APIC-level aic7xxx
19: 480 421 412 529 IO-APIC-level eth0
20: 5 3 4 4 IO-APIC-level aic7xxx
21: 5 3 4 4 IO-APIC-level aic7xxx
27: 588 1010 654 943 IO-APIC-level megaraid
NMI: 0 0 0 0
LOC: 11530 11528 11528 11466
ERR: 0
MIS: 0
And, you've gotta like this line:
Total of 4 processors activated (14299.95 BogoMIPS).
I'm going to do some testing on it to check its stability.
I'll let you know the results.
Thanks again and regards,
James
>
> Ingo
>
--
James Bourne, Supervisor Data Centre Operations
Mount Royal College, Calgary, AB, CA
http://www.mtroyal.ab.ca
> That has balanced the timer irqs. I've also enabled hyper threading
> (append="acpismp=force").
> ...
> And, you've gotta like this line:
> Total of 4 processors activated (14299.95 BogoMIPS).
Before you get too excited about that, how much performance boost do
you actually get by turning on Hyperthreading? ;-)
M.
On Wed, 2002-04-17 at 17:10, Martin J. Bligh wrote:
> > Total of 4 processors activated (14299.95 BogoMIPS).
>
> Before you get too excited about that, how much performance boost do
> you actually get by turning on Hyperthreading? ;-)
Certainly not the mips*4 that bogomips is showing :)
I guess that is a "bug" ?
Robert Love
On Wed, 17 Apr 2002, Martin J. Bligh wrote:
> > That has balanced the timer irqs. I've also enabled hyper threading
> > (append="acpismp=force").
> > ...
> > And, you've gotta like this line:
> > Total of 4 processors activated (14299.95 BogoMIPS).
>
> Before you get too excited about that, how much performance boost do
> you actually get by turning on Hyperthreading? ;-)
Well, that's something I'm working on finding out.
But, you have to like the looks of it!
James
>
> M.
>
--
James Bourne, Supervisor Data Centre Operations
Mount Royal College, Calgary, AB, CA
http://www.mtroyal.ab.ca
On Wed, Apr 17, 2002 at 04:15:42PM -0400, Robert Love wrote:
> > > Total of 4 processors activated (14299.95 BogoMIPS).
> Certainly not the mips*4 that bogomips is showing :)
> I guess that is a "bug" ?
Well, it justifies a comment in arch/i386/kernel/smpboot.c
1181 /*
1182 * Allow the user to impress friends.
1183 */
8-)
Dave.
--
| Dave Jones. http://www.codemonkey.org.uk
| SuSE Labs
> On Wed, 2002-04-17 at 17:10, Martin J. Bligh wrote:
> > > Total of 4 processors activated (14299.95 BogoMIPS).
> >
> > Before you get too excited about that, how much performance boost do
> > you actually get by turning on Hyperthreading? ;-)
I've seen some Intel bench's and they are specing an increase of 0% to 30%
(Though check Anandtech, he did a benchmark on his DB, and got a small
performance Decrease on a test!)
After looking at the Hyperthreading docs, it looks like they are trying to
utilize some of the idle time the execution engine has while waiting for
other ops to complete, trace cache misses, and such. Strap on an extra
processor state and get some extra oomph. Hey, the P4 can use all the
extra oomph it can get!
B.
Beware! I have seen lockups and driver sickness with hyperthreading
enabled on some motherboards. Most notably, Tyan with 2.4.19 and
2.5.6.
Jeff
On Wed, Apr 17, 2002 at 04:15:42PM -0400, Robert Love wrote:
> On Wed, 2002-04-17 at 17:10, Martin J. Bligh wrote:
> > > Total of 4 processors activated (14299.95 BogoMIPS).
> >
> > Before you get too excited about that, how much performance boost do
> > you actually get by turning on Hyperthreading? ;-)
>
> Certainly not the mips*4 that bogomips is showing :)
>
> I guess that is a "bug" ?
>
> Robert Love
>
>
On Wed, Apr 17, 2002 at 02:10:52PM -0700, Martin J. Bligh wrote:
> > That has balanced the timer irqs. I've also enabled hyper threading
> > (append="acpismp=force").
> > ...
> > And, you've gotta like this line:
> > Total of 4 processors activated (14299.95 BogoMIPS).
>
> Before you get too excited about that, how much performance boost do
> you actually get by turning on Hyperthreading? ;-)
>
In my testing with SCI, it speeds up some operations and with 3Ware
it increases throughput by about 10 MB/s. Not a lot, but there is some
improvement (if you can get around the lockups during boot).
Jeff
> M.
>
> I've seen some Intel bench's and they are specing an increase of 0% to 30%
I think I'd be less cynical about benchmark results that didn't come from the
people trying to sell the product ;-)
> (Though check Anandtech, he did a benchmark on his DB, and got a small
> performance Decrease on a test!)
;-)
Thanks for the pointer.
> After looking at the Hyperthreading docs, it looks like they are trying to
> utilize some of the idle time the execution engine has while waiting for
> other ops to complete, trace cache misses, and such. Strap on an extra
> processor state and get some extra oomph. Hey, the P4 can use all the
> extra oomph it can get!
It sounds like a good idea in theory, but the fact that they share the TLB
cache and other things makes me rather dubious about whether it's really
worth it. I'm not saying it's necessarily bad, I'm just not convinced it's good
yet. Introducing more processors to the OS has its own problems to deal
with (ones we're interested in solving anyway).
Real world benchmarks from people other than Intel should make interesting
reading .... I think we need some more smarts in the OS to take real advantage
of this (eg using the NUMA scheduling mods to create cpu pools of 2 "procs"
for each pair, etc) ... will be fun ;-)
M.
> > And, you've gotta like this line:
> > Total of 4 processors activated (14299.95 BogoMIPS).
>
> Before you get too excited about that, how much performance boost do
> you actually get by turning on Hyperthreading? ;-)
10-30% typically. I've actually seen code where you can't measure the
improvement because the main cpu code path is precision tuned to the
cache...
>> Before you get too excited about that, how much performance boost do
>> you actually get by turning on Hyperthreading? ;-)
>
> In my testing with SCI, it speeds up some operations and with 3Ware
> it increases throughput by about 10 MB/s. Not a lot, but there is some
> improvement (if you can get around the lockups during boot).
What's that 10MB/s as a percentage of the overall performance?
M.
On Wed, 17 Apr 2002, Martin J. Bligh wrote:
> > I've seen some Intel bench's and they are specing an increase of 0% to 30%
>
> I think I'd be less cynical about benchmark results that didn't come from the
> people trying to sell the product ;-)
>
[...]
> Real world benchmarks from people other than Intel should make interesting
> reading .... I think we need some more smarts in the OS to take real advantage
> of this (eg using the NUMA scheduling mods to create cpu pools of 2 "procs"
> for each pair, etc) ... will be fun ;-)
Well, here I have some.
The tests involved a kernel compile (2.4.18) from a make mrproper, a single
nbench run, and unixbench (likely the most useful).
The output from the tests can be found at http://www.hardrock.org/HT-results/
Results consist of (all with hyperthreading on and off):
o timed make bzImage and make modules using -j2 -j4 and -j6.
o BYTEmark output (single run)
o BYTE UNIX Benchmarks (Version 4.1.0) log, report, and times
o output from /proc/cpuinfo
o output from /proc/interrupts
o output from dmesg (sorry, one of them has the top clipped).
o output from lspci -v
From the results, what I see is that if you are running a system with
many concurrent tasks, then hyperthreading makes sense. Heavily
concurrent workloads (the -j4 and -j6 kernel compiles, for example)
do see improvements. This is not a very analytical test, though; I don't
have time for 10 iterations of each to get a better set of numbers.
Thanks to all and regards,
James Bourne
>
> M.
>
--
James Bourne, Supervisor Data Centre Operations
Mount Royal College, Calgary, AB, CA
http://www.mtroyal.ab.ca
>> (Though check Anandtech, he did a benchmark on his DB, and got a small
>> performance Decrease on a test!)
>Thanks for the pointer.
Also, it looks like the Athlon MP still stacks up quite nicely in his more or
less real-world benchmark, even against a pair of hyperthreaded Xeons.
>It sounds like a good idea in theory, but the fact that they share the TLB
>cache and other things makes me rather dubious about whether it's really
>worth it. I'm not saying it's necessarily bad, I'm just not convinced it's
>good yet. Introducing more processors to the OS has its own problems to
>deal with (ones we're interested in solving anyway).
I'll bet it'll be interesting, and I agree, I was dubious about the P4 at
first anyway.. :o) Though hyperthreading will only be in the Xeon.
>Real world benchmarks from people other than Intel should make interesting
>reading .... I think we need some more smarts in the OS to take real
>advantage
>of this (eg using the NUMA scheduling mods to create cpu pools of 2 "procs"
>for each pair, etc) ... will be fun ;-)
From the docs, it looks like maybe some scheduling smarts could be added:
run floating point ops on one logical processor and normal ops on the
other, and maybe some other parallelism mods.
On Wed, Apr 17, 2002 at 05:31:05PM -0700, Martin J. Bligh wrote:
> >> Before you get too excited about that, how much performance boost do
> >> you actually get by turning on Hyperthreading? ;-)
> >
> > In my testing with SCI, it speeds up some operations and with 3Ware
> > it increases throughput by about 10 MB/s. Not a lot, but there is some
> > improvement (if you can get around the lockups during boot).
>
> What's that 10MB/s as a percentage of the overall performance?
>
> M.
247-251 (with) vs. 227-238 (without) MB/S
Jeff
On Wed, 17 Apr 2002, Steffen Persvold wrote:
> Hmm, is that something ServerWorks specific because on my Plumas chipset
> the timer interrupt is balanced just fine :
It's specific to certain timer IRQ routing setups, ServerWorks being one
of them. I consider them braindamaged but that's just my opinion. We
handle them fine (modulo bugs).
--
+ Maciej W. Rozycki, Technical University of Gdansk, Poland +
+--------------------------------------------------------------+
+ e-mail: [email protected], PGP key available +
On Wed, 17 Apr 2002, James Bourne wrote:
> After Ingo forwarded me his original patch (I found his patch via a web
> based medium, which had converted all of the left shifts to compares, and
> now I'm very glad it didn't boot...) and the system is booted and is
> balancing most of the interrupts at least. Here's the current output
> of /proc/interrupts
Is this positive or negative on performance? If you have a system
getting so many interrupts that one CPU can't handle them, obviously there
is a gain. However, by thrashing the cache of all CPUs instead of just one
you have some memory performance cost.
I first looked at this for a mainframe vendor who decided that putting
all the interrupts in one CPU was better. That was then, this is now, but
I am curious about metrics, like real and system time doing a kernel
compile, etc.
--
bill davidsen <[email protected]>
CTO, TMR Associates, Inc
Doing interesting things with little computers since 1979.
Cache warmth in handling interrupts is good. In fact, this is one
of the reasons to use interrupt affinity.
But directing all interrupts to a single processor unfairly penalizes any
tasks that are scheduled to run on that processor. Under heavy interrupt
load, a task can become effectively "pinned" onto that processor, unable
to get CPU time to make progress, and unable to be scheduled somewhere
else.
Under really heavy interrupt load, it's good to have many processors
handling interrupts. It increases the rate at which the system can handle
interrupts, and it reduces the latency of individual interrupts.
Dave.
> From [email protected] Thu Apr 18 09:11:22 2002
> Date: Thu, 18 Apr 2002 12:04:35 -0400 (EDT)
> From: Bill Davidsen <[email protected]>
> To: James Bourne <[email protected]>
> cc: Linux Kernel Mailing List <[email protected]>,
> Ingo Molnar <[email protected]>
> Subject: Re: SMP P4 APIC/interrupt balancing
>
> Is this positive or negative on performance? If you have a system
> getting so many interrupts that one CPU can't handle them, obviously there
> is a gain. However, by thrashing the cache of all CPUs instead of just one
> you have some memory performance cost.
>
> I first looked at this for a mainframe vendor who decided that putting
> all the interrupts in one CPU was better. That was then, this is now, but
> I am curious about metrics, like real and system time doing a kernel
> compile, etc.
>
On Thu, 18 Apr 2002, [email protected] wrote:
> Interrupts are nicely load-balanced on my ServerWorks machine under 2.4.17:
This is always the case for dedicated inter-APIC bus setups, i.e.
everything up to P3, as the bus protocol supports priority arbitration.
--
+ Maciej W. Rozycki, Technical University of Gdansk, Poland +
+--------------------------------------------------------------+
+ e-mail: [email protected], PGP key available +
On Thu, 18 Apr 2002, Dave Olien wrote:
> Cache warmth in handling interrupts is good. In fact, this is one of
> the reasons to use interrupt affinity.
and in fact this is why IRQ handlers in the irqbalance patch stay affine
to a single CPU for at least 10 msecs. So for most practical purposes, when
there is no direct affinity between tasks and IRQs, this brings us very
close to the highest possible affinity that can be achieved.
/proc/irq/*/smp_affinity is still preserved for those workloads where some
direct relationship can be established between process activity and IRQ
load (such as perfectly partitioned server workloads).
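As a concrete usage example of that interface (purely illustrative; the IRQ
number and the mask are just taken from the listings earlier in this thread),
a small standalone C program that pins IRQ 19 to CPU1 by writing a bitmask to
its smp_affinity file might look like this:

/*
 * Illustration only: write a CPU bitmask to /proc/irq/<n>/smp_affinity
 * to restrict where that IRQ is delivered.  IRQ 19 (eth0 in the earlier
 * listing) and mask 0x2 (CPU1) are example values, nothing more.
 */
#include <stdio.h>

int main(void)
{
        FILE *f = fopen("/proc/irq/19/smp_affinity", "w");

        if (!f) {
                perror("fopen");
                return 1;
        }
        fprintf(f, "2\n");      /* bit 1 set -> deliver IRQ 19 to CPU1 only */
        fclose(f);
        return 0;
}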
> But directing all interrupts to a single processor unfairly penalizes any
> tasks that are scheduled to run on that processor. Under heavy
> interrupt load, a task can become effectively "pinned" onto that
> processor, unable to get CPU time to make progress, and unable to be
> scheduled somewhere else.
>
> Under really heavy interrupt load, it's good to have many processors
> handling interrupts. It increases the rate at which the system can handle
> interrupts, and it reduces the latency of individual interrupts.
yes, this is why the irqbalance patch goes to great lengths to assure that
distribution of IRQs is as random as possible, with the following
variation: idle CPUs are more likely to be used by the IRQ balancing
mechanism than busy CPUs.
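For illustration only (this is not the irqbalance code itself; the names and
the 2x idle weighting below are assumptions), a small user-space model of the
policy described above -- an IRQ keeps its CPU for a 10 msec warmth window and
is then re-targeted at random with a bias toward idle CPUs -- could look like
this:

/*
 * Illustrative model of the policy described above -- not the actual
 * irqbalance code.  An IRQ stays on one CPU for a 10 msec "warmth"
 * window, then a new target is drawn at random, with idle CPUs given
 * twice the weight of busy ones (the 2x factor is an assumption).
 */
#include <stdio.h>
#include <stdlib.h>

#define NR_CPUS    2
#define WARMTH_MS 10

struct irq_target {
        int cpu;                /* current destination CPU          */
        long last_move_ms;      /* time of the last redistribution  */
};

static int pick_cpu(const int idle[NR_CPUS])
{
        int weight[NR_CPUS], total = 0, i, r;

        for (i = 0; i < NR_CPUS; i++) {
                weight[i] = idle[i] ? 2 : 1;    /* prefer idle CPUs */
                total += weight[i];
        }
        r = rand() % total;
        for (i = 0; i < NR_CPUS; i++) {
                if (r < weight[i])
                        return i;
                r -= weight[i];
        }
        return 0;
}

static void balance(struct irq_target *t, long now_ms, const int idle[NR_CPUS])
{
        if (now_ms - t->last_move_ms < WARMTH_MS)
                return;                         /* keep cache warmth */
        t->cpu = pick_cpu(idle);
        t->last_move_ms = now_ms;
}

int main(void)
{
        struct irq_target eth0 = { 0, 0 };
        int idle[NR_CPUS] = { 0, 1 };           /* CPU0 busy, CPU1 idle */
        long now;

        for (now = 0; now <= 50; now += 10) {
                balance(&eth0, now, idle);
                printf("t=%2ldms: eth0 IRQ -> CPU%d\n", now, eth0.cpu);
        }
        return 0;
}

The real patch of course works inside the kernel's IRQ delivery path rather
than from user space; this sketch is only meant to show the selection policy.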
Ingo
Bill Davidsen wrote:
>
> On Wed, 17 Apr 2002, James Bourne wrote:
>
> > After Ingo forwarded me his original patch (I found his patch via a web
> > based medium, which had converted all of the left shifts to compares, and
> > now I'm very glad it didn't boot...) and the system is booted and is
> > balancing most of the interrupts at least. Here's the current output
> > of /proc/interrupts
>
Is there a version of this patch for 2.4.18? I also found the one on the web site wouldn't
boot but would very much like to have a copy that would work for 2.4.18. Where might I find
this?
Regards
Mark
On Fri, 19 Apr 2002, Mark Hounschell wrote:
> Bill Davidsen wrote:
> >
> > On Wed, 17 Apr 2002, James Bourne wrote:
> >
> > > After Ingo forwarded me his original patch (I found his patch via a web
> > > based medium, which had converted all of the left shifts to compares, and
> > > now I'm very glad it didn't boot...) and the system is booted and is
> > > balancing most of the interrupts at least. Here's the current output
> > > of /proc/interrupts
> >
>
> Is there a version of this patch for 2.4.18? I also found the one on the web site wouldn't
> boot but would very much like to have a copy that would work for 2.4.18. Where might I find
> this?
Ingo's irqbalance-2.4.17-B1.patch applies cleanly to 2.4.18. This and the
timer-irq-balance-2.4.18.patch are attached. Also attached is a 2-line
patch to identify the CPUs on boot, instead of getting unknown CPU
errors (cosmetic only).
These have been running on a system with hyperthreading turned on for
the past 2 days or so, and it seems stable.
Regards
James
>
> Regards
> Mark
>
--
James Bourne, Supervisor Data Centre Operations
Mount Royal College, Calgary, AB, CA
http://www.mtroyal.ab.ca