2002-09-24 23:22:32

by Adam Goldstein

Subject: Very High Load, kernel 2.4.18, apache/mysql

I have been trying to find an answer to this for the past couple of
weeks, but I have finally broken down and must post it to this list. ;)

I am running a high-user-load site (>20 million hits/month stamp
auction site) which runs entirely on apache/php with mysql. It was
running smoothly (for the most part) as a virtual server on a
relatively nice box (see Moya below), but started needing more and more
disk space (from uploads, logs, etc.) and kept running out of space on
the root partition (including /var... which has mysql & weblogs).

I decided to build a new box for it, which we were shipping to a
highspeed colo facility.

While this unit was slightly less powerful, it was a clean install with
a larger root partition (see Anubis below). This unit started acting
pathetic, despite carrying fewer other loads (the old box also hosts
lots of samba sharing and other low-traffic websites). The system load
stayed high constantly: 5-8 during non-peak hours, an average of 10-20
most of the time, and spikes >100... needless to say, any load over
5-10 made the unit a pile of dung.

My partner is running a similar site, under debian, on similar hardware
(almost identical, actually) and is having -very- similar problems.

I stripped the old server, packed it into a new 4U case (-packed!-) and
moved just the one site (29G including pictures and sql data) to it,
and the results are no better. This unit has even more ram and more
hard drive space. (See Nosferatu below.)

We are at the end of our ropes, and are clearing our chalkboards to
start testing pieces of our systems... the problem is, testing these
systems is difficult because they need live loads on them. We need to
narrow down the search, and need your help... please...

We also see large numbers of apache children segfaulting under load...
as many as 2-10/minute at times. I have tried turning off atimes,
reducing tcp timeouts, etc. The big users of CPU are typically apache
and mysql. At high load, about 110+ instances each of apache and
mysqld show up in top. CPU use bounces wildly, with most of it in user
space.

Also, the number of open files/handles on the machines is staggering.

[root@nosferatu whitewlf]# lsof | wc -l
42068
[root@nosferatu whitewlf]# cat /proc/sys/fs/inode-nr
84976 36563
[root@nosferatu whitewlf]# cat /proc/sys/fs/file-max
8192
[root@nosferatu whitewlf]# cat /proc/sys/fs/file-nr
4198 1052 8192
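(Aside: the lsof count is inflated relative to the kernel's view,
because lsof prints one line per descriptor per process and forked
apache children share the same underlying file handles; file-nr is the
authoritative count. A small sketch for reading the 2.4-era file-nr
format -- the function name and 80% threshold are mine, not from the
thread:)

```shell
# Sketch: report file-handle pressure from a 2.4-style /proc/sys/fs/file-nr,
# whose three fields are: allocated handles, free handles, system-wide max.
# Function name and warning threshold are illustrative.
check_file_handles() {
    read -r alloc free max < "$1"
    used=$((alloc - free))
    echo "allocated=$alloc used=$used max=$max"
    # warn when allocated handles approach the system-wide limit
    if [ $((alloc * 100 / max)) -ge 80 ]; then
        echo "WARNING: close to file-max (raise via /proc/sys/fs/file-max)"
    fi
}
```

Against the numbers above (4198 1052 8192) this reports roughly half
the limit in use; the limit itself can be raised with something like
`echo 65536 > /proc/sys/fs/file-max`.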

Machines:
Moya (Which ran OK):
Dual Thunder K7 1900+MPs, 1.5Gddr/ecc/reg ram
dual u160 scsi, 3x18G soft raid 5 /home(ext3), 9G / (ext3) & /boot
(ext2) & 512Mb swap
Mandrake 8.1, kernel 2.4.8, 40G ide backup
Apache 1.3.23, mod_ssl/2.8.7 OpenSSL/0.9.6c PHP/4.1.2
Mysql 3.23.47 (large & huge my.conf files)

Anubis:
Dual PIII 440lx, 1.0Gpc100/ecc ram
dual u2 scsi, 3x18G soft raid 5 /home(ext3), 18G / (ext3) & /boot
(ext2)
Mandrake 8.2, kernel 2.4.18, 80G ide backup
Apache 1.3.23, mod_ssl/2.8.7 OpenSSL/0.9.6c PHP/4.1.2
Mysql 3.23.47 and 3.23.52 (large & huge my.conf files)

Nosferatu:
Dual Thunder K7 1900+MPs, 2.0Gddr/ecc/reg ram
dual u160 scsi, 7x18G soft raid 5 /(ext3) & (250 MB/boot sda & 250MB
swap on others)
Mandrake 8.2, kernel 2.4.18, 80G ide backup
Apache 1.3.23, mod_ssl/2.8.7 OpenSSL/0.9.6c PHP/4.1.2
Mysql 3.23.47 (large & huge my.conf files)

Current snapshot (low usage at this hour):
(Threads: 55 Questions: 13038219 Slow queries: 12879 Opens: 620
Flush tables: 1 Open tables: 512 Queries per second avg: 90.952)

7:01pm up 1 day, 15:54, 2 users, load average: 20.51, 17.21, 15.78

7:04pm up 1 day, 15:56, 2 users, load average: 18.95, 17.81, 16.21
236 processes: 223 sleeping, 13 running, 0 zombie, 0 stopped
CPU0 states: 89.1% user, 10.5% system, 0.0% nice, 0.0% idle
CPU1 states: 84.2% user, 15.4% system, 0.0% nice, 0.0% idle
Mem: 2061772K av, 1949428K used, 112344K free, 0K shrd, 290984K buff
Swap: 1493808K av, 48420K used, 1445388K free, 882200K cached

Server uptime: 2 hours 10 minutes 6 seconds
43 requests currently being processed, 13 idle servers

KK_WW_WW_K_KWLWWWKW_KKKK.__K_WWW_WWW_K_WWWWK_WKWW_WKK.W...W....W...W..

(My partner's boxes have been similar to the above, except the first
was a TigerMPX with 1G ram and ide drives (the first ran OK... but ran
out of space as well), and the second is nearly identical to Anubis
except Debian Woody, no PHP used, mostly CGIs, 35 million+ hits. Same
system loads, sometimes spiraling out of control, needing apache
shutdown.)
--
Adam Goldstein
White Wolf Networks


2002-09-25 03:43:46

by Bernd Eckenfels

Subject: Re: Very High Load, kernel 2.4.18, apache/mysql

In article <[email protected]> you wrote:
> I have been trying to find an answer to this for the past couple weeks,
> but I have finally broken down and must post this to this list. ;)

OK, but I wonder a bit about your question. You have a big workload
and therefore a high load on the system; there is not much you can do
about that from a kernel perspective. I guess the PHP is one of the
problems here; perhaps you should start to benchmark your most-used
web pages for database access patterns.

> We also see high amounts of apache children segfaulting under load...
> as high as 2-10/minute at times.

This is strange. Is this a resource limit congestion or an apache bug?
Which apache version are you using?

> reducing tcp timeouts, etc. The big users of CPU are typically apache
> and mysql. About 110+ instances of apache and mysqld each run in top at
> high load. CPU use bounces wildly, with most in user space.

What are your Apache parameters for min/max spare servers, MaxClients, etc.?
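(For reference, the knobs Bernd is asking about live in httpd.conf; the
values below are purely illustrative, not taken from Adam's setup:)

```
# Illustrative Apache 1.3 tuning values -- not the actual configuration
StartServers         10
MinSpareServers       8
MaxSpareServers      20
MaxClients          150
MaxRequestsPerChild 500
```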

> Machines:
> Moya (Which ran OK):

Does that mean it ran OK but had high load, or that it did not have
such a high load?

> Dual Thunder K7 1900+MPs, 1.5Gddr/ecc/reg ram
> dual u160 scsi, 3x18G soft raid 5 /home(ext3), 9G / (ext3) & /boot
> (ext2) & 512Mb swap

This is a bit little RAM, and you are probably better off using RAID-1 here.

> Anubis:
> Dual PIII 440lx, 1.0Gpc100/ecc ram

This is much slower; no wonder the system is sluggish if you have
CPU-bound tasks. Can you post some vmstat results, so we can see
whether you are I/O, CPU or RAM bound?

> 7:04pm up 1 day, 15:56, 2 users, load average: 18.95, 17.81, 16.21
> 236 processes: 223 sleeping, 13 running, 0 zombie, 0 stopped
> CPU0 states: 89.1% user, 10.5% system, 0.0% nice, 0.0% idle
> CPU1 states: 84.2% user, 15.4% system, 0.0% nice, 0.0% idle
> Mem: 2061772K av, 1949428K used, 112344K free, 0K shrd,
> 290984K buff
> Swap: 1493808K av, 48420K used, 1445388K free
> 882200K cached

To me this looks good; it is a very balanced workload. No idle time.
Your system seems to be mostly CPU bound here.

Greetings
Bernd

2002-09-25 03:45:06

by Bernd Eckenfels

Subject: Re: Very High Load, kernel 2.4.18, apache/mysql

In article <[email protected]> you wrote:
> If it's IO bound, it's quite possible the problem is the disk
> elevator and Andrew Morton's read-latency2 patch might help
> somewhat (if the system is heavy on both reads and writes).

Hmm... it does not look that way from the posted stats, or do you think
so? 10% system time and 0% idle.

> It would make sense to study the output of top and vmstat for
> a few hours to identify exactly what the problem is

yes, I think so.

Greetings
Bernd

2002-09-25 02:33:52

by Adam Goldstein

Subject: Re: Very High Load, kernel 2.4.18, apache/mysql

Moya used ext3 as well (I listed the filesystems & partitions for each
machine near the bottom of the post).

It hasn't run out of RAM, even though I have set mysql to use -a lot-.
2 gigs of RAM should be more than enough for this... I would hope.
1.5G was OK before.

These are under current load; I will run a full set of tests tomorrow
during peak load.

Can anyone recommend any long-term cumulative monitors for vmstat
and/or other processes, something that could run behind the scenes and
gather data? Personally, I can't make heads or tails of the vmstat
output, and I have yet to get a -real- answer for what "load" is...
besides the knee-jerk answer of "it's the avg load over X minutes". :)
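(On "load": on Linux it is the average number of processes that are
either runnable or in uninterruptible disk sleep, sampled over 1, 5 and
15 minutes. For long-term collection, a minimal sketch -- the function
name and log path are mine, not an existing tool -- that timestamps
every line a periodic monitor emits:)

```shell
# Sketch: prefix each line of a long-running monitor (e.g. "vmstat 60")
# with a timestamp and append it to a log file for later review.
log_with_timestamps() {   # usage: log_with_timestamps <logfile> <cmd> [args...]
    logfile=$1; shift
    "$@" | while IFS= read -r line; do
        printf '%s %s\n' "$(date '+%Y-%m-%d %H:%M:%S')" "$line"
    done >> "$logfile"
}
# e.g.: log_with_timestamps /var/log/vmstat.log vmstat 60 &
```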

[root@nosferatu whitewlf]# vmstat -n 1
 procs                      memory    swap          io     system         cpu
 r  b  w   swpd   free   buff  cache  si  so  bi    bo   in    cs  us  sy  id
 5  5  2  94076 1181592 61740 219676   0   0  10    16  125   111  69  12  19
 7  2  4  94076 1186024 61752 219664   0   0   0   948  454  1421  95   5   0
10  2  2  94076 1172288 61764 219672   0   0   0  1024  468  1425  88  12   0
 7  2  3  94076 1175220 61772 219660   0   0   0  1236  509  1513  93   7   0
 5  2  2  94076 1187824 61784 219664   0   0   0   864  419  1524  87  13   0
 8  1  2  94076 1170140 61792 219656   0   0   0   656  362   945  88  12   0
 5  7  3  94076 1182448 61800 219712   0   0  36   696  580  1616  93   7   0
 5  4  3  94076 1186500 61808 219740   0   0  12  1252  595  1766  90  10   0
 8  1  3  94076 1177424 61812 219744   0   0   0  1124  497  1588  96   4   0
 8  3  3  94076 1167564 61824 219748   0   0   0  1136  485  1476  88  12   0
 5  4  2  94076 1187024 61836 219740   0   0   0  1204  473  1659  93   7   0
10  6  3  94076 1180816 61840 219832   0   0  52  1124  668  3079  73  27   0
 6  6  2  94076 1184404 61840 219932   0   0  88  1356 1110  1886  94   6   0
 8  4  2  94076 1176276 61852 219948   0   0   0  1324  683  1819  89  11   0
 6  4  3  94076 1183948 61860 219932   0   0   0   984  441  1296  92   8   0
11  1  2  94076 1177320 61872 219940   0   0   0   948  448  1351  88  12   0
12  2  2  94076 1150268 61880 219952   0   0   0   952  438  1206  88  12   0

here is a snap of top (idles off):
10:21pm up 1 day, 19:13, 2 users, load average: 12.53, 12.30, 11.85
235 processes: 229 sleeping, 6 running, 0 zombie, 0 stopped
CPU0 states: 87.5% user, 12.0% system, 0.0% nice, 0.0% idle
CPU1 states: 90.2% user, 9.4% system, 0.0% nice, 0.0% idle
Mem: 2061772K av, 867640K used, 1194132K free, 0K shrd, 57560K buff
Swap: 1493808K av, 94080K used, 1399728K free, 198052K cached

PID USER PRI NI SIZE RSS SHARE STAT %CPU %MEM TIME COMMAND
16800 apache 20 0 4732 4260 2988 R 37.7 0.2 0:35 httpd
21171 apache 16 0 4976 4548 3268 R 36.6 0.2 2:02 httpd
6949 apache 17 0 4604 4132 2936 R 36.5 0.2 0:53 httpd
29183 apache 17 0 4900 4468 3192 R 36.0 0.2 6:18 httpd
21179 root 19 0 1200 1200 812 R 9.3 0.0 0:07 top
21584 amavis 9 0 6840 6840 632 D 3.8 0.3 0:00 sweep
21585 amavis 9 0 6836 6836 632 D 3.8 0.3 0:00 sweep
21 root 10 0 0 0 0 DW 1.2 0.0 16:52 kjournald
25742 postfix 9 0 1864 1864 1288 D 0.7 0.0 3:46 qmgr
17272 apache 9 0 4412 3924 2928 D 0.6 0.1 0:00 httpd
4 root 19 19 0 0 0 RWN 0.0 0.0 0:01 ksoftirqd_CPU1
20854 postfix 9 0 1540 1540 1188 D 0.0 0.0 0:00 cleanup
21362 postfix 9 0 1376 1376 1080 D 0.0 0.0 0:00 smtp
21365 postfix 9 0 1356 1356 1060 D 0.0 0.0 0:00 smtp
21399 apache 9 0 4344 4244 4072 D 0.0 0.2 0:00 httpd
21401 apache 9 0 5212 5112 4176 D 0.0 0.2 0:00 httpd

Also, I ran a bonnie++ test this eve during the nightly lull. It
pushed the load from about 7 to 16, then it settled in at about 12
during the test.

Version 1.02a       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
nosferatu        4G 14665  88 53478  46 21632  21  5415  27 39694  18 216.2   2
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files:max:min        /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
64:20000:16/512       876  11  4980  20  3596  14  2497  29   891   4   578   3
nosferatu,4G,14665,88,53478,46,21632,21,5415,27,39694,18,216.2,2,64:20000:16/512,876,11,4980,20,3596,14,2497,29,891,4,578,3

On Tuesday, September 24, 2002, at 09:28 PM, Rik van Riel wrote:

> On Wed, 25 Sep 2002, Roger Larsson wrote:
>
>> Have you been able to determine if it is I/O bound or CPU bound?
>> Or maybe using to much CPU to do I/O?
>>
>> Does anyone know what virtual memory system does Mandrake uses?
>
> If it's IO bound, it's quite possible the problem is the disk
> elevator and Andrew Morton's read-latency2 patch might help
> somewhat (if the system is heavy on both reads and writes).
>
> If the system is short on RAM and/or swapping, that might be
> a VM thing or just a shortage of RAM...
>
> It would make sense to study the output of top and vmstat for
> a few hours to identify exactly what the problem is, instead
> of trying to fix all kinds of random things that aren't the
> core problem.
>
> regards,
>
> Rik
--
Adam Goldstein
White Wolf Networks

2002-09-25 01:23:32

by Rik van Riel

Subject: Re: Very High Load, kernel 2.4.18, apache/mysql

On Wed, 25 Sep 2002, Roger Larsson wrote:

> Have you been able to determine if it is I/O bound or CPU bound?
> Or maybe using to much CPU to do I/O?
>
> Does anyone know what virtual memory system does Mandrake uses?

If it's IO bound, it's quite possible the problem is the disk
elevator and Andrew Morton's read-latency2 patch might help
somewhat (if the system is heavy on both reads and writes).

If the system is short on RAM and/or swapping, that might be
a VM thing or just a shortage of RAM...

It would make sense to study the output of top and vmstat for
a few hours to identify exactly what the problem is, instead
of trying to fix all kinds of random things that aren't the
core problem.

regards,

Rik
--
Bravely reimplemented by the knights who say "NIH".

http://www.surriel.com/ http://distro.conectiva.com/

Spamtraps of the month: [email protected] [email protected]

2002-09-25 00:55:45

by Roger Larsson

Subject: Re: Very High Load, kernel 2.4.18, apache/mysql

Asking some of the things I guess others will ask later, but I won't
look into this any more tonight.

Have you been able to determine if it is I/O bound or CPU bound?
Or maybe using too much CPU to do I/O?

Does anyone know which virtual memory system Mandrake uses?
Linus's, Andrea's or Riel's? Have you tried Mandrake's support?

vmstat over some time would be nice, to get a hint on what it is doing.
ext3: do you use the same journaling mode as on Moya?
top: how much CPU time do the kernel processes use?
/RogerL

--
Roger Larsson
Skellefteå
Sweden

2002-09-25 05:19:01

by Simon Kirby

Subject: Re: Very High Load, kernel 2.4.18, apache/mysql

On Tue, Sep 24, 2002 at 10:38:56PM -0400, Adam Goldstein wrote:

> [root@nosferatu whitewlf]# vmstat -n 1
> procs memory swap io system cpu
> r b w swpd free buff cache si so bi bo in cs us sy id
> 5 5 2 94076 1181592 61740 219676 0 0 10 16 125 111 69 12 19
> 7 2 4 94076 1186024 61752 219664 0 0 0 948 454 1421 95 5 0
> 10 2 2 94076 1172288 61764 219672 0 0 0 1024 468 1425 88 12 0
> 7 2 3 94076 1175220 61772 219660 0 0 0 1236 509 1513 93 7 0
> 5 2 2 94076 1187824 61784 219664 0 0 0 864 419 1524 87 13 0
> 8 1 2 94076 1170140 61792 219656 0 0 0 656 362 945 88 12 0
> 5 7 3 94076 1182448 61800 219712 0 0 36 696 580 1616 93 7 0
> 5 4 3 94076 1186500 61808 219740 0 0 12 1252 595 1766 90 10 0
> 8 1 3 94076 1177424 61812 219744 0 0 0 1124 497 1588 96 4 0
> 8 3 3 94076 1167564 61824 219748 0 0 0 1136 485 1476 88 12 0
> 5 4 2 94076 1187024 61836 219740 0 0 0 1204 473 1659 93 7 0
> 10 6 3 94076 1180816 61840 219832 0 0 52 1124 668 3079 73 27 0
> 6 6 2 94076 1184404 61840 219932 0 0 88 1356 1110 1886 94 6 0
> 8 4 2 94076 1176276 61852 219948 0 0 0 1324 683 1819 89 11 0
> 6 4 3 94076 1183948 61860 219932 0 0 0 984 441 1296 92 8 0
> 11 1 2 94076 1177320 61872 219940 0 0 0 948 448 1351 88 12 0
> 12 2 2 94076 1150268 61880 219952 0 0 0 952 438 1206 88 12 0

(Yes, I reformatted your vmstat.)

It's mostly CPU bound (see first column), but there is some disk waiting
going on too (next two). Most of the disk activity shows writing ("bo"),
not reading ("bi"). There is some swap use, but no swap occurred during
your dump ("si", "so"), so it's probably fine.

Free memory is huge, which indicates either the box hasn't been up long,
some huge process just exited and cleared a lot of memory with it, or
your site really is small and doesn't need anywhere near that much
memory. Judging by the rate of disk reads ("bi"), it looks like it
probably has more than enough memory.

A lot of writeouts are happening, and they're happening all the time
(not in five-second bursts, which would indicate regular asynchronous
writeout). Are applications sync()ing, fsync()ing, fdatasync()ing, or
using O_SYNC? Are you using a journalling FS and doing a lot of
metadata (directory) changes? We saw huge problems on our mail servers
when we switched from ext2 to ext3: with ext2 they were almost always
idle, but load went from 0.2-0.4 to 20-30, because we're using
dot-locking, which seems to annoy ext3.

If you're using a database, try disabling fsync() mode. Data integrity
after crashes might be more interesting (insert fsync() flamewar here),
but it might help a lot. At least try it temporarily to see if this is
what is causing the load.
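(A quick way to feel the fsync() cost is to compare buffered and
flushed writes -- a sketch using GNU dd's conv=fsync; the timings
themselves will vary wildly by disk and are the point of the exercise:)

```shell
# Compare buffered writes against writes flushed with fsync() at the end.
# GNU dd's conv=fsync calls fsync() on the output file before exiting.
f=$(mktemp)
time dd if=/dev/zero of="$f" bs=4k count=1000 2>/dev/null              # buffered
time dd if=/dev/zero of="$f" bs=4k count=1000 conv=fsync 2>/dev/null   # flushed
rm -f "$f"
```

On a busy journalling filesystem the second run is typically far
slower, which is the same penalty a database pays on every committed
write.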

Always mount your filesystems with "noatime" and "nodiratime". I mount
hundreds of servers this way and nobody ever notices (except that disks
last a lot longer and there are far fewer writeouts on servers that do
a lot of reading, such as web servers). If you don't do this, _every_
file read will result in a scheduled writeback to disk to update the
atime (last-accessed time). Writing atime to disk is usually a dumb
idea, because almost nothing uses it. I think the only program in the
wild I've ever seen that uses the atime field is the finger daemon
(wow).
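(In practice that means mount options in /etc/fstab; the device names
and mount points below are illustrative, not from any of the machines
in this thread:)

```
# Illustrative /etc/fstab entries with atime updates disabled
/dev/md0   /      ext3  defaults,noatime,nodiratime  1 1
/dev/sda1  /boot  ext2  defaults,noatime             1 2
```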

> CPU0 states: 87.5% user, 12.0% system, 0.0% nice, 0.0% idle
> CPU1 states: 90.2% user, 9.4% system, 0.0% nice, 0.0% idle

Looks like mostly user CPU.

> PID USER PRI NI SIZE RSS SHARE STAT %CPU %MEM TIME COMMAND
> 16800 apache 20 0 4732 4260 2988 R 37.7 0.2 0:35 httpd
> 21171 apache 16 0 4976 4548 3268 R 36.6 0.2 2:02 httpd
> 6949 apache 17 0 4604 4132 2936 R 36.5 0.2 0:53 httpd
> 29183 apache 17 0 4900 4468 3192 R 36.0 0.2 6:18 httpd

First, check /tmp for .bugtraq.c, etc., and make sure this isn't the
Slapper worm. :)
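(A quick check, using only the /tmp path named above; the exact dropper
filenames varied between Slapper variants, so a clean result here is
necessary but not sufficient. The function name is mine:)

```shell
# Look for the Slapper dropper files mentioned above (/tmp/.bugtraq*).
slapper_check() {   # usage: slapper_check <dir>
    if ls "$1"/.bugtraq* >/dev/null 2>&1; then
        echo "suspicious .bugtraq files in $1 -- investigate"
    else
        echo "no .bugtraq files in $1"
    fi
}
slapper_check /tmp
```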

Next, figure out why these processes have taken _minutes_ of CPU time
and are still running! If these aren't the worm, you're likely using
mod_perl or mod_php or something else which can make the httpd process
take that much CPU. Check which scripts and what conditions are
creating those processes. Poke around in /proc/16800/fd, look at
/proc/16800/cwd, etc., if you can't determine what is happening from
the logs. If you're still stuck, try tracing them (see below). If
it's hard to catch them (though it appears they are slugs), switching
mod_perl/mod_php to standalone CGIs may help.

To summarize, it looks like the box is both CPU bound (above Apache
processes) and blocking on disk writes. The processes using the CPU are
not responsible for the writing out because they are in 'R' state
(running); if they were writing, they would mostly be in 'D' state.

If you want to see which processes are writing out, try:

ps auxw | grep ' D '

(Might give false positives -- just looking for 'D' state.)

If you want to see whether the journalling code is doing the writing,
try:

ps -eo pid,stat,args,wchan | grep ' D '

...and see which functions the 'D' state processes are blocking in
(requires your System.map file to be up-to-date). If you see something
about do_get_write_access (a function in fs/jbd/transaction.c), it's
likely the ext3 journalling causing all of the writing. This is what I
saw in our case with the mail servers.

This "ps" command is also useful for figuring out what other
non-running processes are doing. However, the wchan field often shows
just "down", which isn't very helpful.

If you are getting a lot of processes sleeping in "down" and want to
figure out where they are actually stuck, try heading over to the console
and hit right_control-scroll_lock. Modern kernels will print a stack
backtrace for each process, and you can manually translate the EIP
locations via /System.map or /boot/System.map (whichever matches your
kernel) into function names to find the functions the kernel is/was in.

To find the function in System.map, first make sure it is sorted.
Next, incrementally search for the first EIP, number by number. The EIP
provided in the process list dump will always be higher than the actual
function offset, because it will be somewhere in the middle of the
function (System.map lists the beginning of each function). If you don't
have incremental search, this might be tedious. Some versions of "klogd"
will do this translation for you; you might want to check your kern.log.
You may also be able to coax "ksymoops" into doing the translation for
you.

If you cannot find a match in System.map, the EIP may be in a module
(which requires loading modules with a symbol dump to trace). Try the
next EIP instead; you can often get a good idea of what is happening
just by tracing further back. Once you've done this a few times,
you'll get used to seeing module offsets being quite different from
built-in offsets.
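(The incremental search can be scripted; a sketch assuming the usual
sorted "address type symbol" System.map format and a bash shell for the
hex arithmetic -- the function name is mine:)

```shell
# Sketch: find the symbol containing a given EIP in a sorted System.map.
# Walks the map in ascending address order and keeps the last symbol
# whose start address is <= the EIP (i.e. the function the EIP falls in).
eip_to_symbol() {   # usage: eip_to_symbol <hex-eip> <System.map>
    target=$((16#$1))
    best=""
    while read -r addr type name; do
        [ $((16#$addr)) -le "$target" ] && best="$name" || break
    done < "$2"
    echo "$best"
}
```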

If you want to figure out what a running ('R') process is doing, first
try "strace -p <pid>". If it's not making many or any system calls (eg:
an endless loop or very user-CPU-intensive loop), try ltrace. If that
provides nothing useful, the only other option is to try attaching to
it with gdb and doing a backtrace:

gdb /proc/<pid>/exe
attach <pid>
bt

...but you may need to compile with debugging symbols for this to provide
useful output. Chances are you won't need to do this, and "strace"
will give you a pretty good idea about what is happening.

There should be enough information you can gather from these tools to
figure out what is happening. "vmstat 1" is usually the quickest way to
get a general idea of what is happening, and "ps auxwr" and "ps aux |
grep ' D '" are useful for starting to narrow it down.

Hope this helps. :)

Simon-

[ Stormix Technologies Inc. ][ NetNation Communications Inc. ]
[ [email protected] ][ [email protected] ]
[ Opinions expressed are not necessarily those of my employers. ]

2002-09-25 06:51:22

by Adam Goldstein

Subject: Re: Very High Load, kernel 2.4.18, apache/mysql

I have added nodiratime (missed that one) and switched to ext2 for
testing... ;)
It is still running a high load, seemingly only slightly better, but I
will know more later. It is currently at a load of 12-23, with 76
httpd processes running (75 mysql), 0% idle, 89% user per-CPU average,
and about 8-12 httpd processes active simultaneously.

Using postfix on the new server; not sure how to disable locking?
Same with mysql... can locking be disabled? How? Is it safe?

No, no slappers. ;) We did get that on the old server (almost the
moment it appeared, before it was widely known), which is one of the
reasons I scrapped it so fast.

All patches are applied to the new servers; even though openssl reports
the old 'c' version, it is patched (many distros have done this; I do
not know why they simply don't use the new 'e'+ versions).

The site uses php heavily; every page has php includes and mysql
lookups (multiple languages, banner rotation, article rotation, etc...).

The customer/developer is really quite good with php/html... just not
very adept at linux... yet. ;)

You can take a look at the site (OK netiquette?): http://delcampe.com
... please excuse the intense lag; however, we "are experiencing
technical difficulties" ... <har har>

I will assume the combination of diratime, journaling, software raid,
mail locking and logging is a bad one... however, I have found many
reports online of software raid performing as well as, or in some
cases better than, hardware raid setups (their tests, not mine... I
would have assumed the reverse), and of reiserfs performing far better
than ext3 under load (with notail and noatime enabled... though ext3
can outdo it under light load).

Thanks for all the info... I am going to run tests on it tomorrow
morning during 'rush hour'. (This server's users are mostly European,
so my peak times differ from our other sites'... which is good for
most of the day...)


On Wednesday, September 25, 2002, at 01:24 AM, Simon Kirby wrote:

> On Tue, Sep 24, 2002 at 10:38:56PM -0400, Adam Goldstein wrote:
>
>> [root@nosferatu whitewlf]# vmstat -n 1
>> procs memory swap io system
>> cpu
>> r b w swpd free buff cache si so bi bo in cs us sy
>> id
>> 5 5 2 94076 1181592 61740 219676 0 0 10 16 125 111 69 12
>> 19
>> 7 2 4 94076 1186024 61752 219664 0 0 0 948 454 1421 95 5
>> 0
>> 10 2 2 94076 1172288 61764 219672 0 0 0 1024 468 1425 88 12
>> 0
>> 7 2 3 94076 1175220 61772 219660 0 0 0 1236 509 1513 93 7
>> 0
>> 5 2 2 94076 1187824 61784 219664 0 0 0 864 419 1524 87 13
>> 0
>> 8 1 2 94076 1170140 61792 219656 0 0 0 656 362 945 88 12
>> 0
>> 5 7 3 94076 1182448 61800 219712 0 0 36 696 580 1616 93 7
>> 0
>> 5 4 3 94076 1186500 61808 219740 0 0 12 1252 595 1766 90 10
>> 0
>> 8 1 3 94076 1177424 61812 219744 0 0 0 1124 497 1588 96 4
>> 0
>> 8 3 3 94076 1167564 61824 219748 0 0 0 1136 485 1476 88 12
>> 0
>> 5 4 2 94076 1187024 61836 219740 0 0 0 1204 473 1659 93 7
>> 0
>> 10 6 3 94076 1180816 61840 219832 0 0 52 1124 668 3079 73 27
>> 0
>> 6 6 2 94076 1184404 61840 219932 0 0 88 1356 1110 1886 94 6
>> 0
>> 8 4 2 94076 1176276 61852 219948 0 0 0 1324 683 1819 89 11
>> 0
>> 6 4 3 94076 1183948 61860 219932 0 0 0 984 441 1296 92 8
>> 0
>> 11 1 2 94076 1177320 61872 219940 0 0 0 948 448 1351 88 12
>> 0
>> 12 2 2 94076 1150268 61880 219952 0 0 0 952 438 1206 88 12
>> 0
>
> (Yes, I reformatted your vmstat.)
>
> It's mostly CPU bound (see first column), but there is some disk
> waiting
> going on too (next two). Most of the disk activity shows writing
> ("bo"),
> not reading ("bi"). There is some swap use, but no swap occurred
> during
> your dump ("si", "so"), so it's probably fine.
>
> Free memory is huge, which indicates either the box hasn't been up
> long,
> some huge process just exited and cleared a lot of memory with it, or
> your site really is small and doesn't need anywhere near that much
> memory. Judging by the rate of disk reads ("bi"), it looks like it
> probably has more than enough memory.
>
> A lot of writeouts are happening, and they're happening all the time
> (not
> in five second bursts which would indicate regular asynchronous write
> out). Are applications sync()ing, fsync()ing, fdatasync()ing, or using
> O_SYNC? Are you using a journalling FS and are doing a lot of metadata
> (directory) changes? We saw huge problems on our mail servers when we
> switched to ext3 from ext2 when with ext2 they were almost always idle
> (load went from 0.2, 0.4 to 20, 30) because we're using dotlocking
> which
> seems to annoy ext3.
>
> If you're using a database, try disabling fsync() mode. Data integrity
> after crashes might be more interesting (insert fsync() flamewar here),
> but it mith help a lot. At least try it temporarily to see if this is
> what is causing the load.
>
> Always mount your filesystems with "noatime" and "nodiratime". I mount
> hundreds of servers this way and nobody ever notices (except that disks
> last a lot longer and there are a lot less writeouts on servers that
> do a
> lot of reading, such as web servers). If you don't do this, _every_
> file
> read will result in a scheduled writeback to disk to update the atime
> (last accessed time). Writing atime to disk is usually a dumb idea,
> because almost nothing uses it. I think the only program in the wild
> I've ever seen that uses the atime field is the finger daemon (wow).
>
>> CPU0 states: 87.5% user, 12.0% system, 0.0% nice, 0.0% idle
>> CPU1 states: 90.2% user, 9.4% system, 0.0% nice, 0.0% idle
>
> Looks like mostly user CPU.
>
>> PID USER PRI NI SIZE RSS SHARE STAT %CPU %MEM TIME COMMAND
>> 16800 apache 20 0 4732 4260 2988 R 37.7 0.2 0:35 httpd
>> 21171 apache 16 0 4976 4548 3268 R 36.6 0.2 2:02 httpd
>> 6949 apache 17 0 4604 4132 2936 R 36.5 0.2 0:53 httpd
>> 29183 apache 17 0 4900 4468 3192 R 36.0 0.2 6:18 httpd
>
> First, check /tmp for .bugtraq.c, etc., and make sure this isn't the
> Slapper worm. :)
>
> Next, figure out why these processes have taken _minutes_ of CPU time
> and
> are still running! If these aren't the worm, you're likely using
> mod_perl or mod_php or something which can make the httpd proess take
> that much CPU. Check which scripts and what conditions are creating
> those processes. Play around in /proc/16800/fd, look at
> /proc/16800/cwd,
> etc., if you can't determine what is happening by the logs. If you're
> still stuck, try tracing them (see below). If it's hard to catch them
> (though it appears they are slugs), switching mod_perl/mod_php to
> standalone CGIs may help.
>
> To summarize, it looks like the box is both CPU bound (above Apache
> processes) and blocking on disk writes. The processes using the CPU
> are
> not responsible for the writing out because they are in 'R' state
> (running); if they were writing, they would be in mostly 'D' state.
>
> If you want to see which processes are writing out, try:
>
> ps auxw | grep ' D '
>
> (Might give false positives -- just looking for 'D' state.)
>
> If you want to see whether the journalling code is doing the writing,
> try:
>
> ps -eo pid,stat,args,wchan | grep ' D '
>
> ...and see which functions the 'D' state processes are blocking in
> (requires your System.map file to be up-to-date). If you see something
> about do_get_write_access (a function in fs/jbd/transaction.c), it's
> likely the ext3 journalling causing all of the writing. This is what I
> saw in our case with the mail servers.
>
> This "ps" command is also useful for figuring out what other
> non-running
> processes are doing, too. However, the wchan field often shows just
> "down", which isn't very helpful.
>
> If you are getting a lot of processes sleeping in "down" and want to
> figure out where they are actually stuck, try heading over to the
> console
> and hit right_control-scroll_lock. Modern kernels will print a stack
> backtrace for each process, and you can manually translate the the EIP
> locations in /System.map or /boot/System.map (whatever matches your
> kernel) to the function names to find functions the kernel is/was in.
>
> To find the function in System.map, first make make sure it is sorted.
> Next, incrementally search for the first EIP, number by number. The
> EIP
> provided in the process list dump will always be higher than the actual
> function offset, because it will be somewhere in the middle of the
> function (System.map lists the beginning of each function). If you
> don't
> have incremental search, this might be tedious. Some versions of
> "klogd"
> will do this translation for you; you might want to check your
> kern.log.
> You may also be able to coax "ksymoops" into doing the translation for
> you.
>
> If you cannot find a match in System.map, the EIP may be in a module
> (requires loading modules with a symbol dump to trace). Try the next
> EIP
> first, you can often get a good idea of what is happening by just
> tracing
> further back. Once you've done this a few times, you'll get used to
> seeing the module offsets being quite different from built-in offsets.
>
> If you want to figure out what a running ('R') process is doing, first
> try "strace -p <pid>". If it's not making many or any system calls
> (eg:
> an endless loop or very user-CPU-intensive loop), try ltrace. If that
> provides nothing useful the only other option is to try attaching to it
> with gdb and do a backtrace:
>
> gdb /proc/<pid>/exe
> attach <pid>
> bt
>
> ...but you may need to compile with debugging symbols for this to
> provide
> useful output. Chances are you won't need to do this, and "strace"
> will give you a pretty good idea about what is happening.
>
> There should be enough information you can gather from these tools to
> figure out what is happening. "vmstat 1" is usually the quickest way
> to
> get a general idea of what is happening, and "ps auxwr" and "ps aux |
> grep ' D '" are useful for starting to narrow it down.
>
> Hope this helps. :)
>
> Simon-
>
> [ Stormix Technologies Inc. ][ NetNation Communications Inc. ]
> [ [email protected] ][ [email protected] ]
> [ Opinions expressed are not necessarily those of my employers. ]
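Simon's incremental search through System.map can also be scripted. A
minimal sketch, relying on the fact that System.map addresses are
fixed-width lowercase hex (so plain string comparison orders them
correctly); the map contents and EIP values below are made up for
illustration, on a real box you would point it at /boot/System.map:

```shell
# Print the symbol whose address range contains the given EIP: remember
# the last symbol at or below the EIP, stop at the first one past it.
lookup_eip() {
    awk -v eip="$1" '$1 > eip { exit } { hit = $3 } END { print hit }' "$2"
}

# Tiny fake map for illustration only.
printf '%s\n' \
    'c0100000 T do_fork' \
    'c0120000 T sys_read' \
    'c0130000 T sys_write' > map.sample

lookup_eip c0123456 map.sample   # this EIP falls inside sys_read
```

The offset of the EIP from the printed symbol's address tells you how far
into the function the kernel was.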
>
--
Adam Goldstein
White Wolf Networks

2002-09-25 07:15:21

by Simon Kirby

[permalink] [raw]
Subject: Re: Very High Load, kernel 2.4.18, apache/mysql

On Wed, Sep 25, 2002 at 02:56:18AM -0400, Adam Goldstein wrote:

> Have added nodiratime, missed that one, and switched to ext2 for
> testing... ;)
> It is still running high load, but seems only slightly better, but I
> will know more later.

Yes, nodiratime will only make a tiny difference.

> Using postfix on new server, not sure how to disable locking?

It's not locking you'd want to disable. If anything, it's the
synchronous writes to disk of data which may or may not even need to go
to disk (e.g. an email that gets delivered almost instantly and is
removed from disk just after it was written). The idea with a journal,
however, is that it can keep track of such emails sequentially on disk
rather than seeking all over the place, and write out later only the
ones that will stick around. Your output rate is too low to be bounded
by a sequential write limit alone, especially on software RAID, so it's
most likely doing a lot of seeking while writing.

> Same with mysql.. can locking be disabled? how? safe?

Again, not locking, but fsync(). It's safe providing your machine never
crashes. :) Of course, there's still a chance it can be corrupted
_with_ fsync() anyway, but the difference is that clients will get a
result before the data is guaranteed to be on disk.

First narrow down what is causing most of the writing activity.
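To get a feel for the cost difference, here is a rough sketch comparing
buffered appends against appends forced to disk on every write (GNU dd's
oflag=sync is assumed; the file names are arbitrary and absolute timings
vary wildly with disk and filesystem):

```shell
# 200 small buffered appends: the kernel batches them into few writeouts.
time sh -c 'for i in $(seq 200); do echo "msg $i" >> buffered.log; done'

# Same 200 appends, but each write is synced to disk before returning;
# roughly what per-message fsync() in an MTA or database costs.
time sh -c 'for i in $(seq 200); do
    echo "msg $i" | dd of=synced.log oflag=append,sync conv=notrunc 2>/dev/null
done'
```

On a seek-bound disk the second loop is typically orders of magnitude
slower, which is the effect described above.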

> The site uses php heavily, every page has php includes and mysql
> lookups (multiple languages, banner rotation, article rotation, etc...)

I see. The cause of your CPU load appears to be mostly the PHP running
under mod_php (unless something else is running). Those processes you
showed in top were running for so long that they were probably never
going to output anything (or at least the client wouldn't be there
anymore), so it looks like a code bug. You should debug this.

> You can take a look at the site (ok netiquette?) http://delcampe.com

It definitely seems slow. :)

> I will assume the combination of diratime, journaling, software raid,
> mail locking and logging are a
> bad combination.... however, I have been finding many instances online

Software RAID won't slow it down. diratime won't make any noticeable
difference. Logging is usually sequential. Journalling _with_ mail
locking might be a concern, but more than likely you're just seeing the
result of fsync(). What sort of mail load do you have? What about the
MySQL write load?

Simon-

[ Simon Kirby ][ Network Operations ]
[ [email protected] ][ NetNation Communications ]
[ Opinions expressed are not necessarily those of my employer. ]

2002-09-25 07:46:46

by Paweł Krawczyk

[permalink] [raw]
Subject: Re: Very High Load, kernel 2.4.18, apache/mysql

On Wed, Sep 25, 2002 at 12:20:26AM -0700, Simon Kirby wrote:

> Again, not locking, but fsync(). It's safe providing your machine never
> crashes. :) Of course, there's still a chance it can be corrupted
> _with_ fsync() anyway, but the difference is the clients will get a
> result beore it guarantees the data will be on disk.

Many Linux distributions configure syslog to use synchronous writes
for each logged line, which has caused very high load on busy systems
I've seen.

Go through your /etc/syslog.conf and change every "/var/log/messages"
to "-/var/log/messages", the minus enables asynchronous writes.
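A hedged sketch of that change with GNU sed (run it against a copy first;
the file name below is a sample, the real file is /etc/syslog.conf):

```shell
# Sample fragment; note the last line is already asynchronous.
printf '*.info;mail.none\t/var/log/messages\nkern.err\t-/var/log/kern.log\n' \
    > syslog.conf.sample

# Prefix each /var/log target with '-'. Lines already marked don't match
# (the character before /var/log/ isn't whitespace), so they aren't
# double-marked. A .bak backup is kept.
sed -i.bak -E 's|([[:space:]])(/var/log/)|\1-\2|' syslog.conf.sample

cat syslog.conf.sample
```

Remember to HUP syslogd afterwards so it rereads the configuration.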

Also try disabling Apache logging entirely for a while (set ErrorLog,
AccessLog or CustomLog to /dev/null) and see what happens.
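For example, a temporary httpd.conf fragment along those lines (Apache
1.3 directive names; don't leave it in place once you've measured):

```
# Silence logging temporarily, to rule out log I/O as the bottleneck
ErrorLog /dev/null
CustomLog /dev/null common
```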

--
Paweł Krawczyk, Kraków, Poland http://echelon.pl/kravietz/
horses: http://kabardians.com/
crypto: http://ipsec.pl/

2002-09-25 13:08:15

by Rik van Riel

[permalink] [raw]
Subject: Re: Very High Load, kernel 2.4.18, apache/mysql

On Tue, 24 Sep 2002, Adam Goldstein wrote:

> These are under current load, i will run a full snap of tests tomorrow
> during peak load.

> 235 processes: 229 sleeping, 6 running, 0 zombie, 0 stopped
> CPU0 states: 87.5% user, 12.0% system, 0.0% nice, 0.0% idle
> CPU1 states: 90.2% user, 9.4% system, 0.0% nice, 0.0% idle

OK, this looks like you're just running out of CPU power.

Rik
--
Bravely reimplemented by the knights who say "NIH".

http://www.surriel.com/ http://distro.conectiva.com/

Spamtraps of the month: [email protected] [email protected]

2002-09-25 20:11:42

by Adam Goldstein

[permalink] [raw]
Subject: Re: Very High Load, kernel 2.4.18, apache/mysql

During my investigation of php accelerator (which we put off before
thinking it would be better to stabilize the server first) I came
across a small blurb about php 4.1.2 (which we use) and mysql.

http://www.php-accelerator.co.uk/faq.php#segv2

Apparently this is how the site is written in some places, and it
causes instability in the php portion of the apache process. We are
fixing this now. Also, with the nodiratime, noatime, ext2 combination,
the load has decreased a little, but not very much. It has still
reached >25 load when apache processes reached 120 (112 active
according to server-status) and page loads came to a near dead stop...
segfaults still exist, even with fixed mysql connection calls. :(
1-4/min under the present 25+ load.

As for the syslog, unfortunately almost every entry was already marked
async. I changed an auth log entry, but messages was already async. I
left kernel.errors sync, as it never really logs.

On Wednesday, September 25, 2002, at 04:55 AM, Randal, Phil wrote:

> Have you tried using PHP Accelerator?
>
> It's the only free PHP Cache which has survived my testing,
> and should certainly reduce your CPU load.
>
> Phil
>
> ---------------------------------------------
> Phil Randal
> Network Engineer
> Herefordshire Council
> Hereford, UK
>
>> -----Original Message-----
>> From: Adam Goldstein [mailto:[email protected]]
>> Sent: 25 September 2002 07:56
>> To: Simon Kirby
>> Cc: [email protected]; Adam Bernau; Adam Taylor
>> Subject: Re: Very High Load, kernel 2.4.18, apache/mysql
>>
>>
>> Have added nodiratime, missed that one, and switched to ext2 for
>> testing... ;)
>> It is still running high load, but seems only slightly better, but I
>> will know more later.
>> It is currently at 12-23 load, with 76 httpd processes running
>> (75 mysql), 0% idle, 89% user per cpu avg., about 8-12 httpd
>> processes active simultaneously.
>>
>> Using postfix on new server, not sure how to disable locking?
>> Same with mysql.. can locking be disabled? how? safe?
>>
>> No, no slappers ;) We did get that on the old server (almost the
>> moment it was around, before it was widely known), one of the reasons
>> I scrapped it so fast.
>>
>> All patches are applied to the new servers; even though openssl
>> reports the old 'c' version, it is patched (many distros have done
>> this, I do not know why they simply don't use the new 'e'+ versions.)
>>
>> The site uses php heavily, every page has php includes and mysql
>> lookups (multiple languages, banner rotation, article rotation,
>> etc...)
>>
>> The customer/developer is really quite good with php/html... just not
>> very adept at linux..yet ;)
>>
>> You can take a look at the site (ok netiquette?) http://delcampe.com
>> ... please excuse the intense lag, however, as we "are experiencing
>> technical difficulties" ... <har har>
>>
>> I will assume the combination of diratime, journaling, software raid,
>> mail locking and logging are a bad combination.... however, I have
>> been finding many instances online about software raid performing as
>> well, or in some cases better, than hardware raid setups (their
>> tests, not mine.. I would have assumed the reverse), but that
>> reiser-fs performs far better than ext3 under load (with notail,
>> noatime enabled... tho ext3 can outdo it under light load.)
>>
>> Thanks for all the info... I am going to run tests on it tomorrow
>> morning during 'rush hour'. (This server's users are mostly European,
>> so my peak times differ from our other sites... which is good for
>> most of the day...)
>>
>>
>> On Wednesday, September 25, 2002, at 01:24 AM, Simon Kirby wrote:
>>
>>> On Tue, Sep 24, 2002 at 10:38:56PM -0400, Adam Goldstein wrote:
>>>
>>>> [root@nosferatu whitewlf]# vmstat -n 1
>>>>  r  b  w   swpd    free   buff  cache  si  so  bi   bo   in   cs  us sy id
>>>>  5  5  2  94076 1181592  61740 219676   0   0  10   16  125  111  69 12 19
>>>>  7  2  4  94076 1186024  61752 219664   0   0   0  948  454 1421  95  5  0
>>>> 10  2  2  94076 1172288  61764 219672   0   0   0 1024  468 1425  88 12  0
>>>>  7  2  3  94076 1175220  61772 219660   0   0   0 1236  509 1513  93  7  0
>>>>  5  2  2  94076 1187824  61784 219664   0   0   0  864  419 1524  87 13  0
>>>>  8  1  2  94076 1170140  61792 219656   0   0   0  656  362  945  88 12  0
>>>>  5  7  3  94076 1182448  61800 219712   0   0  36  696  580 1616  93  7  0
>>>>  5  4  3  94076 1186500  61808 219740   0   0  12 1252  595 1766  90 10  0
>>>>  8  1  3  94076 1177424  61812 219744   0   0   0 1124  497 1588  96  4  0
>>>>  8  3  3  94076 1167564  61824 219748   0   0   0 1136  485 1476  88 12  0
>>>>  5  4  2  94076 1187024  61836 219740   0   0   0 1204  473 1659  93  7  0
>>>> 10  6  3  94076 1180816  61840 219832   0   0  52 1124  668 3079  73 27  0
>>>>  6  6  2  94076 1184404  61840 219932   0   0  88 1356 1110 1886  94  6  0
>>>>  8  4  2  94076 1176276  61852 219948   0   0   0 1324  683 1819  89 11  0
>>>>  6  4  3  94076 1183948  61860 219932   0   0   0  984  441 1296  92  8  0
>>>> 11  1  2  94076 1177320  61872 219940   0   0   0  948  448 1351  88 12  0
>>>> 12  2  2  94076 1150268  61880 219952   0   0   0  952  438 1206  88 12  0
>>>
>>> (Yes, I reformatted your vmstat.)
>>>
>>> It's mostly CPU bound (see first column), but there is some disk
>>> waiting going on too (next two). Most of the disk activity shows
>>> writing ("bo"), not reading ("bi"). There is some swap use, but no
>>> swap occurred during your dump ("si", "so"), so it's probably fine.
>>>
>>> Free memory is huge, which indicates either the box hasn't been up
>>> long, some huge process just exited and cleared a lot of memory with
>>> it, or your site really is small and doesn't need anywhere near that
>>> much memory. Judging by the rate of disk reads ("bi"), it looks like
>>> it probably has more than enough memory.
>>>
>>> A lot of writeouts are happening, and they're happening all the time
>>> (not in five second bursts, which would indicate regular asynchronous
>>> write out). Are applications sync()ing, fsync()ing, fdatasync()ing,
>>> or using O_SYNC? Are you using a journalling FS and doing a lot of
>>> metadata (directory) changes? We saw huge problems on our mail
>>> servers when we switched from ext2 to ext3; with ext2 they were
>>> almost always idle (load went from 0.2, 0.4 to 20, 30) because we're
>>> using dotlocking, which seems to annoy ext3.
>>>
>>> If you're using a database, try disabling fsync() mode. Data
>>> integrity after crashes might be more interesting (insert fsync()
>>> flamewar here), but it might help a lot. At least try it temporarily
>>> to see if this is what is causing the load.
>>>
>>> Always mount your filesystems with "noatime" and "nodiratime". I
>>> mount hundreds of servers this way and nobody ever notices (except
>>> that disks last a lot longer and there are a lot fewer writeouts on
>>> servers that do a lot of reading, such as web servers). If you don't
>>> do this, _every_ file read will result in a scheduled writeback to
>>> disk to update the atime (last accessed time). Writing atime to disk
>>> is usually a dumb idea, because almost nothing uses it. I think the
>>> only program in the wild I've ever seen that uses the atime field is
>>> the finger daemon (wow).
>>>
>>>> CPU0 states: 87.5% user, 12.0% system, 0.0% nice, 0.0% idle
>>>> CPU1 states: 90.2% user, 9.4% system, 0.0% nice, 0.0% idle
>>>
>>> Looks like mostly user CPU.
>>>
>>>> PID USER PRI NI SIZE RSS SHARE STAT %CPU %MEM TIME COMMAND
>>>> 16800 apache 20 0 4732 4260 2988 R 37.7 0.2 0:35 httpd
>>>> 21171 apache 16 0 4976 4548 3268 R 36.6 0.2 2:02 httpd
>>>> 6949 apache 17 0 4604 4132 2936 R 36.5 0.2 0:53 httpd
>>>> 29183 apache 17 0 4900 4468 3192 R 36.0 0.2 6:18 httpd
>>>
>>> First, check /tmp for .bugtraq.c, etc., and make sure this isn't the
>>> Slapper worm. :)
>>>
>>> Next, figure out why these processes have taken _minutes_ of CPU
>>> time and are still running! If these aren't the worm, you're likely
>>> using mod_perl or mod_php or something else which can make the httpd
>>> process take that much CPU. Check which scripts and what conditions
>>> are creating those processes. Play around in /proc/16800/fd, look
>>> at /proc/16800/cwd, etc., if you can't determine what is happening
>>> from the logs. If you're still stuck, try tracing them (see below).
>>> If it's hard to catch them (though it appears they are slugs),
>>> switching mod_perl/mod_php to standalone CGIs may help.
>>>
>>> To summarize, it looks like the box is both CPU bound (the Apache
>>> processes above) and blocking on disk writes. The processes using
>>> the CPU are not responsible for the writing out because they are in
>>> 'R' state (running); if they were writing, they would mostly be in
>>> 'D' state.
>>>
>>> If you want to see which processes are writing out, try:
>>>
>>> ps auxw | grep ' D '
>>>
>>> (Might give false positives -- just looking for 'D' state.)
>>>
>>> If you want to see whether the journalling code is doing the
>>> writing, try:
>>>
>>> ps -eo pid,stat,args,wchan | grep ' D '
>>>
>>> ...and see which functions the 'D' state processes are blocking in
>>> (requires your System.map file to be up-to-date). If you see
>>> something about do_get_write_access (a function in
>>> fs/jbd/transaction.c), it's likely the ext3 journalling causing all
>>> of the writing. This is what I saw in our case with the mail
>>> servers.
>>>
>>> This "ps" command is also useful for figuring out what other
>>> non-running processes are doing. However, the wchan field often
>>> shows just "down", which isn't very helpful.
>>>
>>> If you are getting a lot of processes sleeping in "down" and want to
>>> figure out where they are actually stuck, try heading over to the
>>> console and hitting right_control-scroll_lock. Modern kernels will
>>> print a stack backtrace for each process, and you can manually
>>> translate the EIP locations in /System.map or /boot/System.map
>>> (whatever matches your kernel) to the function names to find
>>> functions the kernel is/was in.
>>>
>>> [...]
>> --
>> Adam Goldstein
>> White Wolf Networks
>>
>> -
>> To unsubscribe from this list: send the line "unsubscribe
>> linux-kernel" in
>> the body of a message to [email protected]
>> More majordomo info at http://vger.kernel.org/majordomo-info.html
>> Please read the FAQ at http://www.tux.org/lkml/
>>
>>
--
Adam Goldstein
White Wolf Networks

2002-09-25 21:22:47

by Roger Larsson

[permalink] [raw]
Subject: Re: Very High Load, kernel 2.4.18, apache/mysql

The big question is - why that much CPU usage?

Possible answers:
* PHP, mySQL, Apache - they need that amount of CPU to perform the
requested functions.
(you have got suggestions from others)

* The implementation of either has bugs that cause the CPU usage. Garbage
collection? Ineffective algorithms?
- Not much to do other than collecting execution profiles, which is quite
advanced - recompiling of the tools will probably be needed... and
probably help from the tools' developers...

* The implementation of the user code has bugs that cause the CPU usage.
One example:
SQL SELECT on unindexed data - this can usually be noticed as buffer load
in vmstat, but since all data fits in memory here, it would cause scans in
memory, with lots of RAM cache misses... And it would work well as long as
the scanned data was smaller than the CPU cache?
- Suggestion: Review your index keys and select statements to make sure
that they match!

/RogerL

2002-09-25 22:49:08

by Jose Luis Domingo Lopez

[permalink] [raw]
Subject: Re: Very High Load, kernel 2.4.18, apache/mysql

On Tuesday, 24 September 2002, at 22:38:56 -0400,
Adam Goldstein wrote:

> Can anyone recommend any long term cumulative monitors for vmstat,
> and/or other processes that could run behind the scenes and gather
> cooperative data? Personally, I can't make heads or tails of the vmstat
> output, and, I still have as of yet to get a -real- answer for what
> "load" is.. besides the knee-jerk answer of "its the avg load over X
> minutes". :)
>
apt-cache show sysstat
...
Description: sar, iostat and mpstat - system performance tools for Linux

The above are very well known performance monitoring tools used in the
UNIX world, that can gather periodic measures of many of your system's
usage parameters. Check the man pages for details :-)

Hope this helps.

--
Jose Luis Domingo Lopez
Linux Registered User #189436 Debian Linux Woody (Linux 2.4.19-pre6aa1)

2002-09-26 02:57:54

by Ernst Herzberg

[permalink] [raw]
Subject: Re: Very High Load, kernel 2.4.18, apache/mysql

On Mittwoch, 25. September 2002 22:16, Adam Goldstein wrote:

> [.....] It has still
> reached >25 load when apache processes reached 120 (112 active
> according to server-status) and page loads come to near dead stop...
> segfaults still exist, even with fixed mysql connection calls. :(
> 1-4/min under present 25+ load.
>
> [.....]

> Server uptime: 2 hours 10 minutes 6 seconds
> 43 requests currently being processed, 13 idle servers

> KK_WW_WW_K_KWLWWWKW_KKKK.__K_WWW_WWW_K_WWWWK_WKWW_WKK.W...W....W...W..

>> PID USER PRI NI SIZE RSS SHARE STAT %CPU %MEM TIME COMMAND
>> 16800 apache 20 0 4732 4260 2988 R 37.7 0.2 0:35 httpd
>> 21171 apache 16 0 4976 4548 3268 R 36.6 0.2 2:02 httpd
>> 6949 apache 17 0 4604 4132 2936 R 36.5 0.2 0:53 httpd
>> 29183 apache 17 0 4900 4468 3192 R 36.0 0.2 6:18 httpd

--------------------------------------------------------

Looks very bad. Not the '>25 load' - don't panic even if that reaches 50
or more, as long as the processors don't reach 100% at the same time.

First reconfigure your apache, with

MaxClients 256 # absolute minimum, maybe you have to recompile apache
MinSpareServers 100 # better 150 to 200
MaxSpareServers 200 # bring it near MaxClients

Make sure you have enough resources available. su to the apache user and
check:
ulimit -a
data seg size (kbytes) unlimited
file size (blocks) unlimited
max locked memory (kbytes) unlimited
max memory size (kbytes) unlimited
open files 65536 (!!)
pipe size (512 bytes) 8
stack size (kbytes) unlimited
cpu time (seconds) unlimited
max user processes 4095 (!!)
virtual memory (kbytes) unlimited

cat /proc/sys/fs/file-max
131072

Your machine should handle that.

Reason: bring the number of apache child forks to a minimum. But you have
to be careful: you need sufficient resources everywhere - the maximum
client connections to mysql, for example.

And increase the apache servers in several steps. If you have a bug or a
bad implementation in your php scripts, you can run out of CPU resources.

If the cpu usage is still about 100%, redesign your software or buy a
bigger machine ;-)

<Earny>

2002-09-26 17:03:54

by Joachim Breuer

[permalink] [raw]
Subject: Re: Very High Load, kernel 2.4.18, apache/mysql

Adam Goldstein <[email protected]> writes:

> [...]
> cooperative data? Personally, I can't make heads or tails of the
> vmstat output, and, I still have as of yet to get a -real- answer for
> what "load" is.. besides the knee-jerk answer of "its the avg load
> over X minutes". :)

In the olden days (at least I learnt that definition for a system
based on 3.x BSD), the "load average" is the number of runnable
processes (i.e. those that could do work if they got a slice of CPU
time) averaged over some period of time (1, 2, 5, 10 minutes).

So, naively speaking, upgrading the box to the number of CPUs indicated
by the average load will keep it well busy while getting the maximum
amount of work done. [Yes, of course this rule of thumb does not include
the considerable overhead were one to really implement that scheme - we
used this measure when scaling hardware well before SMP x86 became
competitively available.]

For Linux the load average also seems to include some notion of the
fraction of time spent waiting for disk accesses; possibly Linux
counts the number of processes which are either Runnable or Waiting
for Disk.

I don't know the concise definition in Linux's case either.
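For what it's worth, the figure Linux prints is an exponentially damped
average: every few seconds the kernel folds the current count of runnable
(plus uninterruptible-sleep) processes into the old value. A sketch of the
recurrence in awk - illustrative only, the kernel uses fixed-point
arithmetic and the constant 4 below is just an assumed steady workload:

```shell
# 1-minute load recurrence: load = load*e + n*(1-e), with e = exp(-5/60),
# applied every 5 seconds. With a constant 4 runnable processes the
# average climbs toward 4, reaching ~63% of it after one minute.
awk 'BEGIN {
    e = exp(-5/60); load = 0
    for (t = 5; t <= 300; t += 5) {
        load = load * e + 4 * (1 - e)
        if (t % 60 == 0) printf "after %3ds: %.2f\n", t, load
    }
}'
```

This is why a brief spike barely moves the 1-minute figure, while a
sustained backlog drives it toward the true queue length.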


So long,
Joe

--
"I use emacs, which might be thought of as a thermonuclear
word processor."
-- Neal Stephenson, "In the beginning... was the command line"

2002-09-26 17:11:59

by Rik van Riel

[permalink] [raw]
Subject: Re: Very High Load, kernel 2.4.18, apache/mysql

On Thu, 26 Sep 2002, Joachim Breuer wrote:

> In the olden days (at least I learnt that definition for a system
> based on 3.x BSD), the "load average" is the number of runnable
> processes (i.e. those that could do work if they got a slice of CPU
> time) averaged over some period of time (1, 2, 5, 10 minutes).

> I don't know the concise definition in Linux's case either.

Extending your definition, the load average in Linux would be:

"the number of processes that could do work if they got a slice
of CPU time or had their data in RAM instead of being blocked
on disk"

Rik
--
A: No.
Q: Should I include quotations after my reply?

http://www.surriel.com/ http://distro.conectiva.com/

2002-09-26 18:31:42

by Marco Colombo

[permalink] [raw]
Subject: Re: Very High Load, kernel 2.4.18, apache/mysql

On Thu, 26 Sep 2002, Ernst Herzberg wrote:

> First reconfigure your apache, with
>
> MaxClients 256 # absolute minimum, maybe you have to recompile apache
> MinSpareServers 100 # better 150 to 200
> MaxSpareServers 200 # bring it near MaxClients

KeepAlive On
MaxKeepAliveRequests 1000

.TM.

2002-09-26 19:23:04

by Rik van Riel

[permalink] [raw]
Subject: Re: Very High Load, kernel 2.4.18, apache/mysql

On Thu, 26 Sep 2002, Marco Colombo wrote:
> On Thu, 26 Sep 2002, Ernst Herzberg wrote:
>
> > MaxClients 256 # absolute minimum, maybe you have to recompile apache
> > MinSpareServers 100 # better 150 to 200
> > MaxSpareServers 200 # bring it near MaxClients
>
> KeepAlive On
> MaxKeepAliveRequests 1000

That sounds like an extraordinarily bad idea. You really
don't want to have ALL your apache daemons tied up with
keepalive requests.

Personally I never have MaxKeepAliveRequests set to more
than 2/3 of MaxClients.

Rik
--
A: No.
Q: Should I include quotations after my reply?

http://www.surriel.com/ http://distro.conectiva.com/

2002-09-26 19:57:33

by Marco Colombo

[permalink] [raw]
Subject: Re: Very High Load, kernel 2.4.18, apache/mysql

On Thu, 26 Sep 2002, Rik van Riel wrote:

> On Thu, 26 Sep 2002, Marco Colombo wrote:
> > On Thu, 26 Sep 2002, Ernst Herzberg wrote:
> >
> > > MaxClients 256 # absolute minimum, maybe you have to recompile apache
> > > MinSpareServers 100 # better 150 to 200
> > > MaxSpareServers 200 # bring it near MaxClients
> >
> > KeepAlive On
> > MaxKeepAliveRequests 1000
>
> That sounds like an extraordinarily bad idea. You really
> don't want to have ALL your apache daemons tied up with
> keepalive requests.

[this is sliding OT]

# MaxKeepAliveRequests: The maximum number of requests to allow
# during a persistent connection. Set to -1 to allow an unlimited amount.
# We recommend you leave this number high, for maximum performance.
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

(what "high" means is the question here, I believe)

> Personally I never have MaxKeepAliveRequests set to more
> than 2/3 of MaxClient.

There's a timeout (15 sec, by default), which kicks idle clients away.

I guess it depends on the kind of load. If you're serving just static pages,
I agree. If you're serving dynamic pages via SQL queries (especially with
authenticated connections), "session" setup cost may dominate.


Anyway, it's the "extraordinarily bad" part that I don't get.

Say we set MaxKeepAliveRequests to 190 (~2/3 of 256) instead of 1000.

How many requests does a client perform before it hits the 15 sec idle
timer? Is it 189? The apache process is stuck in the timeout phase
anyway. Is it 191? Then the first apache process drops the keepalive
connection, the client reconnects to a second server process, which
is stuck again in the timeout phase. Or am I missing something?

>
> Rik
>

.TM.

2002-09-26 20:04:04

by Rik van Riel

[permalink] [raw]
Subject: Re: Very High Load, kernel 2.4.18, apache/mysql

On Thu, 26 Sep 2002, Marco Colombo wrote:

> Say we set MaxKeepAliveRequests to 190 (~2/3 of 256) instead of 1000.
>
> How many requests does a client perform before it hits the 15 sec idle
> timer? Is it 189? The apache process is stuck in the timeout phase
> anyway. Is it 191? Then the first apache process drops the keepalive
> connection, the client reconnects to a second server process, which
> is stuck again in the timeout phase. Or am I missing something?

As I read it, MaxKeepAliveRequests is the maximum number of simultaneous
keepalive requests that can tie up apache processes.

regards,

Rik
--
A: No.
Q: Should I include quotations after my reply?

http://www.surriel.com/ http://distro.conectiva.com/

2002-09-26 20:20:07

by Ernst Herzberg

[permalink] [raw]
Subject: Re: Very High Load, kernel 2.4.18, apache/mysql


KeepAlive On # that is ok
MaxKeepAliveRequests 1000 # too high. No client will make that many requests.
Check your pages: how many images/frames/etc. do you need per request? It
should not reach 100 ;-)
KeepAliveTimeout 15 # this is the key, but it is dangerous. Too high, and
you will run out of MaxClients. But you will see that in server-status ('K').

On Donnerstag, 26. September 2002 21:27, Rik van Riel wrote:
> On Thu, 26 Sep 2002, Marco Colombo wrote:
> > On Thu, 26 Sep 2002, Ernst Herzberg wrote:
> > > MaxClients 256 # absolute minimum, maybe you have to recompile apache
> > > MinSpareServers 100 # better 150 to 200
> > > MaxSpareServers 200 # bring it near MaxClients
> >
> > KeepAlive On
> > MaxKeepAliveRequests 1000
>
> That sounds like an extraordinarily bad idea. You really
> don't want to have ALL your apache daemons tied up with
> keepalive requests.
>
> Personally I never have MaxKeepAliveRequests set to more
> than 2/3 of MaxClients.
>


<Earny>

2002-09-27 08:47:27

by Martin Brulisauer

[permalink] [raw]
Subject: Re: Very High Load, kernel 2.4.18, apache/mysql


Turn on extended status in your apache configuration file:
-> ExtendedStatus On
so you can see more information on what the server is
doing. The information looks like:

Current Time: Friday, 27-Sep-2002 10:48:59 CEST
Restart Time: Tuesday, 27-Aug-2002 17:42:47 CEST
Parent Server Generation: 32 
Server uptime: 30 days 17 hours 6 minutes 12 seconds
Total accesses: 256966 - Total Traffic: 1.7 GB
CPU Usage: u11.3027 s6.76172 cu14.4033 cs2.29785 - .00131% CPU load
.0968 requests/sec - 702 B/second - 7.1 kB/request
1 requests currently being processed, 5 idle servers

Martin


On 26 Sep 2002, at 20:36, Marco Colombo wrote:

> On Thu, 26 Sep 2002, Ernst Herzberg wrote:
>
> > First reconfigure your apache, with
> >
> > MaxClients 256 # absolute minimum, maybe you have to recompile apache
> > MinSpareServers 100 # better 150 to 200
> > MaxSpareServers 200 # bring it near MaxClients
>
> KeepAlive On
> MaxKeepAliveRequests 1000
>
> .TM.
>


2002-10-01 05:31:29

by David Rees

[permalink] [raw]
Subject: Re: Very High Load, kernel 2.4.18, apache/mysql

I can second the PHPA recommendation. Since you appear to be CPU bound
doing a lot of processing in httpd, anything you can do to speed the PHP
processing up will help. PHPA showed significant performance increases in
my tests.

-Dave

On Wed, Sep 25, 2002 at 04:16:47PM -0400, Adam Goldstein wrote:
> [...]