procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
 1  1      0 774404  49644 1713932    0    0     8    44 2680 5105 23 42  0 35
 2  1      0 774900  49448 1713668    0    0     8    28 2870 5101 22 41  0 36
swapon /dev/vg1/swap
 2  2  34460 770808  47824 1709852    0 34460   952 34472 3174 5601 21 48  2 29
 1  1 129716 759904  47460 1699364    0 95316   272 95532 3093 5641 16 49  7 29
 1  4 196728 756680  47436 1693616   64 67080    92 67104 2980 5569 18 49 24  8
 2  1 246212 754076  47396 1689240   32 49652   220 49760 3241 5405 19 51  1 29
 2  3 282404 791648  47272 1686252   16 36284   240 36392 3088 6281 18 52  9 22
 1  4 299464 847200  47260 1685208  324 17296   324 17320 3190 6199 23 46  0 30
 2  5 302316 854384  47256 1685884  944 3804  1948  3812 2723 5297 20 45  5 31
11  3 303436 861700  47384 1686400 1188 1900  2084  1912 2615 4593 21 51  9 20
swapoff -a
 2  6 301740 863436  47384 1687048 1860    4  2368   128 2541 4591 21 63  0 15
 3  4 300076 865916  47384 1687604 1672    0  2120   156 2673 5208 19 63  0 18
 2  5 297676 866288  47380 1687988 2396    0  2668   188 2632 5259 17 70  2 12
 2  2 295556 866784  47380 1687956 2352    0  2352     0 2621 5316 16 72  9  4
 5  5 293168 867156  47384 1688120 2344    0  2344    20 2651 5469 18 79  0  4
 2  4 291108 870504  47396 1688260 2036    0  2172   160 2381 4817 15 67  0 18
 3  3 289416 870612  47400 1688476 2028    0  2260   148 2602 5214 18 69  0 13
 3  3 287528 871928  47400 1688556 1928    0  2136     0 2470 5096 19 68  3 10
 2  4 285740 873896  47408 1688892 2352    0  2764   172 2490 5194 18 66  0 17
1.7gb of disk cache, 750 meg of free RAM, and it swaps out 300meg in a
matter of seconds when given access to a swapfile, then thrashes the
disk like crazy because that RAM was actively in use. From an
interactivity standpoint the machine is unusable - 1-2 second pauses
on mouse movement, 10-15 seconds for the window manager to change
window focus, keypresses take 1-2 seconds to show up, etc. When I
issue a swapoff it takes 10-15 minutes for it to slowly pull it all
back in - once it's done I run fine.
The machine has 4gb of RAM (64bit), and runs 3 VMware VMs with a total
of 1024mb allocated to them. VMware uses file-backed RAM (for some
ungodly stupid reason) which is on a separate spindle (/dev/sdb) from
swap (/dev/sda). I've got one VM pointed to a file-backed RAM on a
tmpfs (512mb) that's probably got quite a bit of memory activity.
The other main RAM hogs are the usual suspects - firefox, Xorg,
thunderbird, openoffice, pidgin, etc, the usual junk on a busy Linux
desktop.
All the sysctl tunables should be at their defaults; there are no distro
or local changes to any vm settings (the only tweaks are mmap_min_addr=0
for Wine, shmmax=256mb, and some ipv6 settings).
Ideas on where to start looking? If this is a vmware issue I'll take
it to them.
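(For reference, the trace above is plain vmstat output sampled around the
swapon/swapoff - roughly this, where the interval and device path are
from my setup:

  vmstat 5 &               # one sample line every 5 seconds
  swapon /dev/vg1/swap     # swap-out storm starts within seconds; watch si/so/bi/bo
  # ... wait ...
  swapoff -a               # swap-in crawls back at ~2mb/s
)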
On 11/18/2009 02:56 AM, Dan Merillat wrote:
> 1.7gb of disk cache, 750 meg of free RAM, and it swaps out 300meg in a
> matter of seconds when given access to a swapfile, then thrashes the
> disk like crazy because that RAM was actively in use.
>
> Ideas on where to start looking? If this is a vmware issue I'll take
> it to them.
>
This smells like a higher-order memory allocation failing, which either
causes a lot of direct reclaim or kicks kswapd until it has freed up
lots of memory. However, I do not believe we keep track of higher-order
reclaims in /proc/vmstat or anywhere else I looked :(
Hi all,
(please Cc)
Thanks Dan for your email - it was very enlightening!
I have seen the post I am replying to:
On Mon, 18 Nov 2009, Dan Merillat wrote:
> 1.7gb of disk cache, 750 meg of free RAM, and it swaps out 300meg in a
> matter of seconds when given access to a swapfile, then thrashes the
> disk like crazy because that RAM was actively in use. From an
> interactivity standpoint the machine is unusable - 1-2 second pauses
> on mouse movement, 10-15 seconds for the window manager to change
> window focus, keypresses take 1-2 seconds to show up, etc. When I
> issue a swapoff it takes 10-15 minutes for it to slowly pull it all
> back in - once it's done I run fine.
And back in the beginning of October I reported something very similar:
On Mon, 05 Oct 2009, preining wrote:
> I am experiencing IO stalls of really serious dimensions - I mean up to
> 20 seconds of waiting for some operations.
>
> That normally happens when I do an svn up on a big subversion repository,
> but it also happens in other places.
>
> Yesterday a simple sync took 30sec although I was not doing anything else.
So, after a swapoff -a, I tried what normally thrashed my system to a
quasi-halt: *two* svn up runs on really big repositories (several Gb),
plus starting a VirtualBox Windows XP guest with 1Gb of virtual RAM on
a 2Gb machine.
And lo and behold: no problem at all. Everything remained responsive and
happy. The memory was never above 60% in use, with about 40% in cache,
and all without any problems.
So that seems to be a real bug.
I am currently running 2.6.32-rc7, but experienced this already in the
2.6.31-rc versions. It is hard to pinpoint exactly where it happens.
If I can run some patches *please* let me know.
Best wishes
Norbert
-------------------------------------------------------------------------------
Dr. Norbert Preining Associate Professor
JAIST Japan Advanced Institute of Science and Technology [email protected]
Vienna University of Technology [email protected]
Debian Developer (Debian TeX Task Force) [email protected]
gpg DSA: 0x09C5B094 fp: 14DF 2E6C 0307 BE6D AD76 A9C0 D2BF 4AA3 09C5 B094
-------------------------------------------------------------------------------
GLENWHILLY (n. Scots)
A small tartan pouch worn beneath the kilt during the thistle-harvest.
--- Douglas Adams, The Meaning of Liff
Norbert Preining wrote:
> So, after a swapoff -a, I tried what normally thrashed my system to a
> quasi-halt: *two* svn up runs on really big repositories (several Gb),
> plus starting a VirtualBox Windows XP guest with 1Gb of virtual RAM on
> a 2Gb machine.
>
> And lo and behold: no problem at all. Everything remained responsive and
> happy. The memory was never above 60% in use, with about 40% in cache,
> and all without any problems.
>
> So that seems to be a real bug.
>
> I am currently running 2.6.32-rc7, but experienced this already in the
> 2.6.31-rc versions. It is hard to pinpoint exactly where it happens.
Similar here (with a 2.6.31.6 kernel).
I have a pretty powerful desktop machine with 8 GB RAM, fast disks with
RAID-1 etc.
It runs 5 (mostly idle) KVM machines, Firefox, Thunderbird, KDE4, an
image editing program, and a multiseat X session. Around 6 GB of RAM is
in use when caches/buffers are excluded.
Every 10 minutes or so, machine is really unresponsive, load jumps to 10
or 20. Mouse pointer jumps, it's impossible to change between windows etc.
Do a "swapoff -a", and everything is snappy and responsive as it should,
there are no more lags.
I noticed that with swap disabled, "Dirty" (in /proc/meminfo) is way
below 100 MB (usually, 50 MB or so).
With swap enabled, "Dirty" is usually around 300-700 MB, and from time
to time the system basically thrashes the disks and everything becomes
unresponsive.
Similar to uncompressing a big tar archive - lots of IO, system
unresponsive.
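(That is easy to watch live, by the way - a minimal sketch, assuming the
standard watch(1) utility is available:

  watch -n1 'grep -E "^(Dirty|Writeback):" /proc/meminfo'
)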
BTW, did you try:
echo 0 > /proc/sys/vm/swappiness
swapoff -a
swapon -a
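(If that helps, the setting can be made to stick across reboots - a
sketch assuming the usual /etc/sysctl.conf mechanism:

  echo "vm.swappiness = 0" >> /etc/sysctl.conf
  sysctl -p    # reload settings now
)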
--
Tomasz Chmielewski
http://wpkg.org
On Wed, Nov 18, 2009 at 4:45 PM, Tomasz Chmielewski <[email protected]> wrote:
> I have a pretty powerful desktop machine with 8 GB RAM, fast disks with
> RAID-1 etc.
> Every 10 minutes or so, machine is really unresponsive, load jumps to 10 or
> 20. Mouse pointer jumps, it's impossible to change between windows etc.
This is the type of hefty workstation many of the core developers
have, running similar workloads, so I'm somewhat surprised that the
default VM settings have this kind of issue.
> Do a "swapoff -a", and everything is snappy and responsive as it should,
> there are no more lags.
Yes, that's my exact finding.
Is this weird IO storm happening for anyone else with plenty of memory
(for their taskload)?
On Wed, 18 Nov 2009, Dan Merillat wrote:
> On Wed, Nov 18, 2009 at 4:45 PM, Tomasz Chmielewski <[email protected]> wrote:
>
> > I have a pretty powerful desktop machine with 8 GB RAM, fast disks with
> > RAID-1 etc.
>
> > Every 10 minutes or so, machine is really unresponsive, load jumps to 10 or
> > 20. Mouse pointer jumps, it's impossible to change between windows etc.
>
> This is the type of hefty workstation many of the core developers
> have, running similar workloads, so I'm somewhat surprised that the
> default VM settings have this kind of issue.
>
> > Do a "swapoff -a", and everything is snappy and responsive as it should,
> > there are no more lags.
>
> Yes, that's my exact finding.
>
> Is this weird IO storm happening for anyone else with plenty of memory
> (for their taskload)?
For me it happened on my laptop: 3gb RAM, with a 1gb VMWare Windows-XP
instance running, plus the usual suspects like firefox, thunderbird, kde4.
Without running vmware it did not happen. And since I have now disabled
barriers on the xfs /home partition (on luks crypto lvm) it also does
not happen anymore.
The machine currently has 300mb in swap, together with >2gb shown by
free as cached - since I just copied 10gb around, that's nothing I
worry about.
For me, when it happened, it looked as if, whenever a big bunch of data
was to be committed to disk, xfs would seek between the
superblock/journal and the data and sync/wait between each step -
reducing the write speed of the disk from the normal >20mb/s to less
than 2mb/s and delaying all other disk accesses in the meantime.
c'ya
sven
ps: I know that by disabling barriers I sacrifice some data safety, but
better a usable and fast system than everything ultra-safe and slow -
for safety in such a rare case there is a backup.
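(For the record, turning barriers off on xfs is just the nobarrier mount
option - sketched here, with the fstab line illustrative of my layout:

  mount -o remount,nobarrier /home
  # or persistently, in /etc/fstab:
  /dev/mapper/crypt-home  /home  xfs  defaults,nobarrier  0 2
)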
--
The lights are fading out, once more...
On Thu, Nov 19, 2009 at 02:09:09AM +0100, Sven-Haegar Koch wrote:
> On Wed, 18 Nov 2009, Dan Merillat wrote:
>
> > On Wed, Nov 18, 2009 at 4:45 PM, Tomasz Chmielewski <[email protected]> wrote:
> >
> > > I have a pretty powerful desktop machine with 8 GB RAM, fast disks with
> > > RAID-1 etc.
> >
> > > Every 10 minutes or so, machine is really unresponsive, load jumps to 10 or
> > > 20. Mouse pointer jumps, it's impossible to change between windows etc.
> >
> > This is the type of hefty workstation many of the core developers
> > have, running similar workloads, so I'm somewhat surprised that the
> > default VM settings have this kind of issue.
> >
> > > Do a "swapoff -a", and everything is snappy and responsive as it should,
> > > there are no more lags.
> >
> > Yes, that's my exact finding.
> >
> > Is this weird IO storm happening for anyone else with plenty of memory
> > (for their taskload)?
>
> For me it happened on my laptop: 3gb RAM, with a 1gb VMWare Windows-XP
> instance running, plus the usual suspects like firefox, thunderbird, kde4.
>
> Without running vmware it did not happen. And since I have now disabled
> barriers on the xfs /home partition (on luks crypto lvm) it also does
> not happen anymore.
Yeah, it looks like dm-crypt recently started supporting barriers in
commit 647c7db14ef9cacc4ccb3683e206b61f0de6dc2b. Hence XFS will have
detected that barriers work at mount time and so is now issuing them.
Similarly, raid1 (mirror) has recently gained barrier support so
the same issue can be seen there.
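(A quick way to check whether a given setup is affected is to look for
barrier messages at mount time - the exact wording varies by kernel
version:

  dmesg | grep -i barrier

If xfs had to turn barriers off it logs the fact; no message generally
means barriers are in effect.)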
Cheers,
Dave.
--
Dave Chinner
[email protected]
Dave Chinner wrote:
>>>> I have a pretty powerful desktop machine with 8 GB RAM, fast disks with
>>>> RAID-1 etc.
>>>> Every 10 minutes or so, machine is really unresponsive, load jumps to 10 or
>>>> 20. Mouse pointer jumps, it's impossible to change between windows etc.
>>> This is the type of hefty workstation many of the core developers
>>> have, running similar workloads, so I'm somewhat surprised that the
>>> default VM settings have this kind of issue.
>>>
>>>> Do a "swapoff -a", and everything is snappy and responsive as it should,
>>>> there are no more lags.
>>> Yes, that's my exact finding.
>>>
>>> Is this weird IO storm happening for anyone else with plenty of memory
>>> (for their taskload)?
>> For me it happened on my laptop: 3gb RAM, with a 1gb VMWare Windows-XP
>> instance running, plus the usual suspects like firefox, thunderbird, kde4.
>>
>> Without running vmware it did not happen. And since I have now disabled
>> barriers on the xfs /home partition (on luks crypto lvm) it also does
>> not happen anymore.
>
> Yeah, it looks like dm-crypt recently started supporting barriers in
> commit 647c7db14ef9cacc4ccb3683e206b61f0de6dc2b. Hence XFS will have
> detected that barriers work at mount time and so is now issuing them.
>
> Similarly, raid1 (mirror) has recently gained barrier support so
> the same issue can be seen there.
What is also interesting is that a normal software RAID-1 resync (i.e.
from a degraded state) does not seem to have any visible effect on
system responsiveness.
Uncompress a big tar file, or have a VM write out lots of data, and the
system becomes really unresponsive.
--
Tomasz Chmielewski
http://wpkg.org
Hi Dan,
2009/11/18 Dan Merillat <[email protected]>:
> procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
>  r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
>  1  1      0 774404  49644 1713932    0    0     8    44 2680 5105 23 42  0 35
>  2  1      0 774900  49448 1713668    0    0     8    28 2870 5101 22 41  0 36
>
> swapon /dev/vg1/swap
>
>  2  2  34460 770808  47824 1709852    0 34460   952 34472 3174 5601 21 48  2 29
>  1  1 129716 759904  47460 1699364    0 95316   272 95532 3093 5641 16 49  7 29
>  1  4 196728 756680  47436 1693616   64 67080    92 67104 2980 5569 18 49 24  8
>  2  1 246212 754076  47396 1689240   32 49652   220 49760 3241 5405 19 51  1 29
>  2  3 282404 791648  47272 1686252   16 36284   240 36392 3088 6281 18 52  9 22
>  1  4 299464 847200  47260 1685208  324 17296   324 17320 3190 6199 23 46  0 30
>  2  5 302316 854384  47256 1685884  944 3804  1948  3812 2723 5297 20 45  5 31
> 11  3 303436 861700  47384 1686400 1188 1900  2084  1912 2615 4593 21 51  9 20
> swapoff -a
>  2  6 301740 863436  47384 1687048 1860    4  2368   128 2541 4591 21 63  0 15
>  3  4 300076 865916  47384 1687604 1672    0  2120   156 2673 5208 19 63  0 18
>  2  5 297676 866288  47380 1687988 2396    0  2668   188 2632 5259 17 70  2 12
>  2  2 295556 866784  47380 1687956 2352    0  2352     0 2621 5316 16 72  9  4
>  5  5 293168 867156  47384 1688120 2344    0  2344    20 2651 5469 18 79  0  4
>  2  4 291108 870504  47396 1688260 2036    0  2172   160 2381 4817 15 67  0 18
>  3  3 289416 870612  47400 1688476 2028    0  2260   148 2602 5214 18 69  0 13
>  3  3 287528 871928  47400 1688556 1928    0  2136     0 2470 5096 19 68  3 10
>  2  4 285740 873896  47408 1688892 2352    0  2764   172 2490 5194 18 66  0 17
>
> 1.7gb of disk cache, 750 meg of free RAM, and it swaps out 300meg in a
> matter of seconds when given access to a swapfile, then thrashes the
> disk like crazy because that RAM was actively in use. From an
> interactivity standpoint the machine is unusable - 1-2 second pauses
> on mouse movement, 10-15 seconds for the window manager to change
> window focus, keypresses take 1-2 seconds to show up, etc. When I
> issue a swapoff it takes 10-15 minutes for it to slowly pull it all
> back in - once it's done I run fine.
>
> The machine has 4gb of RAM (64bit), and runs 3 VMware VMs with a total
> of 1024mb allocated to them. VMware uses file-backed RAM (for some
> ungodly stupid reason) which is on a separate spindle (/dev/sdb) from
> swap (/dev/sda). I've got one VM pointed to a file-backed RAM on a
> tmpfs (512mb) that's probably got quite a bit of memory activity.
>
> The other main RAM hogs are the usual suspects - firefox, Xorg,
> thunderbird, openoffice, pidgin, etc, the usual junk on a busy Linux
> desktop.
>
> All the sysctl tunables should be at their defaults; there are no distro
> or local changes to any vm settings (the only tweaks are mmap_min_addr=0
> for Wine, shmmax=256mb, and some ipv6 settings).
>
> Ideas on where to start looking? If this is a vmware issue I'll take
> it to them.
Umm, very strange.
I made two debug patches. Can you please apply them and post the
following command output?
% cat /proc/meminfo
% cat /proc/vmstat
% cat /proc/zoneinfo
# cat /proc/filecache | sort -nr -k3 |head -30
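(A convenience sketch for capturing all four into one file -
/proc/filecache only exists with the debug patch applied:

  for f in meminfo vmstat zoneinfo; do echo "== $f =="; cat /proc/$f; done > vmdebug.txt
  sort -nr -k3 /proc/filecache | head -30 >> vmdebug.txt
)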
On Thu, Nov 19, 2009 at 1:43 AM, Tomasz Chmielewski <[email protected]> wrote:
> What is also interesting is that a normal software RAID-1 resync (i.e.
> from a degraded state) does not seem to have any visible effect on
> system responsiveness.
>
> Uncompress a big tar file, or have a VM write out lots of data, and the
> system becomes really unresponsive.
Setup:
XFS -> LV -> md0 (degraded) -> ST3500630AS ver 3.AA (500gb)
Basically no other disk activity while doing these tests.
barriers on:
$ time git checkout -f
Checking out files: 100% (29108/29108), done.
real 2m51.913s
user 0m3.128s
sys 0m3.004s
$ time rm -r *
real 1m52.562s
user 0m0.072s
sys 0m2.980s
$ sudo mount /usr/src -o remount,nobarrier
$ time git checkout -f
Checking out files: 100% (29108/29108), done.
real 0m9.782s
user 0m2.944s
sys 0m2.984s
$ time rm -r *
real 0m24.996s
user 0m0.076s
sys 0m2.808s
So XFS + barriers are a big part of the culprit here, but I was only
using xfs for /usr/src; I reformatted that to ext4 after I found that
little nugget of joy. Ext4 + barriers isn't anywhere near as dramatic a
hit - it's a much more reasonable speed/safety tradeoff. Again, xfs was
only on /usr/src and the rest of my system was already ext4, so it
doesn't explain any of the other workload problems. btrfs doesn't seem
to take much of a hit at all with barriers, but I'd need to test that
properly - it's below 6 seconds for the checkout, and the differences
would be in the noise without averaging multiple runs.
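(Repeating that properly would look something like this - run as root so
the caches can be dropped between iterations:

  for i in 1 2 3; do
      sync
      echo 3 > /proc/sys/vm/drop_caches   # drop page cache, dentries, inodes
      time git checkout -f
  done
)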
On Thu, Nov 19, 2009 at 9:36 AM, KOSAKI Motohiro
<[email protected]> wrote:
> Hi Dan,
>
> Umm, very strange.
> I made two debug patches. Can you please apply them and post the
> following command output?
>
> % cat /proc/meminfo
> % cat /proc/vmstat
> % cat /proc/zoneinfo
> # cat /proc/filecache | sort -nr -k3 |head -30
Unfortunately not - it doesn't compile on 2.6.31, which means I'd have
to re-port vmware & fglrx just to test it.
CC fs/proc/filecache.o
fs/proc/filecache.c: In function 'iwin_fill':
fs/proc/filecache.c:108: error: 'bdi_lock' undeclared (first use in this function)
fs/proc/filecache.c:108: error: (Each undeclared identifier is reported only once
fs/proc/filecache.c:108: error: for each function it appears in.)
fs/proc/filecache.c:109: error: 'struct backing_dev_info' has no member named 'bdi_list'
fs/proc/filecache.c:109: warning: type defaults to 'int' in declaration of '__mptr'
fs/proc/filecache.c:109: error: 'bdi_list' undeclared (first use in this function)
fs/proc/filecache.c:109: error: 'struct backing_dev_info' has no member named 'bdi_list'
fs/proc/filecache.c:109: error: 'struct backing_dev_info' has no member named 'bdi_list'
And I can't forward-port the patch without major work, since the whole
bdi_writeback structure was introduced after 2.6.31. The recent-rotated
patch works; I'll include that data as soon as I get the memory
pressure back up.
Right now it's behaving correctly after the reboot - the usual sign of
a problem is free memory going way up while swapping like mad. I'm
putting a lot of memory pressure on the kernel post-reboot but swap is
behaving normally. I hope this doesn't take multiple days of uptime
to get back into that state.
On Thu, Nov 19, 2009 at 9:36 AM, KOSAKI Motohiro
<[email protected]> wrote:
> Hi Dan,
>
> Umm, very strange.
> I made two debug patches. Can you please apply them and post the
> following command output?
>
> % cat /proc/meminfo
> % cat /proc/vmstat
> % cat /proc/zoneinfo
> # cat /proc/filecache | sort -nr -k3 |head -30
As I said, I can't give you the filecache info, but here are two datasets.
The amount of cache is due to 1.5gb of mmapped vmware guest backing files.
First, 400mb of RAM "free" but still swapping out - usable, with a few
pauses as apps swap back in.
meminfo
MemTotal: 3929040 kB
MemFree: 417748 kB
Buffers: 98348 kB
Cached: 2243696 kB
SwapCached: 149480 kB
Active: 1687316 kB
Inactive: 1527104 kB
Active(anon): 1042292 kB
Inactive(anon): 477736 kB
Active(file): 645024 kB
Inactive(file): 1049368 kB
Unevictable: 20 kB
Mlocked: 20 kB
SwapTotal: 3903480 kB
SwapFree: 3249496 kB
Dirty: 224 kB
Writeback: 0 kB
AnonPages: 759152 kB
Mapped: 398180 kB
Slab: 101844 kB
SReclaimable: 60864 kB
SUnreclaim: 40980 kB
PageTables: 38656 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 5868000 kB
Committed_AS: 3989292 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 302588 kB
VmallocChunk: 34359432695 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
DirectMap4k: 40832 kB
DirectMap2M: 4020224 kB
vmstat
nr_free_pages 104437
nr_inactive_anon 119434
nr_active_anon 260573
nr_inactive_file 262342
nr_active_file 161256
nr_unevictable 5
nr_mlock 5
nr_anon_pages 189788
nr_mapped 99545
nr_file_pages 622881
nr_dirty 56
nr_writeback 0
nr_slab_reclaimable 15216
nr_slab_unreclaimable 10245
nr_page_table_pages 9664
nr_unstable 0
nr_bounce 0
nr_vmscan_write 13542637
nr_writeback_temp 0
numa_hit 378246197
numa_miss 0
numa_foreign 0
numa_interleave 8130
numa_local 378246197
numa_other 0
pgpgin 134039596
pgpgout 83142977
pswpin 168794
pswpout 282875
pgalloc_dma 28902
pgalloc_dma32 244979577
pgalloc_normal 135020061
pgalloc_movable 0
pgfree 380133342
pgactivate 2301807
pgdeactivate 1966072
pgfault 123713739
pgmajfault 168717
pgrefill_dma 447
pgrefill_dma32 818890
pgrefill_normal 263325
pgrefill_movable 0
pgsteal_dma 62
pgsteal_dma32 41972665
pgsteal_normal 8076187
pgsteal_movable 0
pgscan_kswapd_dma 192
pgscan_kswapd_dma32 100342332
pgscan_kswapd_normal 19655823
pgscan_kswapd_movable 0
pgscan_direct_dma 0
pgscan_direct_dma32 9364278
pgscan_direct_normal 2242788
pgscan_direct_movable 0
zone_reclaim_failed 0
pginodesteal 6375
slabs_scanned 1234816
kswapd_steal 47729503
kswapd_inodesteal 242151
pageoutrun 409691
allocstall 26919
pgrotated 283193
htlb_buddy_alloc_success 0
htlb_buddy_alloc_fail 0
unevictable_pgs_culled 9875
unevictable_pgs_scanned 0
unevictable_pgs_rescued 40104
unevictable_pgs_mlocked 42144
unevictable_pgs_munlocked 41486
unevictable_pgs_cleared 0
unevictable_pgs_stranded 0
unevictable_pgs_mlockfreed 0
zoneinfo
Node 0, zone DMA
pages free 3887
min 7
low 8
high 10
scanned 0
spanned 4096
present 3839
nr_free_pages 3887
nr_inactive_anon 15
nr_active_anon 0
nr_inactive_file 18
nr_active_file 56
nr_unevictable 0
nr_mlock 0
nr_anon_pages 7
nr_mapped 2
nr_file_pages 82
nr_dirty 0
nr_writeback 0
nr_slab_reclaimable 5
nr_slab_unreclaimable 2
nr_page_table_pages 0
nr_unstable 0
nr_bounce 0
nr_vmscan_write 0
nr_writeback_temp 0
numa_hit 21161
numa_miss 0
numa_foreign 0
numa_interleave 0
numa_local 21161
numa_other 0
protection: (0, 3126, 3883, 3883)
pagesets
cpu: 0
count: 0
high: 0
batch: 1
vm stats threshold: 4
cpu: 1
count: 0
high: 0
batch: 1
vm stats threshold: 4
all_unreclaimable: 0
prev_priority: 12
start_pfn: 0
inactive_ratio: 1
recent_rotated_anon: 0
recent_scanned_anon: 2
recent_rotated_file: 3
recent_scanned_file: 12
Node 0, zone DMA32
pages free 100087
min 1602
low 2002
high 2403
scanned 0
spanned 1044480
present 800280
nr_free_pages 100087
nr_inactive_anon 62741
nr_active_anon 217794
nr_inactive_file 226981
nr_active_file 136874
nr_unevictable 5
nr_mlock 5
nr_anon_pages 130228
nr_mapped 83692
nr_file_pages 519311
nr_dirty 39
nr_writeback 0
nr_slab_reclaimable 10649
nr_slab_unreclaimable 5709
nr_page_table_pages 4812
nr_unstable 0
nr_bounce 0
nr_vmscan_write 10954292
nr_writeback_temp 0
numa_hit 243347959
numa_miss 0
numa_foreign 0
numa_interleave 0
numa_local 243347959
numa_other 0
protection: (0, 0, 757, 757)
pagesets
cpu: 0
count: 134
high: 186
batch: 31
vm stats threshold: 24
cpu: 1
count: 11
high: 186
batch: 31
vm stats threshold: 24
all_unreclaimable: 0
prev_priority: 12
start_pfn: 4096
inactive_ratio: 5
recent_rotated_anon: 33921
recent_scanned_anon: 77920
recent_rotated_file: 459
recent_scanned_file: 80092
Node 0, zone Normal
pages free 463
min 388
low 485
high 582
scanned 0
spanned 196608
present 193920
nr_free_pages 463
nr_inactive_anon 56678
nr_active_anon 42779
nr_inactive_file 35343
nr_active_file 24326
nr_unevictable 0
nr_mlock 0
nr_anon_pages 59553
nr_mapped 15851
nr_file_pages 103488
nr_dirty 17
nr_writeback 0
nr_slab_reclaimable 4562
nr_slab_unreclaimable 4534
nr_page_table_pages 4852
nr_unstable 0
nr_bounce 0
nr_vmscan_write 2588345
nr_writeback_temp 0
numa_hit 134877151
numa_miss 0
numa_foreign 0
numa_interleave 8130
numa_local 134877151
numa_other 0
protection: (0, 0, 0, 0)
pagesets
cpu: 0
count: 149
high: 186
batch: 31
vm stats threshold: 16
cpu: 1
count: 77
high: 186
batch: 31
vm stats threshold: 16
all_unreclaimable: 0
prev_priority: 12
start_pfn: 1048576
inactive_ratio: 1
recent_rotated_anon: 6122
recent_scanned_anon: 14889
recent_rotated_file: 491
recent_scanned_file: 9600
The second dataset was captured during a thrash storm (I think - it may
not have dumped until the end of it).
Note that the kernel claims over 500mb of free RAM that is not being
used while we swap.
meminfo
MemTotal: 3929040 kB
MemFree: 544676 kB
Buffers: 123256 kB
Cached: 2097536 kB
SwapCached: 184016 kB
Active: 1703160 kB
Inactive: 1376904 kB
Active(anon): 990688 kB
Inactive(anon): 400420 kB
Active(file): 712472 kB
Inactive(file): 976484 kB
Unevictable: 20 kB
Mlocked: 20 kB
SwapTotal: 3903480 kB
SwapFree: 3023160 kB
Dirty: 6844 kB
Writeback: 112 kB
AnonPages: 740368 kB
Mapped: 455792 kB
Slab: 109176 kB
SReclaimable: 63088 kB
SUnreclaim: 46088 kB
PageTables: 39540 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 5868000 kB
Committed_AS: 4134332 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 302588 kB
VmallocChunk: 34359432695 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
DirectMap4k: 40832 kB
DirectMap2M: 4020224 kB
vmstat
nr_free_pages 136138
nr_inactive_anon 100105
nr_active_anon 247672
nr_inactive_file 244146
nr_active_file 178118
nr_unevictable 5
nr_mlock 5
nr_anon_pages 185092
nr_mapped 113948
nr_file_pages 601227
nr_dirty 1711
nr_writeback 28
nr_slab_reclaimable 15772
nr_slab_unreclaimable 11522
nr_page_table_pages 9885
nr_unstable 0
nr_bounce 0
nr_vmscan_write 14020410
nr_writeback_temp 0
numa_hit 438013705
numa_miss 0
numa_foreign 0
numa_interleave 8130
numa_local 438013705
numa_other 0
pgpgin 137757163
pgpgout 89200634
pswpin 410261
pswpout 464160
pgalloc_dma 28902
pgalloc_dma32 291710526
pgalloc_normal 149182097
pgalloc_movable 0
pgfree 441058114
pgactivate 3112124
pgdeactivate 2717995
pgfault 185871068
pgmajfault 254976
pgrefill_dma 447
pgrefill_dma32 839061
pgrefill_normal 263472
pgrefill_movable 0
pgsteal_dma 62
pgsteal_dma32 42581791
pgsteal_normal 8623195
pgsteal_movable 0
pgscan_kswapd_dma 192
pgscan_kswapd_dma32 115131371
pgscan_kswapd_normal 24244847
pgscan_kswapd_movable 0
pgscan_direct_dma 0
pgscan_direct_dma32 9390346
pgscan_direct_normal 2249551
pgscan_direct_movable 0
zone_reclaim_failed 0
pginodesteal 6375
slabs_scanned 2123520
kswapd_steal 48883166
kswapd_inodesteal 310652
pageoutrun 426393
allocstall 26962
pgrotated 464504
htlb_buddy_alloc_success 0
htlb_buddy_alloc_fail 0
unevictable_pgs_culled 9875
unevictable_pgs_scanned 0
unevictable_pgs_rescued 40104
unevictable_pgs_mlocked 42211
unevictable_pgs_munlocked 41486
unevictable_pgs_cleared 0
unevictable_pgs_stranded 0
unevictable_pgs_mlockfreed 0
zoneinfo
Node 0, zone DMA
pages free 3888
min 7
low 8
high 10
scanned 0
spanned 4096
present 3839
nr_free_pages 3888
nr_inactive_anon 14
nr_active_anon 0
nr_inactive_file 18
nr_active_file 56
nr_unevictable 0
nr_mlock 0
nr_anon_pages 6
nr_mapped 3
nr_file_pages 82
nr_dirty 0
nr_writeback 0
nr_slab_reclaimable 5
nr_slab_unreclaimable 2
nr_page_table_pages 0
nr_unstable 0
nr_bounce 0
nr_vmscan_write 0
nr_writeback_temp 0
numa_hit 21161
numa_miss 0
numa_foreign 0
numa_interleave 0
numa_local 21161
numa_other 0
protection: (0, 3126, 3883, 3883)
pagesets
cpu: 0
count: 0
high: 0
batch: 1
vm stats threshold: 4
cpu: 1
count: 0
high: 0
batch: 1
vm stats threshold: 4
all_unreclaimable: 0
prev_priority: 7
start_pfn: 0
inactive_ratio: 1
recent_rotated_anon: 0
recent_scanned_anon: 2
recent_rotated_file: 3
recent_scanned_file: 12
Node 0, zone DMA32
pages free 117449
min 1602
low 2002
high 2403
scanned 0
spanned 1044480
present 800280
nr_free_pages 117449
nr_inactive_anon 51601
nr_active_anon 208680
nr_inactive_file 210820
nr_active_file 154607
nr_unevictable 5
nr_mlock 5
nr_anon_pages 128789
nr_mapped 98197
nr_file_pages 508828
nr_dirty 1501
nr_writeback 0
nr_slab_reclaimable 11187
nr_slab_unreclaimable 6932
nr_page_table_pages 4798
nr_unstable 0
nr_bounce 0
nr_vmscan_write 11175938
nr_writeback_temp 0
numa_hit 288976607
numa_miss 0
numa_foreign 0
numa_interleave 0
numa_local 288976607
numa_other 0
protection: (0, 0, 757, 757)
pagesets
cpu: 0
count: 182
high: 186
batch: 31
vm stats threshold: 24
cpu: 1
count: 166
high: 186
batch: 31
vm stats threshold: 24
all_unreclaimable: 0
prev_priority: 7
start_pfn: 4096
inactive_ratio: 5
recent_rotated_anon: 34440
recent_scanned_anon: 39630
recent_rotated_file: 416
recent_scanned_file: 59261
Node 0, zone Normal
pages free 14801
min 388
low 485
high 582
scanned 0
spanned 196608
present 193920
nr_free_pages 14801
nr_inactive_anon 48490
nr_active_anon 38992
nr_inactive_file 33308
nr_active_file 23455
nr_unevictable 0
nr_mlock 0
nr_anon_pages 56297
nr_mapped 15748
nr_file_pages 92317
nr_dirty 210
nr_writeback 28
nr_slab_reclaimable 4580
nr_slab_unreclaimable 4588
nr_page_table_pages 5087
nr_unstable 0
nr_bounce 0
nr_vmscan_write 2844472
nr_writeback_temp 0
numa_hit 149016012
numa_miss 0
numa_foreign 0
numa_interleave 8130
numa_local 149016012
numa_other 0
protection: (0, 0, 0, 0)
pagesets
cpu: 0
count: 28
high: 186
batch: 31
vm stats threshold: 16
cpu: 1
count: 61
high: 186
batch: 31
vm stats threshold: 16
all_unreclaimable: 0
prev_priority: 7
start_pfn: 1048576
inactive_ratio: 1
recent_rotated_anon: 8094
recent_scanned_anon: 17955
recent_rotated_file: 229
recent_scanned_file: 14296
On 11/25/2009 06:13 PM, Dan Merillat wrote:
> On Thu, Nov 19, 2009 at 9:36 AM, KOSAKI Motohiro
> <[email protected]> wrote:
>
>> Hi Dan,
>>
>> Umm, very strange.
>> I made two debug patches. Can you please apply them and post the
>> following command output?
>>
>> % cat /proc/meminfo
>> % cat /proc/vmstat
>> % cat /proc/zoneinfo
>> # cat /proc/filecache | sort -nr -k3 |head -30
>>
> As I said, I can't give you the filecache info, but here are two datasets.
>
> The amount of cache is due to 1.5gb of mmapped vmware guest backing files.
>
> First, 400mb of RAM "free" but still swapping out - usable, with a few
> pauses as apps swap back in.
>
>
Can you try out the patch from
http://lkml.org/lkml/2009/11/25/467 ?
Dan Merillat wrote:
> On Wed, Nov 18, 2009 at 4:45 PM, Tomasz Chmielewski <[email protected]> wrote:
>
>> I have a pretty powerful desktop machine with 8 GB RAM, fast disks with
>> RAID-1 etc.
>
>> Every 10 minutes or so, machine is really unresponsive, load jumps to 10 or
>> 20. Mouse pointer jumps, it's impossible to change between windows etc.
>
> This is the type of hefty workstation many of the core developers
> have, running similar workloads, so I'm somewhat surprised that the
> default VM settings have this kind of issue.
>
>> Do a "swapoff -a", and everything is snappy and responsive as it should,
>> there are no more lags.
>
> Yes, that's my exact finding.
>
> Is this weird IO storm happening for anyone else with plenty of memory
> (for their taskload)?
My load is virtually the same as Tomasz describes: virtual machines
(including this desktop), little backup servers on CentOS-5.3, production
Fedora-9 and 10, testing FC11 and 12. I see none of this problem, with
either the Fedora kernel or my own build of 2.6.31 (whatever was stable
two weeks ago).
What is not the same is that all of them are started from the CLI and
scripts, because much of my VM setup predates libvirt. So there is
nothing between qemu-kvm and my fingers or boot scripts.
I have no idea if this is useful or significant, just sharing it "in case."
--
Bill Davidsen <[email protected]>
"We have more to fear from the bungling of the incompetent than from
the machinations of the wicked." - from Slashdot
>
> Can you try out the patch from
> http://lkml.org/lkml/2009/11/25/467 ?
It's on the system now - I missed your reply for some reason. Stupid gmail.
So far it has survived the 'open all in tabs' firefox test, which used
to throw the machine into complete disarray; I also threw a 1gb file
into tmpfs, and it swapped out 150mb without thrashing.
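(The tmpfs test was nothing fancy - something along these lines, with
the path illustrative:

  dd if=/dev/zero of=/dev/shm/bigfile bs=1M count=1024
)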
It's now running all the things I normally do in a day, all at once -
so far nothing, but it seems to take a while before things go
pear-shaped. I'll give another update in a day or so.
As an aside, why is swapin so incredibly slow compared to swapout? I
mean, I understand why (pagefault -> swapin, *wait*, run process, new
pagefault -> swapin, etc.), but wouldn't a much larger (4mb?) swap
granularity help with modern high-speed drives? I've seen a high of
2mb/sec swapin, and 100+mb/sec swapout.
swapoff /dev/swap shows this very clearly - theoretically it should be
"drop enough clean pages to fit everything into RAM, then read the
swap partition in linearly". It's especially egregious in the case
where a memory hog has terminated, leaving more free RAM than swapped
pages, yet swapoff still takes upwards of 10 minutes to complete what
should be a 2-second linear disk read.
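(The closest existing knob I know of is /proc/sys/vm/page-cluster, which
controls swap readahead as 2^N pages per fault - default 3, i.e. 8 pages
or 32kb, still nowhere near 4mb:

  cat /proc/sys/vm/page-cluster        # default is 3 -> 2^3 = 8 pages
  echo 5 > /proc/sys/vm/page-cluster   # experiment: 32 pages per fault
)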
>> Can you try out the patch from
>> http://lkml.org/lkml/2009/11/25/467 ?
After a week of testing, that seems to have been the issue, thanks!
On Mon, 07 Dec 2009, Dan Merillat wrote:
> >> Can you try out the patch from
> >> http://lkml.org/lkml/2009/11/25/467 ?
>
> After a week of testing, that seems to have been the issue, thanks!
In that case, is that patch being considered for stable@ ?
--
"One disk to rule them all, One disk to find them. One disk to bring
them all and in the darkness grind them. In the Land of Redmond
where the shadows lie." -- The Silicon Valley Tarot
Henrique Holschuh
On 12/09/2009 03:48 PM, Henrique de Moraes Holschuh wrote:
> On Mon, 07 Dec 2009, Dan Merillat wrote:
>>>> Can you try out the patch from
>>>> http://lkml.org/lkml/2009/11/25/467 ?
>>
>> After a week of testing, that seems to have been the issue, thanks!
>
> In that case, is that patch being considered for stable@ ?
That may be an option, after Andrew has submitted the patch
to Linus for 2.6.33, which he typically does towards the end
of the merge window.
I will certainly ask the Fedora kernel maintainers to take the patch
in for their 2.6.32-based kernel.
--
All rights reversed.