Greetings,
(Please CC responses as I am not subscribed to the list. Thanks!)
I've recently started experiencing the following problem on one of my
Linux servers:
allocation failed: out of vmalloc space - use vmalloc=<size> to increase
size.
allocation failed: out of vmalloc space - use vmalloc=<size> to increase
size.
XFS: possible memory allocation deadlock in kmem_alloc (mode:0x2d0)
XFS: possible memory allocation deadlock in kmem_alloc (mode:0x2d0)
...and so on until it completely locks up and needs a reboot.
From what I can tell from fs/xfs/linux-2.6/kmem.c, the XFS message is
just another confirmation that the machine has run out of vmalloc space.
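For reference, the retry loop in that file looks roughly like this
(paraphrased from memory of the 2.6.11-era source, so treat it as a
sketch rather than a verbatim copy) - it simply keeps retrying the
allocation and prints the warning every so many failed attempts:

/*
 * Sketch of kmem_alloc() from fs/xfs/linux-2.6/kmem.c (2.6.11-era),
 * paraphrased from memory - names and details may differ slightly.
 */
void *
kmem_alloc(size_t size, unsigned int flags)
{
        unsigned int    lflags = kmem_flags_convert(flags);
        int             retries = 0;
        void            *ptr;

        do {
                /* big requests go through vmalloc, small ones through kmalloc */
                if (size < MAX_SLAB_SIZE)
                        ptr = kmalloc(size, lflags);
                else
                        ptr = __vmalloc(size, lflags, PAGE_KERNEL);

                if (ptr || (flags & (KM_MAYFAIL | KM_NOSLEEP)))
                        return ptr;

                /* complain every ~100 failed attempts, then keep trying */
                if (!(++retries % 100))
                        printk(KERN_ERR "XFS: possible memory allocation "
                                        "deadlock in kmem_alloc (mode:0x%x)\n",
                                        lflags);
                blk_congestion_wait(WRITE, HZ/50);
        } while (1);
}

In other words, once __vmalloc() starts failing, XFS just loops and
complains - which matches the "out of vmalloc space" messages above.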
The machine has 4GB of RAM and is running the 2.6.11.5 kernel.
I have tried to specify vmalloc=256m to start with, but no luck - the
machine does not even want to boot. It panics with:
EXT2-fs: unable to read superblock
isofs_fill_super: bread failed, dev=md0, iso_blknum=16, block=32
XFS: SB read failed
VFS: Cannot open root device "md0" or unknown-block(9,0)
Please append a correct "root=" boot option
Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-
block(9,0)
If I remove the "vmalloc" parameter, it boots just fine but then after
some hours, when the load on the server goes up, I get the above
"request" to increase vmalloc. Being desperate to find the way out, I
have also tried increasing the hardcoded value in arch/i386/mm/init.c,
but ended up with the same effect as with the parameter - panic on boot.
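For reference, the parameter is passed from GRUB roughly like this
(the menu.lst entry below is only an illustration - the partition and
the kernel/initrd file names are placeholders for my real ones):

title  Linux 2.6.11.5
root   (hd0,0)
kernel /boot/vmlinuz-2.6.11.5 ro root=/dev/md0 vmalloc=256m
initrd /boot/initrd-2.6.11.5.img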
/proc/meminfo says (while the system is up and running):
MemTotal: 4073244 kB
MemFree: 144356 kB
Buffers: 1184 kB
Cached: 2735576 kB
SwapCached: 0 kB
Active: 921804 kB
Inactive: 2408800 kB
HighTotal: 3193792 kB
HighFree: 896 kB
LowTotal: 879452 kB
LowFree: 143460 kB
SwapTotal: 7341600 kB
SwapFree: 7341600 kB
Dirty: 50940 kB
Writeback: 0 kB
Mapped: 613172 kB
Slab: 498936 kB
CommitLimit: 9378220 kB
Committed_AS: 736392 kB
PageTables: 1760 kB
VmallocTotal: 114680 kB
VmallocUsed: 88996 kB
VmallocChunk: 20988 kB
HugePages_Total: 0
HugePages_Free: 0
Hugepagesize: 4096 kB
I have also tried changing the following parameters - but no luck
either:
vm.lower_zone_protection = 900
vm.min_free_kbytes = 30000
vm.vfs_cache_pressure = 150
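(These can be set at runtime with sysctl, e.g.:

sysctl -w vm.lower_zone_protection=900
sysctl -w vm.min_free_kbytes=30000
sysctl -w vm.vfs_cache_pressure=150

or made persistent through /etc/sysctl.conf.)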
Please help! What am I doing wrong?
Also, if this question does not belong here, please point me in the
right direction :).
Best regards,
Ranko
(please do CC replies as I am still not on the list)
As I am kind of pressured to resolve this issue, I've set up a test
environment using VMWare in order to reproduce the problem and
(un)fortunately the attempt was successful.
I have noticed a few things that relate the size of the physical RAM
to the behavior of vmalloc. As I am not sure whether this is by design
or a bug, could someone please enlighten me:
The strange thing I have seen is that as the physical RAM increases,
VmallocTotal in /proc/meminfo gets smaller! Is this how it is supposed
to be?
Namely (tests done with VMWare, using the 2.6.11.5 kernel):
/proc/meminfo on a machine with 256M of physical RAM plus 512M swap:
MemTotal: 254580 kB
MemFree: 220504 kB
Buffers: 4928 kB
Cached: 18212 kB
SwapCached: 0 kB
Active: 13360 kB
Inactive: 12604 kB
HighTotal: 0 kB
HighFree: 0 kB
LowTotal: 254580 kB
LowFree: 220504 kB
SwapTotal: 530128 kB
SwapFree: 530128 kB
Dirty: 12 kB
Writeback: 0 kB
Mapped: 6440 kB
Slab: 5812 kB
CommitLimit: 657416 kB
Committed_AS: 7272 kB
PageTables: 344 kB
VmallocTotal: 770040 kB <----------
VmallocUsed: 1348 kB
VmallocChunk: 768504 kB
HugePages_Total: 0
HugePages_Free: 0
Hugepagesize: 4096 kB
/proc/meminfo on a machine with 512M of physical RAM plus 512M swap:
MemTotal: 514260 kB
MemFree: 479068 kB
Buffers: 4880 kB
Cached: 18000 kB
SwapCached: 0 kB
Active: 13264 kB
Inactive: 12556 kB
HighTotal: 0 kB
HighFree: 0 kB
LowTotal: 514260 kB
LowFree: 479068 kB
SwapTotal: 530128 kB
SwapFree: 530128 kB
Dirty: 8 kB
Writeback: 0 kB
Mapped: 6452 kB
Slab: 5740 kB
CommitLimit: 787256 kB
Committed_AS: 7280 kB
PageTables: 344 kB
VmallocTotal: 507896 kB <----------
VmallocUsed: 1348 kB
VmallocChunk: 506360 kB
HugePages_Total: 0
HugePages_Free: 0
Hugepagesize: 4096 kB
/proc/meminfo on a machine with 768M of physical RAM plus 512M swap:
MemTotal: 774348 kB
MemFree: 739132 kB
Buffers: 4888 kB
Cached: 17992 kB
SwapCached: 0 kB
Active: 13260 kB
Inactive: 12568 kB
HighTotal: 0 kB
HighFree: 0 kB
LowTotal: 774348 kB
LowFree: 739132 kB
SwapTotal: 530128 kB
SwapFree: 530128 kB
Dirty: 228 kB
Writeback: 0 kB
Mapped: 6448 kB
Slab: 5736 kB
CommitLimit: 917300 kB
Committed_AS: 7272 kB
PageTables: 344 kB
VmallocTotal: 245752 kB <----------
VmallocUsed: 1348 kB
VmallocChunk: 244216 kB
HugePages_Total: 0
HugePages_Free: 0
Hugepagesize: 4096 kB
/proc/meminfo on a machine with 1024M of physical RAM plus 512M swap:
MemTotal: 1034444 kB
MemFree: 997764 kB
Buffers: 4876 kB
Cached: 18004 kB
SwapCached: 0 kB
Active: 13260 kB
Inactive: 12544 kB
HighTotal: 131008 kB
HighFree: 108448 kB
LowTotal: 903436 kB
LowFree: 889316 kB
SwapTotal: 530128 kB
SwapFree: 530128 kB
Dirty: 612 kB
Writeback: 0 kB
Mapped: 6436 kB
Slab: 5824 kB
CommitLimit: 1047348 kB
Committed_AS: 7272 kB
PageTables: 344 kB
VmallocTotal: 114680 kB <----------
VmallocUsed: 1348 kB
VmallocChunk: 113144 kB
HugePages_Total: 0
HugePages_Free: 0
Hugepagesize: 4096 kB
Or in summary:
256M RAM --> VmallocTotal: 770040 kB
512M RAM --> VmallocTotal: 507896 kB
768M RAM --> VmallocTotal: 245752 kB
1G RAM --> VmallocTotal: 114680 kB
4G RAM --> VmallocTotal: 114680 kB
6G RAM --> VmallocTotal: 114680 kB
(the 4G and 6G RAM VmallocTotal values were taken from the real machines)
Now the question: Is this behavior normal? Should it not be in reverse -
more RAM equals more space for vmalloc?
With regard to the 'vmalloc' kernel parameter, I was able to boot
normally using vmalloc=192m with 256, 512 and 768M of RAM, but _not_
with 1024M of RAM and above.
With 1024M of RAM (and apparently everything above that), the machine
is unable to boot if the vmalloc parameter is set to a value larger
than the default 128m. It panics with the following:
EXT2-fs: unable to read superblock
isofs_fill_super: bread failed, dev=md0, iso_blknum=16, block=32
XFS: SB read failed
VFS: Cannot open root device "md0" or unknown-block(9,0)
Please append a correct "root=" boot option
Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(9,0)
Question: Is this inability to boot related to the fact that the system
is unable to reserve enough space for vmalloc?
Thanks for your valuable time,
Ranko
Hi Ranko!
On Apr 4, 2005, at 4:36 PM, Ranko Zivojnovic wrote:
> (please do CC replies as I am still not on the list)
>
> As I am kind of pressured to resolve this issue, I've set up a test
> environment using VMWare in order to reproduce the problem and
> (un)fortunately the attempt was successful.
>
> I have noticed a few things that relate the size of the physical RAM
> to the behavior of vmalloc. As I am not sure whether this is by design
> or a bug, could someone please enlighten me:
>
> The strange thing I have seen is that as the physical RAM increases,
> VmallocTotal in /proc/meminfo gets smaller! Is this how it is supposed
> to be?
>
Well, I'm by no means a VM expert (not even a regular kernel hacker),
but it seems to me that the sum of LowTotal and VmallocTotal is rather
constant for the different settings. Alas, I cannot offer an
explanation why this should be, so hopefully a knowledgeable person
will shed some light on this issue.
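Just to spell that out with the numbers from your dumps (LowTotal +
VmallocTotal, in kB):

 256M box:  254580 + 770040 = 1024620
 512M box:  514260 + 507896 = 1022156
 768M box:  774348 + 245752 = 1020100
1024M box:  903436 + 114680 = 1018116

All four sums sit within a few tens of MB of 1 GB (1048576 kB), which
looks like a fixed-size chunk of kernel address space being split
between the two.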
Ciao,
Roland
--
TU Muenchen, Physik-Department E18, James-Franck-Str. 85747 Garching
Telefon 089/289-12592; Telefax 089/289-12570
--
When I am working on a problem I never think about beauty. I think
only how to solve the problem. But when I have finished, if the
solution is not beautiful, I know it is wrong.
-- R. Buckminster Fuller
Ok, I think I've figured it out so I will try and answer my own
questions (the best part is at the end)...
On Mon, 2005-04-04 at 17:36 +0300, Ranko Zivojnovic wrote:
> (please do CC replies as I am still not on the list)
>
> As I am kind of pressured to resolve this issue, I've set up a test
> environment using VMWare in order to reproduce the problem and
> (un)fortunately the attempt was successful.
>
> I have noticed a few things that relate the size of the physical RAM
> to the behavior of vmalloc. As I am not sure whether this is by design
> or a bug, could someone please enlighten me:
>
> The strange thing I have seen is that as the physical RAM increases,
> VmallocTotal in /proc/meminfo gets smaller! Is this how it is supposed
> to be?
>
As the amount of physical RAM grows, more of the kernel's 1GB virtual
address window gets used for the low memory mapping and less is left
for vmalloc - and this happens within the first 1GB of RAM.
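Roughly, the arithmetic behind it (assuming the usual 3G/1G user/kernel
split on i386 - so take this as a sketch, not the kernel's exact
calculation):

  4GB - PAGE_OFFSET (0xC0000000)               = 1024MB kernel virtual space
  - vmalloc area (128MB by default, or vmalloc=)
  - ~8MB guard gap + fixmap/kmap mappings
  ---------------------------------------------
  = what remains for the lowmem direct mapping (capped at ~896MB)

So with 256MB of RAM all of it fits into lowmem and the leftover ~750MB
of the window becomes vmalloc space, while from about 1GB of RAM onwards
lowmem hits its ~896MB ceiling and the vmalloc area stays at roughly the
128MB default (minus the gap and fixed mappings) - which matches the
770040 kB vs. 114680 kB VmallocTotal figures above.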
> Now the question: Is this behavior normal?
I guess it is (nobody has said the opposite).
> Should it not be in reverse -
> more RAM equals more space for vmalloc?
>
It really depends on the setup and the workload - a reasonable default
(128M) has been defined, and you can change it using the vmalloc
parameter - but with _extreme_ care, as it gets really tricky if your
RAM is 1GB and above - read on...
> With regard to the 'vmalloc' kernel parameter, I was able to boot
> normally using vmalloc=192m with 256, 512 and 768M of RAM, but _not_
> with 1024M of RAM and above.
>
> With 1024M of RAM (and apparently everything above that), the machine
> is unable to boot if the vmalloc parameter is set to a value larger
> than the default 128m. It panics with the following:
>
> EXT2-fs: unable to read superblock
> isofs_fill_super: bread failed, dev=md0, iso_blknum=16, block=32
> XFS: SB read failed
> VFS: Cannot open root device "md0" or unknown-block(9,0)
> Please append a correct "root=" boot option
> Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(9,0)
>
And not just that - I have now spotted the actual culprit message
(further up, early in the boot output):
initrd extends beyond end of memory (0x37fef33a > 0x34000000)
disabling initrd
> Question: Is this inability to boot related to the fact that the system
> is unable to reserve enough space for vmalloc?
>
The resolution (or rather workaround) for the above is to _trick_ GRUB
into loading the initrd image into the area below what is _going_ to be
the calculated "end of memory", using the "uppermem" command.
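In GRUB legacy terms that means an entry along these lines (the uppermem
value and the file names below are just an illustration - pick a value
comfortably below the kernel's new end of low memory):

title  Linux 2.6.11.5 (vmalloc=256m)
root   (hd0,0)
uppermem 524288
kernel /boot/vmlinuz-2.6.11.5 ro root=/dev/md0 vmalloc=256m
initrd /boot/initrd-2.6.11.5.img

With "uppermem 524288" GRUB assumes only 512MB of upper memory, so it
loads the initrd below that mark instead of near the real top of RAM -
well under the 0x34000000 boundary from the message above.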
Now:
1. I hope this is the right way around the problem.
2. I hope this is going to help someone.
Best regards,
Ranko