Note:
Please CC answers/comments you post to me.
All,
Our team is discussing the best approach to calculating the swap file size
for 2.4.18 and newer kernels on systems with 6 - 8 GB of RAM. Does the
"Double The System RAM" rule still apply?
I'm sorry if I offend anyone by posting this here, I'm just looking for an
authoritative / accurate response.
A quote on the topic from one of our engineers:
"I'd disagree on the Swapfile size. Swapfiles over 2 Gig are insane. If
a host is consuming 2+ Gig of real RAM, and consuming another 2 Gig of
Swap, something is wrong with the apps that are running on that host.
(I can go into a big tirade on how Linux VM works, if anyone wants ;) )"
Thanks for your suggestions!
--
Nick Pavlica
EchoStar Communications
CAS-Engineering
(307)633-5237
On Tue, Aug 12, 2003 at 02:53:18PM -0600, Pavlica, Nick wrote:
> Note:
> Please CC answers/comments you post to me.
>
> All,
> Our team is discussing the best approach to calculating the swap file size
> for 2.4.18 and newer kernels on systems with 6 - 8 GB of RAM. Does the
> "Double The System RAM" rule still apply?
It never did. It's a rule of thumb for when you have no idea what your
swap needs might be, and the box is a general-purpose machine with
"normal amounts of memory" that shouldn't swap at all but should still
have some swap space just in case.
What do you do with your systems? How much total virtual memory will
you need? How much does it cost you to wait an extra hour for the
computers to finish, compared to the cost of the machine/memory you
would need in order to speed up the processing by an hour?
>
> I'm sorry if I offend anyone by posting this here, I'm just looking for an
> authoritative / accurate response.
I'm certainly not authoritative on this issue :)  But I rather doubt
that you will find anyone who will stand up and say "for any purpose
imaginable, you will at all times see perfect performance with
K*{physical memory} of swap" for some fixed constant K.
>
> A quote on the topic from one of our engineers:
>
> "I'd disagree on the Swapfile size. Swapfiles over 2 Gig are insane. If
> a host is consuming 2+ Gig of real RAM, and consuming another 2 Gig of
> Swap, something is wrong with the apps that are running on that host.
> (I can go into a big tirade on how Linux VM works, if anyone wants ;) )"
I once wrote a finite element solver that was optimized for having the
entire working set of data on disk (with only a small amount in RAM at
any given time). That was a design decision - nothing was wrong with
that app.
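The pattern looks something like this toy Python sketch (the file name
and chunk size are invented); the point is that only a small, fixed
window of the data is ever resident:

    # Toy sketch: only a small, fixed-size window of the data set is
    # resident in RAM at any one time.  'matrix.dat' and the chunk
    # size are invented for illustration.
    CHUNK = 64 * 1024 * 1024            # 64 MB resident at a time

    def checksum(path):
        total = 0
        with open(path, 'rb') as f:
            while True:
                block = f.read(CHUNK)   # at most CHUNK bytes in RAM
                if not block:
                    break
                total = (total + sum(block)) % (1 << 32)
        return total

    print(checksum('matrix.dat'))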
Really, you need to understand what your applications are doing. You
need to understand what their needs are. And you need to know what your
needs are. Then, you can choose whether putting in another 32G of RAM is
better (cost/benefit in your specific scenario) than adding 32G of swap.
Think of your memory like a hierarchy:
Registers -> L1 cache -> L2 cache -> L3 cache -> RAM -> Disk
Adding more storage earlier in this sequence *can* give you significant
performance increases. But it may be costly.
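Very roughly, and only as orders of magnitude (real numbers vary a lot
with hardware), the gap between the levels looks like this:

    # Very rough orders of magnitude only - the ratios are the point.
    latency_ns = [
        ('register',  0.5),
        ('L1 cache',  1.0),
        ('L2 cache',  5.0),
        ('RAM',     100.0),
        ('disk',      5e6),             # one seek
    ]
    for level, ns in latency_ns:
        print('%-8s ~%12.1f ns (%8.0fx RAM)' % (level, ns, ns / 100.0))

That last jump is why a machine that is actively paging feels so much
slower than one that merely has idle data parked in swap.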
The only common rule that I can think of in this respect is: OOM is bad!
Make sure that you have enough virtual memory for your applications, and
then some to spare. A slow (swap thrashing) machine is better than a
crashed machine.
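A quick way to see how much total virtual memory a box has to play
with is /proc/meminfo; a small Python sketch (field names as in the
2.4 /proc):

    # Sketch: total virtual memory (RAM + swap) from /proc/meminfo.
    def meminfo():
        info = {}
        with open('/proc/meminfo') as f:
            for line in f:
                key, sep, rest = line.partition(':')
                parts = rest.split()
                if sep and parts and parts[0].isdigit():
                    info[key.strip()] = int(parts[0])   # kB
        return info

    m = meminfo()
    print('RAM:  %6d MB' % (m['MemTotal'] // 1024))
    print('Swap: %6d MB' % (m['SwapTotal'] // 1024))
    print('VM:   %6d MB' % ((m['MemTotal'] + m['SwapTotal']) // 1024))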
The workloads I have are usually a handful of processes in the
few-hundred-megs-of-memory range. For this, I usually make sure that I
have 1-2 GB of swap *more* than I would ever need. A little headroom is
useful when something doesn't run as expected.
Also, having the majority of the used virtual memory in swap (say, a
2 GB RAM plus 16 GB swap machine with 90% of all virtual memory used)
is not necessarily a problem. It depends. Look at "vmstat". If there is
virtually no paging activity, then it's probably not a performance
problem. Buying those extra 16 GB of RAM instead may not help you a
lot. I know people who do chip design, and I've heard stories about
buying extra 'swap drives' simply in order to work around memory leaks
in simulation applications. It's a workaround, and the simulation
application should of course be fixed, but it can work just nicely for
memory leaks where the data that gets swapped out is never again
accessed.
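To check for that, the si and so columns of vmstat (pages swapped in
and out per second) are the ones to watch; a Python sketch of the idea
(column layout as printed by the procps vmstat):

    # Sketch: take six one-second vmstat samples and flag real
    # swap traffic (si/so are pages swapped in/out per second).
    import subprocess

    out = subprocess.run(['vmstat', '1', '6'], capture_output=True,
                         text=True).stdout
    si = so = None
    for line in out.splitlines():
        fields = line.split()
        if 'si' in fields and 'so' in fields:        # header row
            si, so = fields.index('si'), fields.index('so')
        elif si is not None and fields and fields[0].isdigit():
            if int(fields[si]) + int(fields[so]) > 0:
                print('paging: si=%s so=%s' % (fields[si], fields[so]))

Sustained nonzero si/so means the swap is actually being worked, not
just occupied.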
So to summarize: There is no simple rule.
Sorry about that :)
Cheers,
--
................................................................
: [email protected]       : And I see the elder races,         :
:.........................: putrid forms of man                :
: Jakob Østergaard        : See him rise and claim the earth,  :
: OZ9ABN                  : his downfall is at hand.           :
:.........................:............{Konkhra}...............:
> > Our team is discussing the best approach to calculating the swap
> > file size for 2.4.18 and newer kernels on systems with 6 - 8
> > GB of RAM. Does the "Double The System RAM" rule still apply?
>
> It never did. It's a rule of thumb for when you have no idea what
> your swap needs might be, and the box is a general-purpose machine
> with "normal amounts of memory" that shouldn't swap at all but
> should still have some swap space just in case.
Too much swap space can be a bad thing.
I know an ISP who had a runaway process on a webserver, which had 256
MB of RAM and 512 MB of swap. As soon as the process started using a
lot of swap, the machine became almost completely unresponsive, and
somebody ended up driving to the datacentre to reboot it :-).
Just because disk space is cheap doesn't mean it's a good idea to
allocate swap space excessively.
> The only common rule that I can think of in this respect is: OOM is
> bad!
> Make sure that you have enough virtual memory for your applications,
> and then some to spare. A slow (swap thrashing) machine is better
> than a crashed machine.
Not always! In the scenario I described above, the machine didn't
need any swap at all - if it hadn't had any, the runaway process would
have just been killed, and downtime would have been avoided.
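If a machine does need swap for other reasons, a per-process cap gives
much the same protection; here's a rough Python sketch using setrlimit
(the mechanism behind 'ulimit -v'; the 256 MB cap is an invented
number):

    # Sketch: cap the address space so a runaway allocation fails
    # fast instead of dragging the whole box into swap.  The 256 MB
    # cap is a made-up number.
    import resource

    cap = 256 * 1024 * 1024
    resource.setrlimit(resource.RLIMIT_AS, (cap, cap))

    hog = []
    try:
        while True:
            hog.append(bytearray(10 * 1024 * 1024))  # 10 MB per loop
    except MemoryError:
        print('refused after ~%d MB' % (len(hog) * 10))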
> The workloads I have are usually a handful of processes in the
> few-hundred-megs-of-memory range. For this, I usually make sure
> that I have 1-2 GB of swap *more* than I would ever need. A little
> headroom is useful when something doesn't run as expected.
>
> Also, having the majority of the used virtual memory in swap (say, a
> 2 GB RAM plus 16 GB swap machine with 90% of all virtual memory
> used) is not necessarily a problem. It depends. Look at "vmstat".
> If there is virtually no paging activity, then it's probably not a
> performance problem. Buying those extra 16 GB of RAM instead may
> not help you a lot. I know people who do chip design, and I've
> heard stories about buying extra 'swap drives' simply in order to
> work around memory leaks in simulation applications. It's a
> workaround, and the simulation application should of course be
> fixed, but it can work just nicely for memory leaks where the data
> that gets swapped out is never again accessed.
A large number of installations, especially desktop machines, don't
need swap devices at all.
The "twice the physical RAM" rule simply saves the effort of doing a
real assessment of how much swap is needed for any particular task, and
doesn't negatively affect performance too much, or cost too much in
terms of disk space. It works well, and is appropriate as a default
configuration for a Linux distribution, for example. Wherever
possible, though, it's worth doing a real assessment of memory
requirements.
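For what it's worth, the default is trivial to compute; a quick Python
sketch:

    # Sketch: the "twice physical RAM" default for this machine.
    with open('/proc/meminfo') as f:
        ram_kb = next(int(line.split()[1]) for line in f
                      if line.startswith('MemTotal'))
    print('rule-of-thumb swap: %d MB' % (2 * ram_kb // 1024))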
> So to summarize: There is no simple rule.
I totally agree :-).
John.