2006-12-03 07:08:35

by Andrew Morton

Subject: Re: la la la la ... swappiness

On Sun, 3 Dec 2006 00:16:38 -0600 "Aucoin" <[email protected]> wrote:
> I set swappiness to zero and it doesn't do what I want!
>
> I have a system that runs as a Linux based data server 24x7 and occasionally
> I need to apply an update or patch. It's a BIIIG patch to the tune of
> several hundred megabytes, let's say 600MB for a good round number. The
> server software itself runs on very tight memory boundaries, I've
> preallocated a large chunk of memory that is shared amongst several
> processes as a form of application cache, there is barely 15% spare memory
> floating around.
>
> The update is delivered to the server as a tar file. In order to minimize
> down time I untar this update and verify the contents landed correctly
> before switching over to the updated software.
>
> The problem is, when I attempt to untar the payload, disk I/O starts caching,
> the inactive page count reels wildly out of control, the system starts
> swapping, OOM fires and there goes my 4 9's uptime. My system just suffered
> a catastrophic failure because I can't control pagecache due to disk I/O.

kernel version?

> I need a pagecache throttle, what do you suggest?

Don't set swappiness to zero... Leaving it at the default should avoid
the oom-killer.
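
The default is 60, e.g.:

echo 60 > /proc/sys/vm/swappiness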


2006-12-03 15:40:56

by Aucoin

Subject: RE: la la la la ... swappiness


Thanks for the reply! I'll buy one of your books!

2.6.16.28 SMP

The application is an "embedded", headless system and we've pretty much laid
memory out the way we want it; the only rogue player is the tar update
process. A little bit of swapping is fine, but enough swapping to irritate
OOM is not desirable. Yes, the swap is only 500MB, but this is a
purpose-built system; there are no random user apps started and stopped, so
absolutely nothing swaps until the update process runs.

Here's meminfo from an idle system; on a loaded system the machine locks up
now, since I've disabled OOM trying to prevent the imminent crash. I got
desperate and not only set swappiness to zero but also tried setting the
dirty ratios down as low as 1, the centisecs as low as 1, and cache
pressure as high as 9999. I'm thrashing and running out of dials to turn.
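
For reference, the writes in question look roughly like this (the two
centisecs knobs being dirty_writeback_centisecs and dirty_expire_centisecs):

echo 0 > /proc/sys/vm/swappiness
echo 1 > /proc/sys/vm/dirty_ratio
echo 1 > /proc/sys/vm/dirty_background_ratio
echo 1 > /proc/sys/vm/dirty_writeback_centisecs
echo 1 > /proc/sys/vm/dirty_expire_centisecs
echo 9999 > /proc/sys/vm/vfs_cache_pressure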

With the ridiculous settings above, dirty pages porpoise between 0-20K; with
more reasonable settings they porpoise between 10-40K, but it seems to be the
inactive page count that is killing me.

before tar extraction
MemTotal: 2075152 kB
MemFree: 502916 kB
Buffers: 2272 kB
Cached: 7180 kB
SwapCached: 0 kB
Active: 118792 kB
Inactive: 1648 kB
HighTotal: 1179392 kB
HighFree: 3040 kB
LowTotal: 895760 kB
LowFree: 499876 kB
SwapTotal: 524276 kB
SwapFree: 524276 kB
Dirty: 0 kB
Writeback: 0 kB
Mapped: 116720 kB
Slab: 27956 kB
CommitLimit: 557376 kB
Committed_AS: 903912 kB
PageTables: 1340 kB
VmallocTotal: 114680 kB
VmallocUsed: 1000 kB
VmallocChunk: 113584 kB
HugePages_Total: 345
HugePages_Free: 0
Hugepagesize: 4096 kB

during tar extraction ... inactive pages reach levels as high as ~375000
MemTotal: 2075152 kB
MemFree: 256316 kB
Buffers: 2944 kB
Cached: 247228 kB
SwapCached: 0 kB
Active: 159652 kB
Inactive: 201608 kB
HighTotal: 1179392 kB
HighFree: 1652 kB
LowTotal: 895760 kB
LowFree: 254664 kB
SwapTotal: 524276 kB
SwapFree: 523932 kB
Dirty: 16068 kB
Writeback: 0 kB
Mapped: 116952 kB
Slab: 34864 kB
CommitLimit: 557376 kB
Committed_AS: 904196 kB
PageTables: 1352 kB
VmallocTotal: 114680 kB
VmallocUsed: 1000 kB
VmallocChunk: 113584 kB
HugePages_Total: 345
HugePages_Free: 0
Hugepagesize: 4096 kB

even after the tar has been complete for a couple of minutes.
MemTotal: 2075152 kB
MemFree: 169848 kB
Buffers: 4360 kB
Cached: 334824 kB
SwapCached: 0 kB
Active: 178692 kB
Inactive: 271452 kB
HighTotal: 1179392 kB
HighFree: 1652 kB
LowTotal: 895760 kB
LowFree: 168196 kB
SwapTotal: 524276 kB
SwapFree: 523932 kB
Dirty: 0 kB
Writeback: 0 kB
Mapped: 116716 kB
Slab: 31868 kB
CommitLimit: 557376 kB
Committed_AS: 903908 kB
PageTables: 1340 kB
VmallocTotal: 114680 kB
VmallocUsed: 1000 kB
VmallocChunk: 113584 kB
HugePages_Total: 345
HugePages_Free: 0
Hugepagesize: 4096 kB

-----Original Message-----
From: Andrew Morton [mailto:[email protected]]
Sent: Sunday, December 03, 2006 2:09 AM
To: [email protected]
Cc: [email protected]; [email protected]; [email protected]
Subject: Re: la la la la ... swappiness

On Sun, 3 Dec 2006 00:16:38 -0600 "Aucoin" <[email protected]> wrote:
> I set swappiness to zero and it doesn't do what I want!
>
> I have a system that runs as a Linux based data server 24x7 and occasionally
> I need to apply an update or patch. It's a BIIIG patch to the tune of
> several hundred megabytes, let's say 600MB for a good round number. The
> server software itself runs on very tight memory boundaries, I've
> preallocated a large chunk of memory that is shared amongst several
> processes as a form of application cache, there is barely 15% spare memory
> floating around.
>
> The update is delivered to the server as a tar file. In order to minimize
> down time I untar this update and verify the contents landed correctly
> before switching over to the updated software.
>
> The problem is, when I attempt to untar the payload, disk I/O starts caching,
> the inactive page count reels wildly out of control, the system starts
> swapping, OOM fires and there goes my 4 9's uptime. My system just suffered
> a catastrophic failure because I can't control pagecache due to disk I/O.

kernel version?

> I need a pagecache throttle, what do you suggest?

Don't set swappiness to zero... Leaving it at the default should avoid
the oom-killer.


2006-12-03 20:46:36

by Tim Schmielau

Subject: RE: la la la la ... swappiness

On Sun, 3 Dec 2006, Aucoin wrote:

> during tar extraction ... inactive pages reach levels as high as ~375000

So why do you want the system to swap _less_? You need to find some free
memory for the additional processes to run in, and you have lots of
inactive pages, so I think you want to swap out _more_ pages.

I'd suggest temporarily adding a swapfile before you update your system.
This can even help bring your memory use back to its prior state if you
do it like this:
- swapon additional swapfile
- update your database software
- swapoff swap partition
- swapon swap partition
- swapoff additional swapfile
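
Concretely, something like this (file size and paths illustrative):

dd if=/dev/zero of=/tmp/extra-swap bs=1M count=1024
mkswap /tmp/extra-swap
swapon /tmp/extra-swap
# ... perform the update ...
swapoff /dev/<swap-partition>   # your normal swap device
swapon /dev/<swap-partition>
swapoff /tmp/extra-swap
rm /tmp/extra-swap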

Tim

2006-12-03 23:56:40

by Aucoin

Subject: RE: la la la la ... swappiness

We want it to swap less for this particular operation because it is low
priority compared to the rest of what's going on inside the box.

We've considered both artificially manipulating swap on the fly similar to
your suggestion as well as a parallel thread that pumps a 3 into drop_caches
every few seconds while the update is running, but these seem too much like
hacks for our liking. Mind you, if we don't have a choice we'll do what we
need to get the job done but there's a nagging voice in our conscience that
says keep looking for a more elegant solution and work *with* the kernel
rather than working against it or trying to trick it into doing what we
want.
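
(The drop_caches variant would be nothing fancier than, say,

while :; do echo 3 > /proc/sys/vm/drop_caches; sleep 5; done

run for the duration of the update; the 5-second interval is illustrative.)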

We've already disabled OOM so we can at least keep our testing alive while
searching for a more elegant solution. Although we want to avoid swap in
this particular instance for this particular reason, in our hearts we agree
with Andrew that swap can be your friend and get you out of a jam once in a
while. Even more, we'd like to leave OOM active if we can because we want to
be told when somebody's not being a good memory citizen.

Some background, what we've done is carve up a huge chunk of memory that is
shared between three resident processes as write cache for a proprietary
block system layout that is part of a scalable storage architecture
currently capable of RAID 0, 1, 5 (soon 6) virtualized across multiple
chassis's, essentially treating each machine as a "disk" and providing
multipath I/O to multiple iSCSI targets as part of a grid/array storage
solution. Whew! We also have a version that leverages a battery backed write
cache for higher performance at an additional cost. This software is
installable on any commodity platform with 4-N disks supported by Linux,
I've even put it on an Optiplex with 4 simulated disks. Yawn ... yet another
iSCSI storage solution, but this one scales linearly in capacity as well as
performance. As such, we have no user level apps on the boxes and precious
little disk to spare for additional swap so our version of the swap
manipulation solution is to turn swap completely off for the duration of the
update.

I hope I haven't muddied things up even more but basically what we want to
do is find a way to limit the number of cached pages for disk I/O on the OS
filesystem, even if it drastically slows down the untar and verify process
because the disk I/O we really care about is not on any of the OS
partitions.

Louis Aucoin

-----Original Message-----
From: Tim Schmielau [mailto:[email protected]]
Sent: Sunday, December 03, 2006 2:47 PM
To: Aucoin
Cc: 'Andrew Morton'; [email protected]; [email protected];
[email protected]
Subject: RE: la la la la ... swappiness

On Sun, 3 Dec 2006, Aucoin wrote:

> during tar extraction ... inactive pages reach levels as high as ~375000

So why do you want the system to swap _less_? You need to find some free
memory for the additional processes to run in, and you have lots of
inactive pages, so I think you want to swap out _more_ pages.

I'd suggest temporarily adding a swapfile before you update your system.
This can even help bring your memory use back to its prior state if you
do it like this:
- swapon additional swapfile
- update your database software
- swapoff swap partition
- swapon swap partition
- swapoff additional swapfile

Tim


2006-12-04 04:56:59

by Andrew Morton

Subject: Re: la la la la ... swappiness

On Sun, 3 Dec 2006 17:56:30 -0600
"Aucoin" <[email protected]> wrote:

> I hope I haven't muddied things up even more but basically what we want to
> do is find a way to limit the number of cached pages for disk I/O on the OS
> filesystem, even if it drastically slows down the untar and verify process
> because the disk I/O we really care about is not on any of the OS
> partitions.

Try mounting that fs with `-o sync'.
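
For example (mountpoint illustrative):

mount -o remount,sync /some/fs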

2006-12-04 05:13:14

by Linus Torvalds

Subject: Re: la la la la ... swappiness



On Sun, 3 Dec 2006, Andrew Morton wrote:

> On Sun, 3 Dec 2006 17:56:30 -0600
> "Aucoin" <[email protected]> wrote:
>
> > I hope I haven't muddied things up even more but basically what we want to
> > do is find a way to limit the number of cached pages for disk I/O on the OS
> > filesystem, even if it drastically slows down the untar and verify process
> > because the disk I/O we really care about is not on any of the OS
> > partitions.
>
> Try mounting that fs with `-o sync'.

Wouldn't it be much nicer to just lower the dirty-page limit?

echo 1 > /proc/sys/vm/dirty_background_ratio
echo 2 > /proc/sys/vm/dirty_ratio

or something. Which we already discussed in another thread and almost
already decided we should lower the values for big-mem machines..

Hmm?

Linus

2006-12-04 10:44:20

by Nick Piggin

Subject: Re: la la la la ... swappiness

Aucoin wrote:
> We want it to swap less for this particular operation because it is low
> priority compared to the rest of what's going on inside the box.
>
> We've considered both artificially manipulating swap on the fly similar to
> your suggestion as well as a parallel thread that pumps a 3 into drop_caches
> every few seconds while the update is running, but these seem too much like
> hacks for our liking. Mind you, if we don't have a choice we'll do what we
> need to get the job done but there's a nagging voice in our conscience that
> says keep looking for a more elegant solution and work *with* the kernel
> rather than working against it or trying to trick it into doing what we
> want.
>
> We've already disabled OOM so we can at least keep our testing alive while
> searching for a more elegant solution. Although we want to avoid swap in
> this particular instance for this particular reason, in our hearts we agree
> with Andrew that swap can be your friend and get you out of a jam once in a
> while. Even more, we'd like to leave OOM active if we can because we want to
> be told when somebody's not being a good memory citizen.
>
> Some background, what we've done is carve up a huge chunk of memory that is
> shared between three resident processes as write cache for a proprietary
> block system layout that is part of a scalable storage architecture
> currently capable of RAID 0, 1, 5 (soon 6) virtualized across multiple
> chassis, essentially treating each machine as a "disk" and providing
> multipath I/O to multiple iSCSI targets as part of a grid/array storage
> solution. Whew! We also have a version that leverages a battery-backed write
> cache for higher performance at an additional cost. This software is
> installable on any commodity platform with 4-N disks supported by Linux;
> I've even put it on an Optiplex with 4 simulated disks. Yawn ... yet another
> iSCSI storage solution, but this one scales linearly in capacity as well as
> performance. As such, we have no user level apps on the boxes and precious
> little disk to spare for additional swap so our version of the swap
> manipulation solution is to turn swap completely off for the duration of the
> update.
>
> I hope I haven't muddied things up even more but basically what we want to
> do is find a way to limit the number of cached pages for disk I/O on the OS
> filesystem, even if it drastically slows down the untar and verify process
> because the disk I/O we really care about is not on any of the OS
> partitions.

Hi Louis,

We had customers see similar incorrect OOM problems, so I sent in some
patches merged after 2.6.16. Can you upgrade to latest kernel? (otherwise
I guess backporting could be an option for you).

Basically the fixes are more conservative about going OOM if the kernel
thinks it can still reclaim some pages, and also allow the kernel to swap
as a last resort, even if swappiness is set to 0.

Once your OOM problems are solved, I think that page reclaim should do a
reasonable job at evicting the right pages with your simple untar
workload.

Thanks,
Nick

--
SUSE Labs, Novell Inc.

2006-12-04 14:45:51

by Aucoin

Subject: RE: la la la la ... swappiness

> From: Nick Piggin [mailto:[email protected]]
> We had customers see similar incorrect OOM problems, so I sent in some
> patches merged after 2.6.16. Can you upgrade to latest kernel? (otherwise
> I guess backporting could be an option for you).

I will raise the question of moving the kernel forward one more time before
release. Can you point me to the patches you mentioned?


2006-12-04 15:05:02

by Nick Piggin

Subject: Re: la la la la ... swappiness

Aucoin wrote:
>>From: Nick Piggin [mailto:[email protected]]
>>We had customers see similar incorrect OOM problems, so I sent in some
>>patches merged after 2.6.16. Can you upgrade to latest kernel? (otherwise
>>I guess backporting could be an option for you).
>
>
> I will raise the question of moving the kernel forward one more time before
> release. Can you point me to the patches you mentioned?

These two are the main ones:

http://www.kernel.org/git/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commit;h=408d85441cd5a9bd6bc851d677a10c605ed8db5f
http://www.kernel.org/git/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commit;h=4ff1ffb4870b007b86f21e5f27eeb11498c4c077

They shouldn't be too hard to backport.

I'd be interested to know how OOM and page reclaim behave after these patches
(or with a newer kernel).

Thanks,
Nick


2006-12-04 17:04:31

by Christoph Lameter

Subject: Re: la la la la ... swappiness

On Sun, 3 Dec 2006, Linus Torvalds wrote:

> Wouldn't it be much nicer to just lower the dirty-page limit?
>
> echo 1 > /proc/sys/vm/dirty_background_ratio
> echo 2 > /proc/sys/vm/dirty_ratio

Dirty ratio cannot be set to less than 5%. See
mm/page-writeback.c:get_dirty_limits().
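
That is, even after

echo 2 > /proc/sys/vm/dirty_ratio

the value actually used for throttling is clamped back up to 5% internally.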

> or something. Which we already discussed in another thread and almost
> already decided we should lower the values for big-mem machines..

We also have an issue with cpusets. Dirty page throttling does not work in
a cpuset if it is small relative to the total memory on the system, since
we calculate the percentage of the total memory and not a percentage of
the memory the process is allowed to use.

2006-12-05 04:03:07

by Aucoin

Subject: RE: la la la la ... swappiness

> From: Nick Piggin [mailto:[email protected]]
> I'd be interested to know how OOM and page reclaim behave after these
> patches (or with a newer kernel).

We didn't get far today. The various suggestions everyone has for solving
this problem spurred several new discussions inside the office and raised
more questions. At the heart of the problem Andrew is right: heavy-handed
tactics to force limits on page cache don't solve anything and may just
squeeze the problem into new areas. Modifying tar is really a band-aid, not
a solution; there is still a fundamental problem with memory management in
this setup.

Nick suggested the possibility of patching the kernel or upgrading to a
new kernel. Linus made the suggestion of dialing the value of
min_free_kbytes down to match something more in line with what might be
expected in a system with 400MB memory as a way to possibly make VM or at
least a portion of VM simulate a restricted amount of memory. And, I have
seen a couple suggestions about creating a new proc vm file to do things
like tweak max_buffer_heads dynamically.
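
The min_free_kbytes change is just the usual /proc write, value illustrative:

echo 1024 > /proc/sys/vm/min_free_kbytes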

So here's a silly (crazy) question (or two).

If I'm going to go through all the trouble to change the kernel and maybe
create a new proc file how much code would I have to touch to create a proc
file to set something like, let's say, effective memory and have all the vm
calculations use effective memory as the basis for swap and cache
calculations? And can I stop at the vm engine or does it sprawl farther
out? To the untrained mind it seems like this might be the best of both
worlds. It sounds like it would allow an embedded system like ours to set
aside a chunk of ram for a special purpose and designate a sandbox for the
OS. I am, of course, making the *bold* assumption here that a majority of
the vm algorithms are based off something remotely similar to a value which
represents physical memory.

Thoughts? Stones?


2006-12-05 04:46:42

by Linus Torvalds

Subject: RE: la la la la ... swappiness



On Mon, 4 Dec 2006, Aucoin wrote:
>
> If I'm going to go through all the trouble to change the kernel and maybe
> create a new proc file how much code would I have to touch to create a proc
> file to set something like, let's say, effective memory and have all the vm
> calculations use effective memory as the basis for swap and cache
> calculations?

Considering your /proc/meminfo under load:

MemTotal: 2075152 kB
MemFree: 169848 kB
Buffers: 4360 kB
Cached: 334824 kB
SwapCached: 0 kB
Active: 178692 kB
Inactive: 271452 kB
HighTotal: 1179392 kB
HighFree: 3040 kB
LowTotal: 895760 kB
LowFree: 499876 kB
SwapTotal: 524276 kB
SwapFree: 524276 kB
Dirty: 0 kB
Writeback: 0 kB
Mapped: 116720 kB
Slab: 27956 kB
..

I actually suspect you should be _fairly_ close to such a situation
already. In particular, the Active and Inactive lists really are fairly
small and don't contain the big SHM area; they seem to be just the cache
and some (a fairly small amount of) anonymous pages.

The above actually confuses me mightily. I _really_ expected the SHM pages
to show up on the active/inactive lists if it was actually SHM, and they
don't seem to. What am I missing?

Louis, exactly how do you allocate that big 1.6GB shared area?

Linus

2006-12-05 06:41:25

by Aucoin

Subject: RE: la la la la ... swappiness

> From: Linus Torvalds [mailto:[email protected]]
> I actually suspect you should be _fairly_ close to such a situation

To answer your earlier question, we run with min_free_kbytes set around 4k.

> Louis, exactly how do you allocate that big 1.6GB shared area?

Ummm, shm_open, ftruncate, mmap? Is it a trick question? The process
responsible for initially setting up the shared area doesn't stay resident.


2006-12-05 07:02:33

by Nick Piggin

Subject: Re: la la la la ... swappiness

Aucoin wrote:
>>From: Linus Torvalds [mailto:[email protected]]
>>I actually suspect you should be _fairly_ close to such a situation
>
>
> To answer your earlier question, we run with min_free_kbytes set around 4k.
>
>
>>Louis, exactly how do you allocate that big 1.6GB shared area?
>
>
> Ummm, shm_open, ftruncate, mmap? Is it a trick question? The process
> responsible for initially setting up the shared area doesn't stay resident.

The issue is that the shm pages should show up in the active and
inactive lists. But they aren't, and you seem to have about 1542524K
unaccounted for. Weird.

Can you try getting the output of /proc/vmstat as well?

--
SUSE Labs, Novell Inc.

2006-12-05 07:27:17

by Rene Herman

Subject: Re: la la la la ... swappiness

Nick Piggin wrote:

> Aucoin wrote:

>> Ummm, shm_open, ftruncate, mmap? Is it a trick question? The process
>> responsible for initially setting up the shared area doesn't stay
>> resident.
>
> The issue is that the shm pages should show up in the active and
> inactive lists. But they aren't, and you seem to have about 1542524K
> unaccounted for. Weird.
>
> Can you try getting the output of /proc/vmstat as well?

Haven't followed along on this thread, but couldn't help notice the
ftruncate there and some similarity to a problem I once experienced
myself. Is ext3 involved? If so, maybe:

http://mail.nl.linux.org/linux-mm/2002-11/msg00110.html

is still or again being annoying?

Rene.

2006-12-05 13:25:20

by Aucoin

Subject: RE: la la la la ... swappiness

> From: Nick Piggin [mailto:[email protected]]
> Can you try getting the output of /proc/vmstat as well?

Output from vmstat, meminfo and bloatmon below.

vmstat
nr_dirty 0
nr_writeback 0
nr_unstable 0
nr_page_table_pages 361
nr_mapped 33077
nr_slab 8107
pgpgin 1433195947
pgpgout 148795046
pswpin 0
pswpout 1
pgalloc_high 19333815
pgalloc_normal 38376025
pgalloc_dma32 0
pgalloc_dma 1043219
pgfree 58768398
pgactivate 99313
pgdeactivate 61910
pgfault 248450153
pgmajfault 1009
pgrefill_high 18587
pgrefill_normal 129658
pgrefill_dma32 0
pgrefill_dma 6299
pgsteal_high 11954
pgsteal_normal 197484
pgsteal_dma32 0
pgsteal_dma 6176
pgscan_kswapd_high 13035
pgscan_kswapd_normal 205326
pgscan_kswapd_dma32 0
pgscan_kswapd_dma 6369
pgscan_direct_high 0
pgscan_direct_normal 0
pgscan_direct_dma32 0
pgscan_direct_dma 0
pginodesteal 0
slabs_scanned 24576
kswapd_steal 215614
kswapd_inodesteal 0
pageoutrun 3315
allocstall 0
pgrotated 1
nr_bounce 0

meminfo
MemTotal: 2075152 kB
MemFree: 59052 kB
Buffers: 45088 kB
Cached: 401128 kB
SwapCached: 0 kB
Active: 246424 kB
Inactive: 313332 kB
HighTotal: 1179392 kB
HighFree: 1696 kB
LowTotal: 895760 kB
LowFree: 57356 kB
SwapTotal: 524276 kB
SwapFree: 524272 kB
Dirty: 4 kB
Writeback: 0 kB
Mapped: 132252 kB
Slab: 32432 kB
CommitLimit: 855292 kB
Committed_AS: 980948 kB
PageTables: 1432 kB
VmallocTotal: 114680 kB
VmallocUsed: 1000 kB
VmallocChunk: 113584 kB
HugePages_Total: 345
HugePages_Free: 0
Hugepagesize: 4096 kB

bloatmon
skbuff_fclone_cache: 22KB 22KB 100.0
reiser_inode_cache: 0KB 0KB 100.0
posix_timers_cache: 0KB 0KB 100.0
mqueue_inode_cache: 60KB 63KB 95.9
inotify_watch_cache: 0KB 3KB 14.85
inotify_event_cache: 0KB 0KB 100.0
hugetlbfs_inode_cache: 1KB 3KB 27.27
skbuff_head_cache: 2082KB 2100KB 99.14
shmem_inode_cache: 5KB 11KB 48.14
isofs_inode_cache: 0KB 0KB 100.0
sock_inode_cache: 21KB 26KB 82.85
size-131072(DMA): 0KB 0KB 100.0
request_sock_TCP: 0KB 0KB 100.0
proc_inode_cache: 18KB 38KB 48.18
ext3_inode_cache: 314KB 375KB 83.85
ext2_inode_cache: 11KB 30KB 37.50
tcp_bind_bucket: 0KB 3KB 3.94
sysfs_dir_cache: 85KB 86KB 100.0
size-65536(DMA): 0KB 0KB 100.0
size-32768(DMA): 0KB 0KB 100.0
size-16384(DMA): 0KB 0KB 100.0
scsi_io_context: 0KB 0KB 100.0


2006-12-05 13:27:34

by Aucoin

Subject: RE: la la la la ... swappiness

> From: Rene Herman [mailto:[email protected]]
> ftruncate there and some similarity to a problem I once experienced

I can't honestly say I completely grasp the fundamentals of the issue you
experienced, but we are using ext3 with data=journal.


2006-12-05 13:50:22

by Rene Herman

Subject: Re: la la la la ... swappiness

Aucoin wrote:

>> From: Rene Herman [mailto:[email protected]] ftruncate there
>> and some similarity to a problem I once experienced
>
> I can't honestly say I completely grasp the fundamentals of the issue
> you experienced, but we are using ext3 with data=journal.

Rereading, I see ext3 isn't involved at all, but perhaps the ftruncate
does something similar here as it did on ext3? Andrew? It's probably
best to ignore me; I also never quite understood what the problem on
ext3 was. Just thought I'd share the hunch anyway...

Rene.