Date: Mon, 28 Oct 2002 11:53:28 -0800
From: Andrew Morton
To: chrisl@vmware.com
Cc: Andrea Arcangeli, linux-kernel@vger.kernel.org, chrisl@gnuchina.org, Alan Cox
Subject: Re: writepage return value check in vmscan.c

chrisl@vmware.com wrote:
>
> On Fri, Oct 25, 2002 at 11:44:14AM -0700, Andrew Morton wrote:
> > chrisl@vmware.com wrote:
> > >
> > > bigmm -i 3 -t 2 -c 1024
> >
> > That's a nice little box killer you have there.
>
> Thanks. It kills all of our customers' kernels; they don't run
> bleeding-edge kernels at all. It is interesting to see VMware
> serve as a heavy-load stress-test tool. It puts a real-world
> load on the OS, e.g. the load needed to boot a Windows guest.
> You can stack many of them to abuse the OS.

I tested Andrea's latest kernel. It survived, probably because it
left 100 megabytes of lowmem unallocated throughout the test.

> > With mem=4G, running bigmm -i 5 -t 2 -c 1024:
> >
> > 2.4.19: Ran for a few minutes, got slower and slower and
> > eventually stopped. kupdate had taken 30 seconds of CPU and
> > all CPUs were spinning in shrink_cache(). Had to reset.
> >
> > 2.4.20-pre8-ac1: Ran for a minute, froze up for a couple of
> > minutes, then recovered and remained comfortable.
>
> How many instances of bigmm were left? There should be 10 bigmm
> processes before the oom killer kicks in.

Well, they should all be left running: all this memory has file
backing and is easily reclaimable. (Sketches of a stand-in for the
bigmm load, and of the writepage call site the subject line refers
to, are at the end of this mail.)

Umm, yes. There could be bogus oom-killings in the combined-LRU
VMs, but I saw none in testing.

All of which is great fun, but it leaves open the question of what
the heck VMware can do about it. I wish there were a clear answer.

If the customer is running a SuSE/UL kernel, they're presumably OK.

If their kernel comes from kernel.org, they should add Andrea's
patch, which means they get an absolute boatload of stuff they may
not want:

	1223 files changed, 306053 insertions(+), 9655 deletions(-)

but that kernel performs well.

If they're running an RH rmap kernel, they're probably okayish,
although I'd recommend more testing there. If they're running an
RHAS-style kernel, I do not know; it may fail.
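
bigmm itself is a VMware-internal tool, so as a rough, hypothetical
stand-in for the load described above: many processes, each mmap()ing
a large file-backed region and dirtying every page of it in a loop.
NPROC and SIZE_MB are invented knobs, not bigmm's actual -i/-t/-c
flags, and a 4k page size is assumed:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/wait.h>

#define NPROC	10		/* like the "10 bigmm processes" above */
#define SIZE_MB	1024		/* region size per process */

static void stress(void)
{
	size_t size = (size_t)SIZE_MB << 20;
	char path[] = "/tmp/bigmm-XXXXXX";
	int fd = mkstemp(path);
	char *mem;
	size_t i;

	if (fd < 0 || ftruncate(fd, size) < 0) {
		perror("setup");
		exit(1);
	}
	unlink(path);		/* backing file lives until close */

	mem = mmap(NULL, size, PROT_READ | PROT_WRITE,
		   MAP_SHARED, fd, 0);
	if (mem == MAP_FAILED) {
		perror("mmap");
		exit(1);
	}

	for (;;)		/* dirty every page, forever */
		for (i = 0; i < size; i += 4096)
			mem[i]++;
}

int main(void)
{
	int n;

	for (n = 0; n < NPROC; n++)
		if (fork() == 0)
			stress();

	while (wait(NULL) > 0)	/* children never exit; block here */
		;
	return 0;
}

Because the mappings are MAP_SHARED against real files, every dirtied
page has to go out through ->writepage() to be reclaimed, which is
exactly the reclaimable, file-backed pressure described above.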
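
And the call site the subject line refers to, sketched from memory of
2.4's shrink_cache() (simplified; the real vmscan.c differs in the
__GFP_FS test, locking and surrounding logic). The dirty bit is
cleared before ->writepage() is called and the return value is
ignored; the re-dirty on failure shown here is a hypothetical fix,
not what the tree currently does:

	int (*writepage)(struct page *) =
			page->mapping->a_ops->writepage;

	ClearPageDirty(page);
	SetPageLaunder(page);
	page_cache_get(page);
	spin_unlock(&pagemap_lru_lock);

	if (writepage(page)) {
		/*
		 * Hypothetical: ->writepage() failed, so put the
		 * dirty bit back instead of pretending the page
		 * was cleaned.
		 */
		SetPageDirty(page);
	}

	page_cache_release(page);
	spin_lock(&pagemap_lru_lock);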