Date: Wed, 7 Aug 2002 17:54:50 -0300 (BRT)
From: Rik van Riel
To: Daniel Phillips
cc: Andrew Morton, linux-kernel@vger.kernel.org
Subject: Re: [PATCH] Rmap speedup

On Wed, 7 Aug 2002, Daniel Phillips wrote:

> > It may be a net loss for high sharing levels.  Dunno.
>
> High sharing levels are exactly where the swap-in problem is going to
> rear its ugly head.

If the swap-in problem exists.  I can see it being triggered
artificially because we still unmap too many pages in the pageout
path.  But if we fix shrink_cache() so that, under continuous memory
pressure, it unmaps only the end of the inactive list where we're
actually reclaiming pages instead of the whole list, then many of the
minor page faults we're seeing under such loads may just disappear,
to say nothing of the superfluous IO being scheduled today.

regards,

Rik
--
Bravely reimplemented by the knights who say "NIH".

http://www.surriel.com/		http://distro.conectiva.com/