Date: Wed, 21 Mar 2001 16:54:48 -0300 (BRST)
From: Rik van Riel
To: Jan Harkes
Cc: Josh Grebe, linux-kernel@vger.kernel.org, Manfred Spraul
Subject: Re: Question about memory usage in 2.4 vs 2.2
In-Reply-To: <20010321141626.A3621@cs.cmu.edu>

On Wed, 21 Mar 2001, Jan Harkes wrote:
> I've been thinking about this a bit and one possible solution would be
> to significantly lower the cost of prune_icache by removing the
> sync_all_inodes and only let it prune inodes that do not have any
> mappings associated with them. Then it might become possible to call
> it more frequently, like every time we hit do_try_free_pages.

Marcelo and I have been looking at this issue too, and have come to
almost the same conclusion as you, with one small change.

We -need- a way to trigger the writeout of dirty inodes under memory
pressure. Imagine doing 'chown -R' on a huge tree on a low-memory box;
you'd end up with zillions of dirty inodes in memory with no way to
free them.

Now if prune_icache wrote all the dirty inodes without data pages to
disk automatically, we'd have this issue fixed and we'd be able to
make a much more efficient prune_icache.

Anybody willing to give it a shot?

regards,

Rik
--
Virtual memory is like a game you can't win;
However, without VM there's truly nothing to lose...

http://www.surriel.com/
http://www.conectiva.com/	http://distro.conectiva.com.br/
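
To get a feel for the policy being proposed, here is a toy user-space
model, not kernel code: the struct, fields and helper names below are
made up for illustration. It only shows the per-inode decision that the
thread is about: skip inodes that still have data pages, write out dirty
unmapped inodes so a later pass can free them, and free clean unmapped
inodes right away.

/*
 * Toy model of the proposed prune policy.  "fake_inode", "writeout"
 * and "prune_unused" are invented names; they are not the kernel's.
 */
#include <stdio.h>
#include <stdlib.h>

struct fake_inode {
	int ino;
	int dirty;	/* inode metadata needs writeout */
	int nrpages;	/* data pages still in the page cache */
	struct fake_inode *next;
};

/* stand-in for starting inode writeback */
static void writeout(struct fake_inode *inode)
{
	printf("writing inode %d to disk\n", inode->ino);
	inode->dirty = 0;
}

/* prune up to 'goal' unused inodes; dirty unmapped inodes get written
 * instead of skipped, so memory pressure can eventually free them too */
static struct fake_inode *prune_unused(struct fake_inode *head, int goal)
{
	struct fake_inode **link = &head;

	while (*link && goal > 0) {
		struct fake_inode *inode = *link;

		if (inode->nrpages) {		/* still has cached data */
			link = &inode->next;
			continue;
		}
		if (inode->dirty) {		/* can't free yet: write it */
			writeout(inode);
			link = &inode->next;
			continue;
		}
		/* clean and unmapped: unlink and free */
		*link = inode->next;
		printf("freeing inode %d\n", inode->ino);
		free(inode);
		goal--;
	}
	return head;
}

int main(void)
{
	struct fake_inode *head = NULL;
	int i;

	/* build a small unused-inode list: every third inode is dirty,
	 * every fourth one still has data pages */
	for (i = 1; i <= 8; i++) {
		struct fake_inode *inode = calloc(1, sizeof(*inode));
		inode->ino = i;
		inode->dirty = (i % 3 == 0);
		inode->nrpages = (i % 4 == 0);
		inode->next = head;
		head = inode;
	}

	head = prune_unused(head, 8);	/* dirty inodes only get written */
	head = prune_unused(head, 8);	/* second pass can free them */
	return 0;
}

In the kernel the equivalent checks would presumably be made on
inode->i_data.nrpages and the I_DIRTY state bits, with the writeout going
through the superblock's write_inode path; the loop above is only meant
to show the policy, not the locking or list handling of fs/inode.c.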