Date: Tue, 30 Nov 2010 23:01:52 +0900
From: Minchan Kim
To: Mel Gorman
Cc: KOSAKI Motohiro, Andrew Morton, linux-mm, LKML, Ben Gamari, Wu Fengguang, Johannes Weiner, Nick Piggin
Subject: Re: [PATCH 2/3] Reclaim invalidated page ASAP
Message-ID: <20101130140152.GA1528@barrios-desktop>
In-Reply-To: <20101130091822.GJ13268@csn.ul.ie>

On Tue, Nov 30, 2010 at 09:18:22AM +0000, Mel Gorman wrote:
> On Tue, Nov 30, 2010 at 10:10:20AM +0900, KOSAKI Motohiro wrote:
> > > invalidate_mapping_pages is a very big hint to the reclaimer:
> > > it means the user doesn't want to use those pages any more. So,
> > > to prevent eviction of working-set pages, this patch moves an
> > > invalidated page to the tail of the inactive list by setting
> > > PG_reclaim.
> > >
> > > Please remember that pages on the inactive list are part of the
> > > working set, just like those on the active list.
> > > If we don't move such pages to the inactive list's tail, pages
> > > near the tail of the inactive list can be evicted even though we
> > > have a strong clue that certain other pages are useless. That is
> > > a bad outcome.
> > >
> > > Currently PG_readahead and PG_reclaim share the same flag bit.
> > > Commit fe3cba17 added ClearPageReclaim to clear_page_dirty_for_io
> > > to prevent fast reclaim of readahead-marked pages.
> > >
> > > In this series, PG_reclaim is used for invalidated pages, too.
> > > If the VM finds that a page is invalidated and dirty, it sets
> > > PG_reclaim so the page is reclaimed as soon as possible. But when
> > > the dirty page is later written back, clear_page_dirty_for_io
> > > clears PG_reclaim unconditionally, which defeats this series'
> > > goal.
> > >
> > > I think it's okay to clear PG_readahead when the page is dirtied,
> > > not at writeback time. So this patch moves ClearPageReadahead.
> > >
> > > Signed-off-by: Minchan Kim
> > > Acked-by: Rik van Riel
> > > Cc: Wu Fengguang
> > > Cc: KOSAKI Motohiro
> > > Cc: Johannes Weiner
> > > Cc: Nick Piggin
> >
> > I still dislike this one. I doubt this trick brings much benefit in
> > real-world workloads.
> >
>
> I would agree, except that, as said elsewhere, it's a chicken-and-egg
> problem. We don't have a real-world test because fadvise is not useful
> in its current iteration. I'm hoping that there will be a test
> comparing
>
> rsync on vanilla kernel
> rsync on patched kernel
> rsync+patch on vanilla kernel
> rsync+patch on patched kernel
>
> Are the results of such a test likely to happen?

Ben, could you get the rsync execution time (user/sys) and the output
of 'cat /proc/vmstat' before and after? If Ben is busy, I will try to
get the data myself, but I will need enough time.

I expect rsync+patch on the patched kernel to show fewer allocstall
and pgscan events, and hence a faster execution time.
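For reference, the tail-move described in the patch description can be
sketched as a userspace toy model. This is only an illustration of the
idea, not kernel code: the flag values, struct layout, and the names
deactivate_invalidated_page, lru_add_tail, etc. are made up here; the
kernel does the equivalent with list_move_tail() on page->lru.

```c
#include <stddef.h>

/* Toy page flags; values are illustrative, not the kernel's. */
enum { PG_DIRTY = 1 << 0, PG_RECLAIM = 1 << 1 };

struct page {
	int flags;
	struct page *prev, *next;
};

/* Circular list with a sentinel: head.next is the front (most
 * recently added side), head.prev is the tail, where reclaim scans. */
struct lru {
	struct page head;
};

static void lru_init(struct lru *l)
{
	l->head.next = l->head.prev = &l->head;
}

static void lru_del(struct page *p)
{
	p->prev->next = p->next;
	p->next->prev = p->prev;
}

static void lru_add_tail(struct lru *l, struct page *p)
{
	p->prev = l->head.prev;
	p->next = &l->head;
	l->head.prev->next = p;
	l->head.prev = p;
}

/* Invalidation hint: tag a dirty invalidated page with PG_RECLAIM and
 * move it to the tail of the inactive list, so reclaim finds it first
 * instead of evicting unrelated working-set pages near the tail. */
static void deactivate_invalidated_page(struct lru *inactive, struct page *p)
{
	if (p->flags & PG_DIRTY)
		p->flags |= PG_RECLAIM;
	lru_del(p);
	lru_add_tail(inactive, p);
}
```

With this model, a dirty invalidated page that sat at the front of the
inactive list ends up at the tail with PG_RECLAIM set, while the pages
that were already near the tail are pushed away from the reclaim end.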
>
> --
> Mel Gorman
> Part-time Phd Student                          Linux Technology Center
> University of Limerick                         IBM Dublin Software Lab

--
Kind regards,
Minchan Kim