Message-ID: <4FA0C473.1000505@kernel.org>
Date: Wed, 02 May 2012 14:21:55 +0900
From: Minchan Kim
To: Johannes Weiner
CC: linux-mm@kvack.org, Rik van Riel, Andrea Arcangeli, Peter Zijlstra,
    Mel Gorman, Andrew Morton, Minchan Kim, Hugh Dickins, KOSAKI Motohiro,
    linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [patch 5/5] mm: refault distance-based file cache sizing
References: <1335861713-4573-1-git-send-email-hannes@cmpxchg.org>
    <1335861713-4573-6-git-send-email-hannes@cmpxchg.org>
    <20120501141330.GA2207@barrios> <20120501153825.GA4837@cmpxchg.org>
In-Reply-To: <20120501153825.GA4837@cmpxchg.org>

On 05/02/2012 12:38 AM, Johannes Weiner wrote:
> On Tue, May 01, 2012 at 11:13:30PM +0900, Minchan Kim wrote:
>> Hi Hannes,
>>
>> On Tue, May 01, 2012 at 10:41:53AM +0200, Johannes Weiner wrote:
>>> To protect frequently used page cache (workingset) from bursts of less
>>> frequently used or one-shot cache, page cache pages are managed on two
>>> linked lists.  The inactive list is where all cache starts out on
>>> fault and ends on reclaim.
>>> Pages that get accessed another time while
>>> on the inactive list get promoted to the active list to protect them
>>> from reclaim.
>>>
>>> Right now we have two main problems.
>>>
>>> One stems from numa allocation decisions and how the page allocator
>>> and kswapd interact.  The two of them can enter into a perfect loop
>>> where kswapd reclaims from the preferred zone of a task, allowing the
>>> task to continuously allocate from that zone.  Or, the node distance
>>> can lead the allocator to do direct zone reclaim to stay in the
>>> preferred zone.  This may be good for locality, but the task has only
>>
>> Understood.
>>
>>> the inactive space of that one zone to get its memory activated.
>>> Forcing the allocator to spread out to lower zones in the right
>>> situation makes the difference between continuous IO to serve the
>>> workingset, or taking the numa cost but serving fully from memory.
>>
>> It's hard to parse your words due to my dumb brain.
>> Could you elaborate on it?
>> It would be good if you explained with an example.
>
> Say your Normal zone is 4G (DMA32 also 4G) and you have 2G of active
> file pages in Normal and DMA32 is full of other stuff.  Now you access
> a new 6G file repeatedly.  First it allocates from Normal (preferred),
> then tries DMA32 (full), wakes up kswapd and retries all zones.  If
> kswapd then frees pages at roughly the same pace as the allocator
> allocates from Normal, kswapd never goes to sleep and evicts pages
> from the 6G file before they can get accessed a second time.  Even
> though the 6G file could fit in memory (4G Normal + 4G DMA32), the
> allocator only uses the 4G Normal zone.
>
> Same applies if you have a load that would fit in the memory of two
> nodes but the node distance leads the allocator to do zone_reclaim()
> and forces the pages to stay in one node, again preventing the load
> from being fully cached in memory, which is much more expensive than
> the foreign node cost.
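To check my understanding of your example, I tried a toy simulation of the two-list behavior (plain Python, not kernel code; the capacities and page numbers are made up): pages fault onto a bounded inactive list, and only a second access while still on it promotes to the active list.

```python
from collections import OrderedDict

def run_workload(inactive_cap, accesses):
    """Two-list page cache sketch: pages fault onto a bounded inactive
    list; a second access while still on it promotes to the active list.
    Returns the number of faults observed."""
    inactive = OrderedDict()          # insertion order = FIFO eviction order
    active = set()
    faults = 0
    for page in accesses:
        if page in active:
            continue                  # workingset hit, no fault
        if page in inactive:
            del inactive[page]
            active.add(page)          # accessed twice: promote
            continue
        faults += 1                   # miss: fault onto the inactive list
        inactive[page] = True
        if len(inactive) > inactive_cap:
            inactive.popitem(last=False)   # reclaim the oldest inactive page
    return faults

accesses = list(range(6)) * 4         # touch a 6-page file four times over

# Confined to one zone with room for only 4 inactive pages, every page is
# reclaimed before its second access: all 24 accesses fault.
print(run_workload(4, accesses))      # -> 24

# Spreading to a lower zone (6 slots) lets the second pass promote
# everything; only the 6 cold faults remain.
print(run_workload(6, accesses))      # -> 6
```

So the confined case faults forever while the spread-out case converges, which matches your 6G-file scenario if I read it right.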
>
>>> up to half of memory, and don't recognize workingset changes that are
>>> bigger than half of memory.
>>
>> Workingset change?
>> You mean if the new workingset is bigger than half of memory and it's like
>> a stream before retouch, we could cache only part of the working set
>> because head pages of the working set would be discarded by tail pages of
>> the working set in the inactive list?
>
> Spot-on.  I called that 'tail-chasing' in my notes :-)  When you are in
> a perpetual loop of evicting pages you will need again in a couple hundred
> page faults.  Those couple hundred page faults are the refault
> distance, and my code is able to detect these loops and increase the
> space available to the inactive list to end them, if possible.

Thanks!  It would be better to add the above explanation to the cover letter.

> This is the whole principle of the series.
>
> If such a loop is recognized in a single zone, the allocator goes for
> lower zones to increase the inactive space.  If such a loop is
> recognized over all allowed zones in the zonelist, the active lists
> are shrunk to increase the inactive space.

--
Kind regards,
Minchan Kim
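P.S. In case it helps other readers, here is how I understand the refault-distance bookkeeping, as a toy Python sketch. The names (`RefaultDetector`, `shadow`) are mine for illustration, not the series' actual data structures: on eviction you remember a global eviction counter, and on refault the counter delta says how many more inactive slots would have kept the page resident.

```python
class RefaultDetector:
    """Toy refault-distance bookkeeping (illustrative names, not the
    kernel's structures): remember the global eviction count when a page
    is evicted; on refault, the delta is the refault distance."""
    def __init__(self):
        self.evictions = 0   # global "eviction clock"
        self.shadow = {}     # evicted page -> clock value at eviction

    def evict(self, page):
        self.shadow[page] = self.evictions
        self.evictions += 1

    def refault_distance(self, page):
        """None if the page left no record; otherwise the number of
        evictions that happened since this page was evicted."""
        when = self.shadow.pop(page, None)
        if when is None:
            return None
        return self.evictions - when

d = RefaultDetector()
for p in ("a", "b", "c"):
    d.evict(p)

# 'a' went out first, so three evictions have passed since it left:
# three more inactive slots would have kept it resident.
print(d.refault_distance("a"))   # -> 3
print(d.refault_distance("b"))   # -> 2
```

If the measured distance is no larger than the inactive space that could still be gained from lower zones or by shrinking the active lists, growing the inactive list by that much ends the tail-chasing loop, which I take to be the decision your series makes.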