Date: Wed, 29 Jul 2015 17:45:39 +0300
From: Vladimir Davydov
To: Michel Lespinasse
CC: Michal Hocko, Andrew Morton, Andres Lagar-Cavilla, Minchan Kim, Raghavendra K T, Johannes Weiner, Greg Thelen, David Rientjes, Pavel Emelyanov, Cyrill Gorcunov, Jonathan Corbet, linux-kernel
Subject: Re: [PATCH -mm v9 0/8] idle memory tracking

On Wed, Jul 29, 2015 at 07:12:13AM -0700, Michel Lespinasse wrote:
> On Wed, Jul 29, 2015 at 6:59 AM, Vladimir Davydov wrote:
> >> I guess the primary reason to rely on the pfn rather than the LRU walk,
> >> which would be more targeted (especially for memcg cases), is that we
> >> cannot hold the lru lock for the whole LRU walk and we cannot continue
> >> walking after the lock is dropped. Maybe we can try to address that
> >> instead? I do not think this is easy to achieve, but have you considered
> >> that as an option?
> >
> > Yes, I have, and I've come to the conclusion that it's not doable,
> > because LRU lists can be constantly rotating at an arbitrary rate. If
> > you have an idea in mind how this could be done, please share.
> >
> > Speaking of LRU-vs-PFN walks, iterating over PFNs has its own advantages:
> > - You can spread a walk out in time to avoid CPU bursts.
> > - You are free to parallelize the scanner as you wish to decrease the
> >   scan time.
>
> There is a third way: one could go through every mm in the system and
> scan its page tables. Doing things that way turns out to be generally
> faster than scanning by physical address, because you don't have to go
> through rmap for every page. But you end up needing to take the mmap_sem
> of every mm (in turn) while scanning it, and that degrades quickly under
> memory load, which is exactly when you most need this feature. So scan
> by address is still what we use here.

The page table scan approach has an inherent problem: it ignores unmapped
page cache. If a workload does a lot of read/write or map-access-unmap
operations, we won't be able to even roughly estimate its wss (working
set size).

Thanks,
Vladimir
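For context, the PFN-based scheme under discussion exposes per-page idle
bits to user space as a bitmap; the sketch below shows the offset
arithmetic a user-space scanner would use against such a bitmap. The path
and semantics follow the eventually-merged /sys/kernel/mm/page_idle/bitmap
interface (an array of little-endian 64-bit words, bit N of word W
covering PFN W*64 + N); the helper names are illustrative, and real access
requires root plus kernel support.

```python
IDLE_BITMAP = "/sys/kernel/mm/page_idle/bitmap"  # needs root to open

def bitmap_pos(pfn):
    """Return (byte offset into the bitmap file, bit mask) for a PFN.

    The bitmap is an array of 64-bit words, 8 bytes each; bit N of
    word W corresponds to PFN W * 64 + N.
    """
    word, bit = divmod(pfn, 64)
    return word * 8, 1 << bit

def mark_idle(f, pfn):
    """Set the idle bit for one PFN; f is the bitmap opened 'r+b'.

    Only the bits written as 1 are acted on, so writing a single-bit
    word leaves all other pages in the word untouched.
    """
    off, mask = bitmap_pos(pfn)
    f.seek(off)
    f.write(mask.to_bytes(8, "little"))

def is_idle(f, pfn):
    """True if the page was not referenced since it was marked idle."""
    off, mask = bitmap_pos(pfn)
    f.seek(off)
    word = int.from_bytes(f.read(8), "little")
    return bool(word & mask)
```

A scanner would mark a PFN range idle, wait one sampling interval, and
count the bits still set; because the walk is keyed by PFN, it can be
throttled or split across threads exactly as described above.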