Subject: Re: [rfc] lru_add_drain_all() vs isolation
From: Lee Schermerhorn
To: Minchan Kim
Cc: KOSAKI Motohiro, Christoph Lameter, Peter Zijlstra, Mike Galbraith,
    Ingo Molnar, linux-mm, Oleg Nesterov, lkml
Date: Wed, 09 Sep 2009 12:18:23 -0400
Message-Id: <1252513103.4102.14.camel@useless.americas.hpqcorp.net>
In-Reply-To: <28c262360909090839j626ff818of930cf13a6185123@mail.gmail.com>
References: <20090909131945.0CF5.A69D9226@jp.fujitsu.com>
            <28c262360909090839j626ff818of930cf13a6185123@mail.gmail.com>

On Thu, 2009-09-10 at 00:39 +0900, Minchan Kim wrote:
> On Wed, Sep 9, 2009 at 1:27 PM, KOSAKI Motohiro wrote:
> >> The usefulness of a scheme like this requires:
> >>
> >> 1. There are cpus that continually execute user space code
> >>    without system interaction.
> >>
> >> 2. There are repeated VM activities that require page isolation /
> >>    migration.
> >>
> >> The first page isolation activity will then clear the lru caches of the
> >> processes doing number crunching in user space (and therefore the first
> >> isolation will still interrupt). The second and following isolations will
> >> then no longer interrupt the processes.
> >>
> >> 2. is rare. So the question is if the additional code in the LRU
> >> handling can be justified.
> > If lru handling is not time sensitive, then yes.
> >
> > Christoph, I'd like to discuss a related (and almost unrelated) thing.
> > I think page migration doesn't need lru_add_drain_all() to be
> > synchronous, because page migration retries up to 10 times.
> >
> > An asynchronous lru_add_drain_all() then means:
> >
> >  - if the system isn't under heavy pressure, the retry succeeds.
> >  - if the system is under heavy pressure, or an RT thread is running
> >    a busy loop, the retry fails.
> >
> > I don't think this is problematic behavior. Also, mlock can use an
> > asynchronous lru drain.
>
> I think, more exactly, we don't have to drain lru pages for mlocking.
> Mlocked pages will go onto the unevictable lru via try_to_unmap when
> lru shrinking happens.
> How about removing the draining in the mlock case?
>
> > What do you think?

Remember how the code works:  __mlock_vma_pages_range() loops calling
get_user_pages() to fault in batches of 16 pages and return the page
pointers for mlocking.  Mlocking now requires isolation from the lru.
If you don't drain after each call to get_user_pages(), up to a
pagevec's worth of pages [~14] will likely still be in the pagevec and
won't be isolatable/mlockable.  We can end up with most of the pages
still on the normal lru lists.

If we want to move to an almost exclusively lazy culling of mlocked
pages to the unevictable list, then we can remove the drain.  If we
want to be more proactive in culling the unevictable pages as we
populate the vma, we'll want to keep the drain.

Lee