Date: Tue, 22 Sep 2009 19:56:08 +0100
From: Mel Gorman
To: Pekka Enberg
Cc: Nick Piggin, Christoph Lameter, heiko.carstens@de.ibm.com,
	sachinp@in.ibm.com, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	Tejun Heo, Benjamin Herrenschmidt
Subject: Re: [PATCH 2/4] slqb: Record what node is local to a kmem_cache_cpu
Message-ID: <20090922185608.GH25965@csn.ul.ie>
References: <1253624054-10882-1-git-send-email-mel@csn.ul.ie>
	<1253624054-10882-3-git-send-email-mel@csn.ul.ie>
	<84144f020909220638l79329905sf9a35286130e88d0@mail.gmail.com>
	<20090922135453.GF25965@csn.ul.ie>
	<84144f020909221154x820b287r2996480225692fad@mail.gmail.com>
In-Reply-To: <84144f020909221154x820b287r2996480225692fad@mail.gmail.com>

On Tue, Sep 22, 2009 at 09:54:33PM +0300, Pekka Enberg wrote:
> Hi Mel,
>
> On Tue, Sep 22, 2009 at 4:54 PM, Mel Gorman wrote:
> >> I don't understand how the memory leak happens from the above
> >> description (or reading the code). page_to_nid() returns some crazy
> >> value at free time?
> >
> > Nope, it isn't a leak as such; the allocator knows where the memory
> > is. The problem is that it always frees remote, but on allocation it
> > sees that the per-cpu list is empty and calls the page allocator
> > again. The remote lists just grow.
> >
> >> The remote list isn't drained properly?
> >
> > That is another way of looking at it. When the remote lists reach a
> > watermark, they should drain. However, it's worth pointing out that
> > if it's repaired in this fashion, the performance of SLQB will
> > suffer, as it will never reuse the local list of pages and will
> > instead always get cold pages from the allocator.
>
> I worry about setting c->local_nid to the node of the allocated struct
> kmem_cache_cpu. It seems like an arbitrary policy decision that's not
> necessarily the best option, and I'm not totally convinced it's correct
> when cpusets are configured. SLUB seems to do the sane thing here by
> using page allocator fallback (which respects cpusets AFAICT) and
> recycling one slab at a time.
>
> Can I persuade you into sending me a patch that fixes remote list
> draining to get things working on PPC? I'd much rather wait for Nick's
> input on the allocation policy and performance.

It'll be at least next week before I can revisit this again. I'm afraid
I'm going offline from tomorrow until Tuesday.

-- 
Mel Gorman
Part-time PhD Student                          Linux Technology Center
University of Limerick                         IBM Dublin Software Lab
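
[To make the behaviour under discussion concrete, below is a minimal
userspace model of the remote-list problem and the watermark drain. It
is a sketch only, not SLQB's actual code: struct cpu_cache,
REMOTE_WATERMARK, drain_remote() and the other names are hypothetical
stand-ins, and the real allocator manages pages on per-node lists
rather than malloc()ed objects.]

/*
 * Userspace model of the remote-list behaviour discussed in this
 * thread. NOT SLQB's actual code; all names are hypothetical.
 */
#include <stdio.h>
#include <stdlib.h>

#define REMOTE_WATERMARK 32	/* hypothetical drain threshold */

struct object {
	struct object *next;
};

struct cpu_cache {
	int local_nid;		/* node this CPU treats as local */
	struct object *local;	/* freelist of objects from local_nid */
	struct object *remote;	/* objects freed from other nodes */
	unsigned int nr_remote;
};

/* Stand-in for falling back to the page allocator (a cold page). */
static struct object *page_allocator_alloc(void)
{
	return malloc(sizeof(struct object));
}

/*
 * The fix under discussion: once the remote list crosses a watermark,
 * hand the memory back so the list's growth is bounded. The cost Mel
 * points out is visible here too: drained objects are not reused, so
 * the next allocation falls through to page_allocator_alloc() again.
 */
static void drain_remote(struct cpu_cache *c)
{
	while (c->remote) {
		struct object *obj = c->remote;

		c->remote = obj->next;
		free(obj);
	}
	c->nr_remote = 0;
}

static void cache_free(struct cpu_cache *c, struct object *obj, int obj_nid)
{
	if (obj_nid == c->local_nid) {
		obj->next = c->local;
		c->local = obj;
	} else {
		/* Without the drain below, every free lands here and
		 * nr_remote grows without bound: the "leak" that is
		 * not really a leak. */
		obj->next = c->remote;
		c->remote = obj;
		if (++c->nr_remote >= REMOTE_WATERMARK)
			drain_remote(c);
	}
}

static struct object *cache_alloc(struct cpu_cache *c)
{
	if (c->local) {
		struct object *obj = c->local;

		c->local = obj->next;
		return obj;
	}
	/* Local list empty: in the broken case this path is taken on
	 * every allocation, because frees always land remote. */
	return page_allocator_alloc();
}

int main(void)
{
	struct cpu_cache c = { .local_nid = 0 };
	int i;

	/* Allocate on a CPU whose local node is 0, but free every
	 * object as if page_to_nid() said node 1: the kind of pattern
	 * described for the PPC machine in this thread. */
	for (i = 0; i < 1000; i++)
		cache_free(&c, cache_alloc(&c), 1);

	printf("remote objects still queued: %u\n", c.nr_remote);
	return 0;
}

[Running this shows both halves of the thread's point: without
drain_remote() the remote count grows monotonically, and even with it,
allocations never hit the local freelist, which is the cold-page cost
the reply warns about.]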