Date: Mon, 6 Jun 2011 23:07:56 +0900
From: Minchan Kim
To: Mel Gorman
Cc: Andrea Arcangeli, Mel Gorman, akpm@linux-foundation.org,
	Ury Stankevich, KOSAKI Motohiro,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [PATCH] mm: compaction: Abort compaction if too many pages are
	isolated and caller is asynchronous
Message-ID: <20110606140756.GD1686@barrios-laptop>
In-Reply-To: <20110606103216.GC5247@suse.de>

On Mon, Jun 06, 2011 at 11:32:16AM +0100, Mel Gorman wrote:
> On Fri, Jun 03, 2011 at 08:07:30PM +0200, Andrea Arcangeli wrote:
> > On Fri, Jun 03, 2011 at 07:37:07PM +0200, Andrea Arcangeli wrote:
> > > On Fri, Jun 03, 2011 at 08:01:44AM +0900, Minchan Kim wrote:
> > > > Do you want
this? (it's almost pseudo-code)
> > >
> > > Yes, that's a good idea, so we at least take into account if we isolated
> > > something big; it's pointless to insist on wasting CPU on the tail
> > > pages, and to trace a failure because of tail pages after it.
> > >
> > > I introduced a __page_count to increase readability. It's still
> > > hackish to work on subpages in vmscan.c, but at least I added a
> > > comment, and until we serialize destroy_compound_page vs
> > > compound_head, I guess there's no better way. I didn't attempt to add
> > > out-of-order serialization similar to what exists for split_huge_page
> > > vs compound_trans_head yet, as the page can be allocated or go away
> > > from under us. In split_huge_page vs compound_trans_head it's simpler
> > > because both callers are required to hold a pin on the page, so the
> > > page can't be reallocated and destroyed under it.
> >
> > Sent too fast... had to shuffle a few things around... trying again.
> >
> > ===
> > Subject: mm: no page_count without a page pin
> >
> > From: Andrea Arcangeli
> >
> > It's unsafe to run page_count during the physical pfn scan because
> > compound_head could trip on a dangling pointer when reading
> > page->first_page if the compound page is being freed by another CPU.
> > Also properly take into account if we isolated a compound page during
> > the scan, and break the loop if we've isolated enough. Introduce
> > __page_count to clean up some open-coded atomic_read of &page->_count
> > in common code.
> >
> > Signed-off-by: Andrea Arcangeli
>
> This patch is pulling in stuff from Minchan. Minimally his patch should
> be kept separate to preserve history or his Signed-off should be
> included on this patch.
>
> > ---
> >  arch/powerpc/mm/gup.c                        |    2 -
> >  arch/powerpc/platforms/512x/mpc512x_shared.c |    2 -
> >  arch/x86/mm/gup.c                            |    2 -
> >  fs/nilfs2/page.c                             |    2 -
> >  include/linux/mm.h                           |   13 ++++++----
> >  mm/huge_memory.c                             |    4 +--
> >  mm/internal.h                                |    2 -
> >  mm/page_alloc.c                              |    6 ++--
> >  mm/swap.c                                    |    4 +--
> >  mm/vmscan.c                                  |   35 ++++++++++++++++++++-------
> >  10 files changed, 47 insertions(+), 25 deletions(-)
> >
> > --- a/mm/vmscan.c
> > +++ b/mm/vmscan.c
> > @@ -1047,7 +1047,7 @@ static unsigned long isolate_lru_pages(u
> >  	for (scan = 0; scan < nr_to_scan && !list_empty(src); scan++) {
> >  		struct page *page;
> >  		unsigned long pfn;
> > -		unsigned long end_pfn;
> > +		unsigned long start_pfn, end_pfn;
> >  		unsigned long page_pfn;
> >  		int zone_id;
> >
> > @@ -1087,9 +1087,9 @@ static unsigned long isolate_lru_pages(u
> >  		 */
> >  		zone_id = page_zone_id(page);
> >  		page_pfn = page_to_pfn(page);
> > -		pfn = page_pfn & ~((1 << order) - 1);
> > -		end_pfn = pfn + (1 << order);
> > -		for (; pfn < end_pfn; pfn++) {
> > +		start_pfn = page_pfn & ~((1 << order) - 1);
> > +		end_pfn = start_pfn + (1 << order);
> > +		for (pfn = start_pfn; pfn < end_pfn; pfn++) {
> >  			struct page *cursor_page;
> >
> >  			/* The target page is in the block, ignore it. */
> >
> > @@ -1116,16 +1116,33 @@ static unsigned long isolate_lru_pages(u
> >  				break;
> >
> >  			if (__isolate_lru_page(cursor_page, mode, file) == 0) {
> > +				unsigned int isolated_pages;
> >  				list_move(&cursor_page->lru, dst);
> >  				mem_cgroup_del_lru(cursor_page);
> > -				nr_taken += hpage_nr_pages(page);
> > -				nr_lumpy_taken++;
> > +				isolated_pages = hpage_nr_pages(page);
> > +				nr_taken += isolated_pages;
> > +				nr_lumpy_taken += isolated_pages;
> >  				if (PageDirty(cursor_page))
> > -					nr_lumpy_dirty++;
> > +					nr_lumpy_dirty += isolated_pages;
> >  				scan++;
> > +				pfn += isolated_pages - 1;
>
> Ah, here is the isolated_pages - 1 which is necessary. Should have read
> the whole thread before responding to anything :).
>
> I still think this optimisation is rare and only applies if we are
> encountering huge pages during the linear scan. How often are we doing
> that really?
>
> > +				VM_BUG_ON(!isolated_pages);
>
> This BUG_ON is overkill. hpage_nr_pages would have to return 0.
>
> > +				VM_BUG_ON(isolated_pages > MAX_ORDER_NR_PAGES);
>
> This would require order > MAX_ORDER_NR_PAGES to be passed into
> isolate_lru_pages or for a huge page to be unaligned to a power of
> two. The former is very unlikely and the latter is not supported by
> any CPU.
>
> >  			} else {
> > -				/* the page is freed already. */
> > -				if (!page_count(cursor_page))
> > +				/*
> > +				 * Check if the page is freed already.
> > +				 *
> > +				 * We can't use page_count() as that
> > +				 * requires compound_head and we don't
> > +				 * have a pin on the page here. If a
> > +				 * page is tail, we may or may not
> > +				 * have isolated the head, so assume
> > +				 * it's not free; it'd be tricky to
> > +				 * track the head status without a
> > +				 * page pin.
> > +				 */
> > +				if (!PageTail(cursor_page) &&
> > +				    !__page_count(cursor_page))
> >  					continue;
> >  				break;
>
> Ack to this part.
>
> I'm not keen on __page_count() as __ normally means the "unlocked"
> version of a function although I realise that rule isn't universal

Yes, it's not universal. I have thought of it as just a private or
utility function (i.e., not exportable), so I don't mind the name.

> either. I can't think of a better name though.

Me neither.

-- 
Kind regards
Minchan Kim