Date: Tue, 17 Jan 2006 12:15:29 -0800 (PST)
From: Christoph Lameter
To: Hugh Dickins
cc: Nick Piggin, Andrew Morton, Linux Kernel Mailing List, Linux Memory Management List
Subject: Re: Race in new page migration code?
References: <20060114155517.GA30543@wotan.suse.de> <20060114181949.GA27382@wotan.suse.de>

On Tue, 17 Jan 2006, Hugh Dickins wrote:

> Of course, page_mapcount may go up an instant later; but equally,
> another vma may get added to the prio_tree an instant later.

Right. The check does not have to be race-free.

> It may be that, after much argument to and fro, the whole function will
> just reduce to checking "page_mapcount(page) == 1": if so, then I think
> you can go back to inlining it literally.

Oh, I love simplicity... Sadly, one sometimes has these complicated
notions in one's brain. Thanks.

Simplify migrate_page_add after feedback from Hugh.

Signed-off-by: Christoph Lameter

Index: linux-2.6.15/mm/mempolicy.c
===================================================================
--- linux-2.6.15.orig/mm/mempolicy.c	2006-01-14 10:56:28.000000000 -0800
+++ linux-2.6.15/mm/mempolicy.c	2006-01-17 12:12:42.000000000 -0800
@@ -527,42 +527,13 @@ long do_get_mempolicy(int *policy, nodem
  * page migration
  */
 
-/* Check if we are the only process mapping the page in question */
-static inline int single_mm_mapping(struct mm_struct *mm,
-	struct address_space *mapping)
-{
-	struct vm_area_struct *vma;
-	struct prio_tree_iter iter;
-	int rc = 1;
-
-	spin_lock(&mapping->i_mmap_lock);
-	vma_prio_tree_foreach(vma, &iter, &mapping->i_mmap, 0, ULONG_MAX)
-		if (mm != vma->vm_mm) {
-			rc = 0;
-			goto out;
-		}
-	list_for_each_entry(vma, &mapping->i_mmap_nonlinear, shared.vm_set.list)
-		if (mm != vma->vm_mm) {
-			rc = 0;
-			goto out;
-		}
-out:
-	spin_unlock(&mapping->i_mmap_lock);
-	return rc;
-}
-
-/*
- * Add a page to be migrated to the pagelist
- */
 static void migrate_page_add(struct vm_area_struct *vma,
 	struct page *page, struct list_head *pagelist, unsigned long flags)
 {
 	/*
-	 * Avoid migrating a page that is shared by others and not writable.
+	 * Avoid migrating a page that is shared with others.
 	 */
-	if ((flags & MPOL_MF_MOVE_ALL) || !page->mapping || PageAnon(page) ||
-		mapping_writably_mapped(page->mapping) ||
-		single_mm_mapping(vma->vm_mm, page->mapping))
+	if ((flags & MPOL_MF_MOVE_ALL) || page_mapcount(page) == 1)
 		if (isolate_lru_page(page) == 1)
 			list_add(&page->lru, pagelist);
 }
-
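
For reference, migrate_page_add as it would read with the hunk above applied,
a sketch reconstructed from the diff and assuming the surrounding mempolicy.c
context is otherwise unchanged:

static void migrate_page_add(struct vm_area_struct *vma,
	struct page *page, struct list_head *pagelist, unsigned long flags)
{
	/*
	 * Avoid migrating a page that is shared with others: unless
	 * MPOL_MF_MOVE_ALL is set, only isolate pages mapped exactly once.
	 */
	if ((flags & MPOL_MF_MOVE_ALL) || page_mapcount(page) == 1)
		if (isolate_lru_page(page) == 1)
			list_add(&page->lru, pagelist);
}

With MPOL_MF_MOVE_ALL the mapcount test is skipped; otherwise only pages with
a single mapping are isolated from the LRU and added to the pagelist. The
check is advisory, as discussed above: page_mapcount may change an instant
later, so it does not need to be race-free.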