Date: Tue, 9 Jun 2009 17:05:53 +0100 (BST)
From: Hugh Dickins <hugh.dickins@tiscali.co.uk>
To: Nick Piggin
Cc: Andi Kleen, riel@redhat.com, chris.mason@oracle.com,
    akpm@linux-foundation.org, linux-kernel@vger.kernel.org,
    linux-mm@kvack.org, fengguang.wu@intel.com
Subject: Re: [PATCH] [13/16] HWPOISON: The high level memory error handler in the VM v5
In-Reply-To: <20090609100922.GF14820@wotan.suse.de>
Message-ID: <Pine reference elided>
References: <20090603846.816684333@firstfloor.org> <20090603184648.2E2131D028F@basil.firstfloor.org> <20090609100922.GF14820@wotan.suse.de>
List-ID: X-Mailing-List: linux-kernel@vger.kernel.org

On Tue, 9 Jun 2009, Nick Piggin wrote:
> On Wed, Jun 03, 2009 at 08:46:47PM +0200, Andi Kleen wrote:
> > Why not have this in rmap.c and not export the locking?
>
> I don't know.. does Hugh care?

Thanks for catching my eye with this, Nick.  As I've said to Andi,
I don't actually have time to study all this.
To me, it's just another layer of complexity and maintenance burden
that one special-interest group is imposing upon mm, and I can't keep
up with it myself.

But looking at this one set of extracts: you're right that I'm much
happier when page_lock_anon_vma() isn't leaked outside of mm/rmap.c,
though I'd probably hate this whichever way round it was; and I share
your lock ordering concern.

However, if I'm interpreting these extracts correctly, the whole thing
looks very misguided to me.  Are we really going to kill any process
that has a cousin who might once have mapped the page that's been
found hwpoisonous?  The hwpoison secret police are dangerously out of
control, I'd say.

The usual use of rmap lookup loops is to go on to look into the page
table to see whether the page is actually mapped: I see no attempt at
that here, just an assumption that anyone on the list is guilty of
mapping the page and must be killed.  And even if it did go on to
check if the page is there, maybe the process lost interest in that
page several weeks ago: why kill it?

At least in the file's prio_tree case, we'll only be killing those who
mmapped the range which happens to include the page.  But in the anon
case, remember the anon_vma is just a bundle of "related" vmas outside
of which the page will not be found; so if one process got a poisonous
page through COW, all the other processes which happen to be sharing
that anon_vma through fork or through adjacent merging are going to
get killed too.  Guilty by association.

I think a much more sensible approach would be to follow the page
migration technique of replacing the page's ptes by a special
swap-like entry, then do the killing from do_swap_page() if a process
actually tries to access the page.  But perhaps that has already been
discussed and found impossible, or I'm taking "kill" too seriously and
other checks are done elsewhere, or...

Hugh

> > +/*
> > + * Collect processes when the error hit an anonymous page.
> > + */
> > +static void collect_procs_anon(struct page *page, struct list_head *to_kill,
> > +			       struct to_kill **tkc)
> > +{
> > +	struct vm_area_struct *vma;
> > +	struct task_struct *tsk;
> > +	struct anon_vma *av = page_lock_anon_vma(page);
> > +
> > +	if (av == NULL)	/* Not actually mapped anymore */
> > +		return;
> > +
> > +	read_lock(&tasklist_lock);
> > +	for_each_process (tsk) {
> > +		if (!tsk->mm)
> > +			continue;
> > +		list_for_each_entry (vma, &av->head, anon_vma_node) {
> > +			if (vma->vm_mm == tsk->mm)
> > +				add_to_kill(tsk, page, vma, to_kill, tkc);
> > +		}
> > +	}
> > +	page_unlock_anon_vma(av);
> > +	read_unlock(&tasklist_lock);
> > +}
> > +
> > +/*
> > + * Collect processes when the error hit a file mapped page.
> > + */
> > +static void collect_procs_file(struct page *page, struct list_head *to_kill,
> > +			       struct to_kill **tkc)
> > +{
> > +	struct vm_area_struct *vma;
> > +	struct task_struct *tsk;
> > +	struct prio_tree_iter iter;
> > +	struct address_space *mapping = page_mapping(page);
> > +
> > +	/*
> > +	 * A note on the locking order between the two locks.
> > +	 * We don't rely on this particular order.
> > +	 * If you have some other code that needs a different order
> > +	 * feel free to switch them around. Or add a reverse link
> > +	 * from mm_struct to task_struct, then this could be all
> > +	 * done without taking tasklist_lock and looping over all tasks.
> > +	 */
> > +
> > +	read_lock(&tasklist_lock);
> > +	spin_lock(&mapping->i_mmap_lock);

This still has my original complaint that it nests tasklist lock
inside anon vma lock and outside inode mmap lock (and anon_vma nests
inside i_mmap).  I guess the property of our current rw locks means
that does not matter, but it could if we had "fair" rw locks, or some
tree (-rt tree maybe) changed rw lock to a plain exclusive lock.
> >
> > +	for_each_process(tsk) {
> > +		pgoff_t pgoff = page->index << (PAGE_CACHE_SHIFT - PAGE_SHIFT);
> > +
> > +		if (!tsk->mm)
> > +			continue;
> > +
> > +		vma_prio_tree_foreach(vma, &iter, &mapping->i_mmap, pgoff,
> > +				      pgoff)
> > +			if (vma->vm_mm == tsk->mm)
> > +				add_to_kill(tsk, page, vma, to_kill, tkc);
> > +	}
> > +	spin_unlock(&mapping->i_mmap_lock);
> > +	read_unlock(&tasklist_lock);
> > +}
> > +
> > +/*
> > + * Collect the processes who have the corrupted page mapped to kill.
> > + * This is done in two steps for locking reasons.
> > + * First preallocate one tokill structure outside the spin locks,
> > + * so that we can kill at least one process reasonably reliable.
> > + */
> > +static void collect_procs(struct page *page, struct list_head *tokill)
> > +{
> > +	struct to_kill *tk;
> > +
> > +	tk = kmalloc(sizeof(struct to_kill), GFP_KERNEL);
> > +	/* memory allocation failure is implicitly handled */
> > +	if (PageAnon(page))
> > +		collect_procs_anon(page, tokill, &tk);
> > +	else
> > +		collect_procs_file(page, tokill, &tk);
> > +	kfree(tk);
> > +}
> >
> > Index: linux/mm/filemap.c
> > ===================================================================
> > --- linux.orig/mm/filemap.c	2009-06-03 19:37:38.000000000 +0200
> > +++ linux/mm/filemap.c	2009-06-03 20:13:43.000000000 +0200
> > @@ -105,6 +105,10 @@
> >   *
> >   *  ->task->proc_lock
> >   *    ->dcache_lock		(proc_pid_lookup)
> > + *
> > + * (code doesn't rely on that order, so you could switch it around)
> > + * ->tasklist_lock	(memory_failure, collect_procs_ao)
> > + *   ->i_mmap_lock
> >   */
> >
> >  /*
> > Index: linux/mm/rmap.c
> > ===================================================================
> > --- linux.orig/mm/rmap.c	2009-06-03 19:37:38.000000000 +0200
> > +++ linux/mm/rmap.c	2009-06-03 20:13:43.000000000 +0200
> > @@ -36,6 +36,10 @@
> >   *    mapping->tree_lock	(widely used, in set_page_dirty,
> >   *               in arch-dependent flush_dcache_mmap_lock,
> >   *               within inode_lock in __sync_single_inode)
> > + *
> > + * (code doesn't rely on that order so it could be switched around)
> > + * ->tasklist_lock
> > + *   anon_vma->lock	(memory_failure, collect_procs_anon)
> >   */
> >
> >  #include