Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S933545Ab0BDR20 (ORCPT );
	Thu, 4 Feb 2010 12:28:26 -0500
Received: from kroah.org ([198.145.64.141]:34869 "EHLO coco.kroah.org"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S933324Ab0BDRWQ (ORCPT );
	Thu, 4 Feb 2010 12:22:16 -0500
X-Mailbox-Line: From linux@linux.site Thu Feb 4 09:15:18 2010
Message-Id: <20100204171518.111950133@linux.site>
User-Agent: quilt/0.47-14.9
Date: Thu, 04 Feb 2010 09:12:24 -0800
From: Greg KH
To: linux-kernel@vger.kernel.org, stable@kernel.org
Cc: stable-review@kernel.org, torvalds@linux-foundation.org,
	akpm@linux-foundation.org, alan@lxorguk.ukuu.org.uk,
	linux-mm@kvack.org, Nick Piggin, Greg Kroah-Hartman
Subject: [53/74] mm: percpu-vmap fix RCU list walking
In-Reply-To: <20100204171850.GA16539@kroah.com>
Sender: linux-kernel-owner@vger.kernel.org
List-ID:
X-Mailing-List: linux-kernel@vger.kernel.org
Content-Length: 3535
Lines: 113

2.6.32-stable review patch.  If anyone has any objections, please let us know.

------------------
From: Nick Piggin

commit de5604231ce4bc8db1bc1dcd27d8540cbedf1518 upstream.

RCU list walking of the per-cpu vmap cache was broken.  It did not use
RCU primitives, and also the union of free_list and rcu_head is
obviously wrong (because free_list is indeed the list we are RCU
walking).

While we are there, remove a couple of unused fields from an earlier
iteration.

These APIs aren't actually used anywhere, because of problems with the
XFS conversion.  Christoph has now verified that the problems are solved
with these patches.  Also it is an exported interface, so I think it
will be good to be merged now (and Christoph wants to get the XFS
changes into their local tree).

Cc: linux-mm@kvack.org
Tested-by: Christoph Hellwig
Signed-off-by: Nick Piggin
Signed-off-by: Linus Torvalds
Signed-off-by: Greg Kroah-Hartman

---
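For reference, the list/RCU pattern this change restores looks roughly like
the sketch below.  It is illustrative only; the example_* names are made up
and this is not the mm/vmalloc.c code.  Writers publish and retract entries
with the _rcu list helpers while holding the queue lock, readers walk the
list under rcu_read_lock() with list_for_each_entry_rcu(), and rcu_head is
a separate field next to free_list: a union would let call_rcu() overwrite
the very links a concurrent reader may still be following.

#include <linux/kernel.h>
#include <linux/list.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

struct example_block {			/* hypothetical stand-in for vmap_block */
	unsigned long free;		/* payload, just for illustration */
	struct list_head free_list;	/* linkage readers traverse under RCU */
	struct rcu_head rcu_head;	/* kept separate from free_list so that
					 * call_rcu() cannot clobber the list
					 * links while readers still walk them */
};

static LIST_HEAD(example_free_head);	/* hypothetical stand-in for vbq->free */
static DEFINE_SPINLOCK(example_lock);	/* hypothetical stand-in for vbq->lock */

/* Writer side: publish with the _rcu helper under the lock. */
static void example_add(struct example_block *b)
{
	spin_lock(&example_lock);
	list_add_rcu(&b->free_list, &example_free_head);
	spin_unlock(&example_lock);
}

static void example_free_rcu(struct rcu_head *rcu)
{
	kfree(container_of(rcu, struct example_block, rcu_head));
}

/* Writer side: unlink under the lock, then defer the free past a grace period. */
static void example_del(struct example_block *b)
{
	spin_lock(&example_lock);
	list_del_rcu(&b->free_list);
	spin_unlock(&example_lock);
	call_rcu(&b->rcu_head, example_free_rcu);
}

/* Reader side: lockless walk, must use the _rcu iterator inside rcu_read_lock(). */
static struct example_block *example_find_nonempty(void)
{
	struct example_block *b, *found = NULL;

	rcu_read_lock();
	list_for_each_entry_rcu(b, &example_free_head, free_list) {
		if (b->free) {
			found = b;
			break;
		}
	}
	rcu_read_unlock();
	return found;
}

Deferring the kfree() through call_rcu() is what makes the lockless walk
safe; that only works when rcu_head does not share storage with free_list.
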
 mm/vmalloc.c |   20 ++++++--------------
 1 file changed, 6 insertions(+), 14 deletions(-)

--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -667,8 +667,6 @@ static bool vmap_initialized __read_most
 struct vmap_block_queue {
 	spinlock_t lock;
 	struct list_head free;
-	struct list_head dirty;
-	unsigned int nr_dirty;
 };
 
 struct vmap_block {
@@ -678,10 +676,8 @@ struct vmap_block {
 	unsigned long free, dirty;
 	DECLARE_BITMAP(alloc_map, VMAP_BBMAP_BITS);
 	DECLARE_BITMAP(dirty_map, VMAP_BBMAP_BITS);
-	union {
-		struct list_head free_list;
-		struct rcu_head rcu_head;
-	};
+	struct list_head free_list;
+	struct rcu_head rcu_head;
 };
 
 /* Queue of free and dirty vmap blocks, for allocation and flushing purposes */
@@ -757,7 +753,7 @@ static struct vmap_block *new_vmap_block
 	vbq = &get_cpu_var(vmap_block_queue);
 	vb->vbq = vbq;
 	spin_lock(&vbq->lock);
-	list_add(&vb->free_list, &vbq->free);
+	list_add_rcu(&vb->free_list, &vbq->free);
 	spin_unlock(&vbq->lock);
 	put_cpu_var(vmap_cpu_blocks);
 
@@ -776,8 +772,6 @@ static void free_vmap_block(struct vmap_
 	struct vmap_block *tmp;
 	unsigned long vb_idx;
 
-	BUG_ON(!list_empty(&vb->free_list));
-
 	vb_idx = addr_to_vb_idx(vb->va->va_start);
 	spin_lock(&vmap_block_tree_lock);
 	tmp = radix_tree_delete(&vmap_block_tree, vb_idx);
@@ -816,7 +810,7 @@ again:
 		vb->free -= 1UL << order;
 		if (vb->free == 0) {
 			spin_lock(&vbq->lock);
-			list_del_init(&vb->free_list);
+			list_del_rcu(&vb->free_list);
 			spin_unlock(&vbq->lock);
 		}
 		spin_unlock(&vb->lock);
@@ -860,11 +854,11 @@ static void vb_free(const void *addr, un
 	BUG_ON(!vb);
 
 	spin_lock(&vb->lock);
-	bitmap_allocate_region(vb->dirty_map, offset >> PAGE_SHIFT, order);
+	BUG_ON(bitmap_allocate_region(vb->dirty_map, offset >> PAGE_SHIFT, order));
 
 	vb->dirty += 1UL << order;
 	if (vb->dirty == VMAP_BBMAP_BITS) {
-		BUG_ON(vb->free || !list_empty(&vb->free_list));
+		BUG_ON(vb->free);
 		spin_unlock(&vb->lock);
 		free_vmap_block(vb);
 	} else
@@ -1033,8 +1027,6 @@ void __init vmalloc_init(void)
 		vbq = &per_cpu(vmap_block_queue, i);
 		spin_lock_init(&vbq->lock);
 		INIT_LIST_HEAD(&vbq->free);
-		INIT_LIST_HEAD(&vbq->dirty);
-		vbq->nr_dirty = 0;
 	}
 
 	/* Import existing vmlist entries. */

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/