show_pools() walks the page_list of a pool without any protection against
concurrent list modifications in alloc/free. Take pool->lock to avoid
stomping into nirvana.
Signed-off-by: Thomas Gleixner <[email protected]>
---
diff --git a/mm/dmapool.c b/mm/dmapool.c
index b1f0885..3df0637 100644
--- a/mm/dmapool.c
+++ b/mm/dmapool.c
@@ -86,10 +86,12 @@ show_pools(struct device *dev, struct device_attribute *attr, char *buf)
 		unsigned pages = 0;
 		unsigned blocks = 0;
 
+		spin_lock_irq(&pool->lock);
 		list_for_each_entry(page, &pool->page_list, page_list) {
 			pages++;
 			blocks += page->in_use;
 		}
+		spin_unlock_irq(&pool->lock);
 
 		/* per-pool info, no real statistics yet */
 		temp = scnprintf(next, size, "%-16s %4u %4Zu %4Zu %2u\n",
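
For readers who don't have mm/dmapool.c in front of them, the pattern the
patch enforces looks roughly like the sketch below. The demo_* names are
invented for illustration and are not the dmapool structures; the point is
only that the alloc/free paths already modify page_list with pool->lock
held, so a reader walking the same list has to take that lock too.

/*
 * Illustrative sketch only (hypothetical demo_* types, not copied from
 * mm/dmapool.c): writers update the list under the pool lock, so the
 * reader must hold the same lock while it walks the list.
 */
#include <linux/list.h>
#include <linux/spinlock.h>

struct demo_page {
	struct list_head page_list;
	unsigned int in_use;
};

struct demo_pool {
	spinlock_t lock;
	struct list_head page_list;
};

/* Writer side, alloc-style: modify the list with the lock held. */
static void demo_add_page(struct demo_pool *pool, struct demo_page *page)
{
	unsigned long flags;

	spin_lock_irqsave(&pool->lock, flags);
	list_add(&page->page_list, &pool->page_list);
	spin_unlock_irqrestore(&pool->lock, flags);
}

/* Reader side, show_pools()-style: take the same lock around the walk. */
static unsigned int demo_count_blocks(struct demo_pool *pool)
{
	struct demo_page *page;
	unsigned int blocks = 0;

	spin_lock_irq(&pool->lock);
	list_for_each_entry(page, &pool->page_list, page_list)
		blocks += page->in_use;
	spin_unlock_irq(&pool->lock);

	return blocks;
}

The plain spin_lock_irq() on the reader side matches the patch above, since
a sysfs show runs in process context; the irqsave variant in the writer
sketch stands in for alloc/free, which may be called with interrupts
already disabled.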
On Tue, Jun 23, 2009 at 04:41:14PM +0200, Thomas Gleixner wrote:
> show_pools() walks the page_list of a pool without any protection against
> concurrent list modifications in alloc/free. Take pool->lock to avoid
> stomping into nirvana.
>
> Signed-off-by: Thomas Gleixner <[email protected]>
Looks right to me. We're already holding pools_lock here, but pools_lock
doesn't protect the page_list against concurrent modification.
Signed-off-by: Matthew Wilcox <[email protected]>
I don't have a tree for dmapool work ... might as well go in through
Andrew, I suppose?
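
To spell out the two-lock split: pools_lock only serializes which pools
hang off the device, while each pool's page_list is guarded by that pool's
own spinlock. A simplified sketch of the assumed layout (field comments are
mine, not the actual mm/dmapool.c declarations):

/*
 * Simplified sketch of the locking split, assuming pools_lock is the
 * global lock guarding the per-device pool list.
 */
#include <linux/list.h>
#include <linux/mutex.h>
#include <linux/spinlock.h>

static DEFINE_MUTEX(pools_lock);	/* guards membership of dev->dma_pools */

struct pool_sketch {
	struct list_head pools;		/* on dev->dma_pools, under pools_lock */
	spinlock_t lock;		/* guards page_list below */
	struct list_head page_list;	/* walked by show_pools(), modified by alloc/free */
};

So holding pools_lock keeps the pool itself from vanishing out from under
the walk in show_pools(), but says nothing about concurrent list_add() or
list_del() on page_list, which is why the per-pool lock is needed as well.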
> ---
> diff --git a/mm/dmapool.c b/mm/dmapool.c
> index b1f0885..3df0637 100644
> --- a/mm/dmapool.c
> +++ b/mm/dmapool.c
> @@ -86,10 +86,12 @@ show_pools(struct device *dev, struct device_attribute *attr, char *buf)
>  		unsigned pages = 0;
>  		unsigned blocks = 0;
> 
> +		spin_lock_irq(&pool->lock);
>  		list_for_each_entry(page, &pool->page_list, page_list) {
>  			pages++;
>  			blocks += page->in_use;
>  		}
> +		spin_unlock_irq(&pool->lock);
> 
>  		/* per-pool info, no real statistics yet */
>  		temp = scnprintf(next, size, "%-16s %4u %4Zu %4Zu %2u\n",
>
>
--
Matthew Wilcox Intel Open Source Technology Centre
"Bill, look, we understand that you're interested in selling us this
operating system, but compare it to ours. We can't possibly take such
a retrograde step."