2010-06-03 16:58:20

by Mike Snitzer

[permalink] [raw]
Subject: [PATCH v3] block: avoid unconditionally freeing previously allocated request_queue

On blk_init_allocated_queue_node failure, only free the request_queue if
it wasn't previously allocated outside the block layer
(e.g. when blk_init_queue_node was blk_init_allocated_queue_node's caller).

This addresses an interface bug introduced by the following commit:
01effb0 block: allow initialization of previously allocated request_queue

Otherwise the request_queue may be freed out from underneath a caller
that is managing the request_queue directly (e.g. a caller that uses
blk_alloc_queue + blk_init_allocated_queue_node).

Signed-off-by: Mike Snitzer <[email protected]>
---
block/blk-core.c | 14 ++++++++------
1 files changed, 8 insertions(+), 6 deletions(-)

v3: leverage fact that blk_cleanup_queue will properly free all memory
associated with a request_queue (e.g.: q->rq_pool and q->elevator)

diff --git a/block/blk-core.c b/block/blk-core.c
index 3bc5579..24683a4 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -570,9 +570,14 @@ EXPORT_SYMBOL(blk_init_queue);
struct request_queue *
blk_init_queue_node(request_fn_proc *rfn, spinlock_t *lock, int node_id)
{
- struct request_queue *q = blk_alloc_queue_node(GFP_KERNEL, node_id);
+ struct request_queue *uninit_q, *q;

- return blk_init_allocated_queue_node(q, rfn, lock, node_id);
+ uninit_q = blk_alloc_queue_node(GFP_KERNEL, node_id);
+ q = blk_init_allocated_queue_node(uninit_q, rfn, lock, node_id);
+ if (!q)
+ blk_cleanup_queue(uninit_q);
+
+ return q;
}
EXPORT_SYMBOL(blk_init_queue_node);

@@ -592,10 +597,8 @@ blk_init_allocated_queue_node(struct request_queue *q, request_fn_proc *rfn,
return NULL;

q->node = node_id;
- if (blk_init_free_list(q)) {
- kmem_cache_free(blk_requestq_cachep, q);
+ if (blk_init_free_list(q))
return NULL;
- }

q->request_fn = rfn;
q->prep_rq_fn = NULL;
@@ -618,7 +621,6 @@ blk_init_allocated_queue_node(struct request_queue *q, request_fn_proc *rfn,
return q;
}

- blk_put_queue(q);
return NULL;
}
EXPORT_SYMBOL(blk_init_allocated_queue_node);


2010-06-03 17:35:06

by Mike Snitzer

[permalink] [raw]
Subject: [PATCH v4] block: avoid unconditionally freeing previously allocated request_queue

block: avoid unconditionally freeing previously allocated request_queue

On blk_init_allocated_queue_node failure, only free the request_queue if
it wasn't previously allocated outside the block layer
(e.g. when blk_init_queue_node was blk_init_allocated_queue_node's caller).

This addresses an interface bug introduced by the following commit:
01effb0 block: allow initialization of previously allocated request_queue

Otherwise the request_queue may be freed out from underneath a caller
that is managing the request_queue directly (e.g. a caller that uses
blk_alloc_queue + blk_init_allocated_queue_node).

Signed-off-by: Mike Snitzer <[email protected]>
---
block/blk-core.c | 17 +++++++++++------
1 files changed, 11 insertions(+), 6 deletions(-)

v4: eliminate potential for NULL pointer in call to blk_cleanup_queue
v3: leverage fact that blk_cleanup_queue will properly free all memory
associated with a request_queue (e.g.: q->rq_pool and q->elevator)

diff --git a/block/blk-core.c b/block/blk-core.c
index 3bc5579..826d070 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -570,9 +570,17 @@ EXPORT_SYMBOL(blk_init_queue);
struct request_queue *
blk_init_queue_node(request_fn_proc *rfn, spinlock_t *lock, int node_id)
{
- struct request_queue *q = blk_alloc_queue_node(GFP_KERNEL, node_id);
+ struct request_queue *uninit_q, *q;

- return blk_init_allocated_queue_node(q, rfn, lock, node_id);
+ uninit_q = blk_alloc_queue_node(GFP_KERNEL, node_id);
+ if (!uninit_q)
+ return NULL;
+
+ q = blk_init_allocated_queue_node(uninit_q, rfn, lock, node_id);
+ if (!q)
+ blk_cleanup_queue(uninit_q);
+
+ return q;
}
EXPORT_SYMBOL(blk_init_queue_node);

@@ -592,10 +600,8 @@ blk_init_allocated_queue_node(struct request_queue *q, request_fn_proc *rfn,
return NULL;

q->node = node_id;
- if (blk_init_free_list(q)) {
- kmem_cache_free(blk_requestq_cachep, q);
+ if (blk_init_free_list(q))
return NULL;
- }

q->request_fn = rfn;
q->prep_rq_fn = NULL;
@@ -618,7 +624,6 @@ blk_init_allocated_queue_node(struct request_queue *q, request_fn_proc *rfn,
return q;
}

- blk_put_queue(q);
return NULL;
}
EXPORT_SYMBOL(blk_init_allocated_queue_node);

2010-06-04 11:45:19

by Jens Axboe

[permalink] [raw]
Subject: Re: [PATCH v4] block: avoid unconditionally freeing previously allocated request_queue

On 2010-06-03 19:34, Mike Snitzer wrote:
> block: avoid unconditionally freeing previously allocated request_queue
>
> On blk_init_allocated_queue_node failure, only free the request_queue if
> it wasn't previously allocated outside the block layer
> (e.g. when blk_init_queue_node was blk_init_allocated_queue_node's caller).
>
> This addresses an interface bug introduced by the following commit:
> 01effb0 block: allow initialization of previously allocated request_queue
>
> Otherwise the request_queue may be freed out from underneath a caller
> that is managing the request_queue directly (e.g. a caller that uses
> blk_alloc_queue + blk_init_allocated_queue_node).

Thanks Mike, this looks a lot better. I have applied this one and
2/2 of the original posting.

--
Jens Axboe