When alloc fails, free_table is being called. Depending on the number of
bytes requested, we determine if we are going to call _get_free_page()
or kmalloc(). When alloc fails, our math is wrong (due to sg_size - 1),
and the last buffer is wrongfully assumed to have been allocated by
kmalloc. Hence, kfree gets called and a panic occurs.
This fix sets the end marker and allows the proper freeing of the
buffers.
Signed-off-by: Olusanya Soyannwo <[email protected]>
Cc: Tejun Heo <[email protected]>
Cc: Jens Axboe <[email protected]>
Signed-off-by: Jeffrey Carlyle <[email protected]>
---
lib/scatterlist.c | 11 +++++++++--
1 files changed, 9 insertions(+), 2 deletions(-)
diff --git a/lib/scatterlist.c b/lib/scatterlist.c
index a5ec428..acf2c6e 100644
--- a/lib/scatterlist.c
+++ b/lib/scatterlist.c
@@ -163,7 +163,7 @@ void __sg_free_table(struct sg_table *table,
unsigned int max_ents,
return;
sgl = table->sgl;
- while (table->orig_nents) {
+ while (table->orig_nents && sgl) {
unsigned int alloc_size = table->orig_nents;
unsigned int sg_size;
@@ -227,6 +227,7 @@ int __sg_alloc_table(struct sg_table *table,
unsigned int nents,
{
struct scatterlist *sg, *prv;
unsigned int left;
+ unsigned int total_alloc = 0;
#ifndef ARCH_HAS_SG_CHAIN
BUG_ON(nents > max_ents);
@@ -248,8 +249,14 @@ int __sg_alloc_table(struct sg_table *table,
unsigned int nents,
left -= sg_size;
sg = alloc_fn(alloc_size, gfp_mask);
- if (unlikely(!sg))
+ if (unlikely(!sg)) {
+ table->orig_nents = total_alloc;
+ /* mark the end of previous entry */
+ sg_mark_end(&prv[alloc_size - 1]);
return -ENOMEM;
+ }
+
+ total_alloc += alloc_size;
sg_init_table(sg, alloc_size);
table->nents = table->orig_nents += sg_size;
--
1.6.3.3
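For reference, the kmalloc-versus-page decision the changelog describes is made purely on the entry count handed to the allocation/free hooks; sg_alloc_table() wires in roughly the following pair (paraphrased from lib/scatterlist.c of that era, kmemleak annotations omitted):

static struct scatterlist *sg_kmalloc(unsigned int nents, gfp_t gfp_mask)
{
        /* A full chunk is one page worth of entries:
         * SG_MAX_SINGLE_ALLOC == PAGE_SIZE / sizeof(struct scatterlist). */
        if (nents == SG_MAX_SINGLE_ALLOC)
                return (struct scatterlist *) __get_free_page(gfp_mask);
        else
                return kmalloc(nents * sizeof(struct scatterlist), gfp_mask);
}

static void sg_kfree(struct scatterlist *sg, unsigned int nents)
{
        if (nents == SG_MAX_SINGLE_ALLOC)
                free_page((unsigned long) sg);
        else
                kfree(sg);
}

So if the entry accounting understates what a chunk was really allocated with, the free side is handed a count below SG_MAX_SINGLE_ALLOC and calls kfree() on memory that came from __get_free_page(), which is the panic described above.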
Hello,
First of all, the patch is line-wrapped. Please check your email
settings.
On 08/26/2010 06:04 PM, Jeffrey Carlyle wrote:
> When alloc fails, free_table is being called. Depending on the number of
> bytes requested, we determine if we are going to call _get_free_page()
> or kmalloc(). When alloc fails, our math is wrong (due to sg_size - 1),
> and the last buffer is wrongfully assumed to have been allocated by
> kmalloc. Hence, kfree gets called and a panic occurs.
>
> This fix sets the end marker and allows the proper freeing of the
> buffers.
>
> Signed-off-by: Olusanya Soyannwo <[email protected]>
> Cc: Tejun Heo <[email protected]>
> Cc: Jens Axboe <[email protected]>
> Signed-off-by: Jeffrey Carlyle <[email protected]>
> ---
> lib/scatterlist.c | 11 +++++++++--
> 1 files changed, 9 insertions(+), 2 deletions(-)
>
> diff --git a/lib/scatterlist.c b/lib/scatterlist.c
> index a5ec428..acf2c6e 100644
> --- a/lib/scatterlist.c
> +++ b/lib/scatterlist.c
> @@ -163,7 +163,7 @@ void __sg_free_table(struct sg_table *table,
> unsigned int max_ents,
> return;
>
> sgl = table->sgl;
> - while (table->orig_nents) {
> + while (table->orig_nents && sgl) {
> unsigned int alloc_size = table->orig_nents;
> unsigned int sg_size;
Why is this change necessary?
> @@ -227,6 +227,7 @@ int __sg_alloc_table(struct sg_table *table,
> unsigned int nents,
> {
> struct scatterlist *sg, *prv;
> unsigned int left;
> + unsigned int total_alloc = 0;
>
> #ifndef ARCH_HAS_SG_CHAIN
> BUG_ON(nents > max_ents);
> @@ -248,8 +249,14 @@ int __sg_alloc_table(struct sg_table *table,
> unsigned int nents,
> left -= sg_size;
>
> sg = alloc_fn(alloc_size, gfp_mask);
> - if (unlikely(!sg))
> + if (unlikely(!sg)) {
> + table->orig_nents = total_alloc;
> + /* mark the end of previous entry */
> + sg_mark_end(&prv[alloc_size - 1]);
prv[alloc_size - 1] is already marked as end by sg_init_table() during
the previous iteration. Also, prv can be NULL at this point. AFAICS,
the only thing necessary would be "if (prv) table->nents++", no?
Thanks.
--
tejun
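For context, the chaining and end-marking under discussion sit at the bottom of __sg_alloc_table()'s allocation loop; roughly, from lib/scatterlist.c of that era:

                sg_init_table(sg, alloc_size);
                table->nents = table->orig_nents += sg_size;

                /*
                 * First chunk becomes the table head; later chunks are
                 * chained onto the previous one.
                 */
                if (prv)
                        sg_chain(prv, max_ents, sg);
                else
                        table->sgl = sg;

                /* If no more entries after this one, mark the end. */
                if (!left)
                        sg_mark_end(&sg[sg_size - 1]);

                prv = sg;
        } while (left);

sg_init_table() terminates each freshly allocated chunk, and that terminator is only replaced by a chain pointer once the next chunk has actually been allocated.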
On Fri, Aug 27, 2010 at 5:18 AM, Tejun Heo <[email protected]> wrote:
> First of all, the patch is line-wrapped.  Please check your email
> settings.
Sorry about that.
> On 08/26/2010 06:04 PM, Jeffrey Carlyle wrote:
>> diff --git a/lib/scatterlist.c b/lib/scatterlist.c
>> index a5ec428..acf2c6e 100644
>> --- a/lib/scatterlist.c
>> +++ b/lib/scatterlist.c
>> @@ -163,7 +163,7 @@ void __sg_free_table(struct sg_table *table,
>> unsigned int max_ents,
>>               return;
>>
>>       sgl = table->sgl;
>> -     while (table->orig_nents) {
>> +     while (table->orig_nents && sgl) {
>>               unsigned int alloc_size = table->orig_nents;
>>               unsigned int sg_size;
>
> Why is this change necessary?
Well the problem we were seeing manifested itself when we called
free_fn on a NULL value. This was a naive attempt at avoiding that. If
the logic in __sg_alloc_table is corrected, I agree that we shouldn't
need this.
>> @@ -227,6 +227,7 @@ int __sg_alloc_table(struct sg_table *table,
>> unsigned int nents,
>>  {
>>       struct scatterlist *sg, *prv;
>>       unsigned int left;
>> +     unsigned int total_alloc = 0;
>>
>>  #ifndef ARCH_HAS_SG_CHAIN
>>       BUG_ON(nents > max_ents);
>> @@ -248,8 +249,14 @@ int __sg_alloc_table(struct sg_table *table,
>> unsigned int nents,
>>               left -= sg_size;
>>
>>               sg = alloc_fn(alloc_size, gfp_mask);
>> -             if (unlikely(!sg))
>> +             if (unlikely(!sg)) {
>> +                     table->orig_nents = total_alloc;
>> +                     /* mark the end of previous entry */
>> +                     sg_mark_end(&prv[alloc_size - 1]);
>
> prv[alloc_size - 1] is already marked as end by sg_init_table() during
> the previous iteration.  Also, prv can be NULL at this point.  AFAICS,
> the only thing necessary would be "if (prv) table->nents++", no?
You are right about prv possibly being NULL here. Sorry for not
catching that earlier; however, I don't think prv will be marked as an
end in the previous iteration. According to my read it will only get
marked if left is equal to 0, in which case the while loop exits.
Perhaps something like this would be more appropriate:
if(prv) {
        table->orig_nents = ++table->nents;
        sg_mark_end(&prv[alloc_size - 1]);
}
Thank you for taking the time to review this.
On Fri, Aug 27, 2010 at 2:45 PM, Jeffrey Carlyle
<[email protected]> wrote:
> On Fri, Aug 27, 2010 at 5:18 AM, Tejun Heo <[email protected]> wrote:
>> On 08/26/2010 06:04 PM, Jeffrey Carlyle wrote:
>>> diff --git a/lib/scatterlist.c b/lib/scatterlist.c
>>> index a5ec428..acf2c6e 100644
>>> --- a/lib/scatterlist.c
>>> +++ b/lib/scatterlist.c
>>> @@ -163,7 +163,7 @@ void __sg_free_table(struct sg_table *table,
>>> unsigned int max_ents,
>>>               return;
>>>
>>>       sgl = table->sgl;
>>> -     while (table->orig_nents) {
>>> +     while (table->orig_nents && sgl) {
>>>               unsigned int alloc_size = table->orig_nents;
>>>               unsigned int sg_size;
>>
>> Why is this change necessary?
>
> Well the problem we were seeing manifested itself when we called
> free_fn on a NULL value. This was a naive attempt at avoiding that. If
> the logic in __sg_alloc_table is corrected, I agree that we shouldn't
> need this.
Actually, please disregard the comment about trying to free NULL. I
don't think we need to add the "&& sgl" under any circumstances.
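As background, __sg_free_table() already bails out when table->sgl is NULL; the walk where the sg_size - 1 accounting from the changelog goes wrong looks roughly like this in lib/scatterlist.c of that era:

void __sg_free_table(struct sg_table *table, unsigned int max_ents,
                     sg_free_fn *free_fn)
{
        struct scatterlist *sgl, *next;

        if (unlikely(!table->sgl))
                return;

        sgl = table->sgl;
        while (table->orig_nents) {
                unsigned int alloc_size = table->orig_nents;
                unsigned int sg_size;

                /*
                 * A chunk with more than max_ents entries left was a full
                 * allocation whose last slot is the chain pointer, so only
                 * alloc_size - 1 of its slots count toward orig_nents.
                 */
                if (alloc_size > max_ents) {
                        next = sg_chain_ptr(&sgl[max_ents - 1]);
                        alloc_size = max_ents;
                        sg_size = alloc_size - 1;
                } else {
                        sg_size = alloc_size;
                        next = NULL;
                }

                table->orig_nents -= sg_size;
                free_fn(sgl, alloc_size);
                sgl = next;
        }

        table->sgl = NULL;
}

When the last chunk was really a full max_ents allocation but orig_nents only credits max_ents - 1 of it, this walk takes the else branch for that chunk and free_fn() is called with the smaller count, which is the mis-selected kfree() from the changelog.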
Hello,
On 08/27/2010 09:45 PM, Jeffrey Carlyle wrote:
>>> sg = alloc_fn(alloc_size, gfp_mask);
>>> - if (unlikely(!sg))
>>> + if (unlikely(!sg)) {
>>> + table->orig_nents = total_alloc;
>>> + /* mark the end of previous entry */
>>> + sg_mark_end(&prv[alloc_size - 1]);
>>
>> prv[alloc_size - 1] is already marked as end by sg_init_table() during
>> the previous iteration. Also, prv can be NULL at this point. AFAICS,
>> the only thing necessary would be "if (prv) table->nents++", no?
>
> You are right about prv possibly being NULL here. Sorry for not
> catching that earlier; however, I don't think prv will be marked as an
> end in the previous iteration.
But we have sg_mark_end(&sgl[nents-1]) in sg_init_table(). I think
explicit end marking in __sg_alloc_table() is redundant.
Thanks.
--
tejun
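For reference, sg_init_table() is short enough to quote; roughly (DEBUG_SG magic initialisation omitted):

void sg_init_table(struct scatterlist *sgl, unsigned int nents)
{
        memset(sgl, 0, sizeof(*sgl) * nents);
        sg_mark_end(&sgl[nents - 1]);
}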
When alloc fails, free_table is being called. Depending on the number of
bytes requested, we determine if we are going to call _get_free_page()
or kmalloc(). When alloc fails, our math is wrong (due to sg_size - 1),
and the last buffer is wrongfully assumed to have been allocated by
kmalloc. Hence, kfree gets called and a panic occurs.
Signed-off-by: Jeffrey Carlyle <[email protected]>
Signed-off-by: Olusanya Soyannwo <[email protected]>
Cc: Tejun Heo <[email protected]>
Cc: Jens Axboe <[email protected]>
---
lib/scatterlist.c | 8 +++++++-
1 files changed, 7 insertions(+), 1 deletions(-)
diff --git a/lib/scatterlist.c b/lib/scatterlist.c
index a5ec428..9bc637f 100644
--- a/lib/scatterlist.c
+++ b/lib/scatterlist.c
@@ -248,8 +248,14 @@ int __sg_alloc_table(struct sg_table *table, unsigned int nents,
 		left -= sg_size;
 
 		sg = alloc_fn(alloc_size, gfp_mask);
-		if (unlikely(!sg))
+		if (unlikely(!sg)) {
+			/*
+			 * Adjust entry count so that proper free function is
+			 * used in sg_kfree.
+			 */
+			table->nents = ++table->orig_nents;
 			return -ENOMEM;
+		}
 
 		sg_init_table(sg, alloc_size);
 		table->nents = table->orig_nents += sg_size;
--
1.6.3.3
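For reference, the cleanup path that ends up in sg_kfree() is the error branch of sg_alloc_table(), roughly (from lib/scatterlist.c of that era):

int sg_alloc_table(struct sg_table *table, unsigned int nents, gfp_t gfp_mask)
{
        int ret;

        ret = __sg_alloc_table(table, nents, SG_MAX_SINGLE_ALLOC,
                               gfp_mask, sg_kmalloc);
        if (unlikely(ret))
                __sg_free_table(table, SG_MAX_SINGLE_ALLOC, sg_kfree);

        return ret;
}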
Hello,
On 08/30/2010 05:01 PM, Jeffrey Carlyle wrote:
> When alloc fails, free_table is being called. Depending on the number of
> bytes requested, we determine if we are going to call _get_free_page()
> or kmalloc(). When alloc fails, our math is wrong (due to sg_size - 1),
> and the last buffer is wrongfully assumed to have been allocated by
> kmalloc. Hence, kfree gets called and a panic occurs.
>
> Signed-off-by: Jeffrey Carlyle <[email protected]>
> Signed-off-by: Olusanya Soyannwo <[email protected]>
> Cc: Tejun Heo <[email protected]>
> Cc: Jens Axboe <[email protected]>
Supposing it's verified to fix the same issue,
Acked-by: Tejun Heo <[email protected]>
trivial suggestions below,
> ---
> lib/scatterlist.c | 8 +++++++-
> 1 files changed, 7 insertions(+), 1 deletions(-)
>
> diff --git a/lib/scatterlist.c b/lib/scatterlist.c
> index a5ec428..9bc637f 100644
> --- a/lib/scatterlist.c
> +++ b/lib/scatterlist.c
> @@ -248,8 +248,14 @@ int __sg_alloc_table(struct sg_table *table, unsigned int nents,
> left -= sg_size;
>
> sg = alloc_fn(alloc_size, gfp_mask);
> - if (unlikely(!sg))
> + if (unlikely(!sg)) {
> + /*
> + * Adjust entry count so that proper free function is
> + * used in sg_kfree.
> + */
I think it would be better to explain why the adjustment is necessary. IOW,
something like "Adjust entry count to reflect that the last entry of
the previous table won't be used for linkage. Without this,
sg_kfree() may get confused."
Thanks.
--
tejun
When alloc fails, free_table is being called. Depending on the number of
bytes requested, we determine if we are going to call _get_free_page()
or kmalloc(). When alloc fails, our math is wrong (due to sg_size - 1),
and the last buffer is wrongfully assumed to have been allocated by
kmalloc. Hence, kfree gets called and a panic occurs.
Signed-off-by: Jeffrey Carlyle <[email protected]>
Signed-off-by: Olusanya Soyannwo <[email protected]>
Cc: Tejun Heo <[email protected]>
Cc: Jens Axboe <[email protected]>
---
lib/scatterlist.c | 10 +++++++++-
1 files changed, 9 insertions(+), 1 deletions(-)
diff --git a/lib/scatterlist.c b/lib/scatterlist.c
index a5ec428..daaab2f 100644
--- a/lib/scatterlist.c
+++ b/lib/scatterlist.c
@@ -248,8 +248,16 @@ int __sg_alloc_table(struct sg_table *table, unsigned int nents,
 		left -= sg_size;
 
 		sg = alloc_fn(alloc_size, gfp_mask);
-		if (unlikely(!sg))
+		if (unlikely(!sg)) {
+			/*
+			 * Adjust entry count so that proper free function is
+			 * used in sg_kfree.
+			 */
+			if(pvr)
+				table->nents = ++table->orig_nents;
+
 			return -ENOMEM;
+		}
 
 		sg_init_table(sg, alloc_size);
 		table->nents = table->orig_nents += sg_size;
--
1.6.3.3
When alloc fails, free_table is being called. Depending on the number of
bytes requested, we determine if we are going to call _get_free_page()
or kmalloc(). When alloc fails, our math is wrong (due to sg_size - 1),
and the last buffer is wrongfully assumed to have been allocated by
kmalloc. Hence, kfree gets called and a panic occurs.
Signed-off-by: Jeffrey Carlyle <[email protected]>
Signed-off-by: Olusanya Soyannwo <[email protected]>
Cc: Tejun Heo <[email protected]>
Cc: Jens Axboe <[email protected]>
---
lib/scatterlist.c | 12 +++++++++++-
1 files changed, 11 insertions(+), 1 deletions(-)
diff --git a/lib/scatterlist.c b/lib/scatterlist.c
index a5ec428..a654897 100644
--- a/lib/scatterlist.c
+++ b/lib/scatterlist.c
@@ -248,8 +248,18 @@ int __sg_alloc_table(struct sg_table *table, unsigned int nents,
 		left -= sg_size;
 
 		sg = alloc_fn(alloc_size, gfp_mask);
-		if (unlikely(!sg))
+		if (unlikely(!sg)) {
+			/*
+			 * Adjust entry count to reflect that the last
+			 * entry of the previous table won't be used for
+			 * linkage. Without this, sg_kfree() may get
+			 * confused.
+			 */
+			if(pvr)
+				table->nents = ++table->orig_nents;
+
 			return -ENOMEM;
+		}
 
 		sg_init_table(sg, alloc_size);
 		table->nents = table->orig_nents += sg_size;
--
1.6.3.3
On 08/30/2010 05:58 PM, Jeffrey Carlyle wrote:
> + /*
> + * Adjust entry count so that proper free function is
> + * used in sg_kfree.
> + */
> + if(pvr)
I suppose you meant "if (prv)"?
--
tejun
When alloc fails, free_table is being called. Depending on the number of
bytes requested, we determine if we are going to call _get_free_page()
or kmalloc(). When alloc fails, our math is wrong (due to sg_size - 1),
and the last buffer is wrongfully assumed to have been allocated by
kmalloc. Hence, kfree gets called and a panic occurs.
Signed-off-by: Jeffrey Carlyle <[email protected]>
Signed-off-by: Olusanya Soyannwo <[email protected]>
Cc: Tejun Heo <[email protected]>
Cc: Jens Axboe <[email protected]>
---
lib/scatterlist.c | 12 +++++++++++-
1 files changed, 11 insertions(+), 1 deletions(-)
diff --git a/lib/scatterlist.c b/lib/scatterlist.c
index a5ec428..4dfdca2 100644
--- a/lib/scatterlist.c
+++ b/lib/scatterlist.c
@@ -248,8 +248,18 @@ int __sg_alloc_table(struct sg_table *table, unsigned int nents,
 		left -= sg_size;
 
 		sg = alloc_fn(alloc_size, gfp_mask);
-		if (unlikely(!sg))
+		if (unlikely(!sg)) {
+			/*
+			 * Adjust entry count to reflect that the last
+			 * entry of the previous table won't be used for
+			 * linkage. Without this, sg_kfree() may get
+			 * confused.
+			 */
+			if (pvr)
+				table->nents = ++table->orig_nents;
+
 			return -ENOMEM;
+		}
 
 		sg_init_table(sg, alloc_size);
 		table->nents = table->orig_nents += sg_size;
--
1.6.3.3
On Mon, Aug 30, 2010 at 11:05 AM, Tejun Heo <[email protected]> wrote:
> On 08/30/2010 05:58 PM, Jeffrey Carlyle wrote:
>> +                     /*
>> +                      * Adjust entry count so that proper free function is
>> +                      * used in sg_kfree.
>> +                      */
>> +                     if(pvr)
>
> I suppose you meant "if (prv)"?
I did. I ran checkpatch.pl and submitted a new patch. Thank you for your patience.
On 08/30/2010 06:08 PM, Jeffrey Carlyle wrote:
> + if (pvr)
"if (prv)" not "if (pvr)". Please slow down a bit and build before
hitting the send button.
Thanks.
--
tejun
When alloc fails, free_table is being called. Depending on the number of
bytes requested, we determine if we are going to call _get_free_page()
or kmalloc(). When alloc fails, our math is wrong (due to sg_size - 1),
and the last buffer is wrongfully assumed to have been allocated by
kmalloc. Hence, kfree gets called and a panic occurs.
Signed-off-by: Jeffrey Carlyle <[email protected]>
Signed-off-by: Olusanya Soyannwo <[email protected]>
Cc: Tejun Heo <[email protected]>
Cc: Jens Axboe <[email protected]>
---
lib/scatterlist.c | 12 +++++++++++-
1 files changed, 11 insertions(+), 1 deletions(-)
diff --git a/lib/scatterlist.c b/lib/scatterlist.c
index a5ec428..ecdea70 100644
--- a/lib/scatterlist.c
+++ b/lib/scatterlist.c
@@ -248,8 +248,18 @@ int __sg_alloc_table(struct sg_table *table, unsigned int nents,
 		left -= sg_size;
 
 		sg = alloc_fn(alloc_size, gfp_mask);
-		if (unlikely(!sg))
+		if (unlikely(!sg)) {
+			/*
+			 * Adjust entry count to reflect that the last
+			 * entry of the previous table won't be used for
+			 * linkage. Without this, sg_kfree() may get
+			 * confused.
+			 */
+			if (prv)
+				table->nents = ++table->orig_nents;
+
 			return -ENOMEM;
+		}
 
 		sg_init_table(sg, alloc_size);
 		table->nents = table->orig_nents += sg_size;
--
1.6.3.3
On 08/30/2010 06:19 PM, Jeffrey Carlyle wrote:
> When alloc fails, free_table is being called. Depending on the number of
> bytes requested, we determine if we are going to call _get_free_page()
> or kmalloc(). When alloc fails, our math is wrong (due to sg_size - 1),
> and the last buffer is wrongfully assumed to have been allocated by
> kmalloc. Hence, kfree gets called and a panic occurs.
>
> Signed-off-by: Jeffrey Carlyle <[email protected]>
> Signed-off-by: Olusanya Soyannwo <[email protected]>
> Cc: Tejun Heo <[email protected]>
> Cc: Jens Axboe <[email protected]>
Acked-by: Tejun Heo <[email protected]>
Thanks. :-)
--
tejun
On 2010-08-30 18:19, Jeffrey Carlyle wrote:
> When alloc fails, free_table is being called. Depending on the number of
> bytes requested, we determine if we are going to call _get_free_page()
> or kmalloc(). When alloc fails, our math is wrong (due to sg_size - 1),
> and the last buffer is wrongfully assumed to have been allocated by
> kmalloc. Hence, kfree gets called and a panic occurs.
That's a lot of revs, thanks for getting it done (and Tejun for
the careful reviews).
--
Jens Axboe