Changelog v3 vs v2:
- Further changes to the changelog message to explain the bug case of
  incorrect use of a kunmap()'ed buffer
Changelog v2 vs v1:
- Changelog message now correctly explains the problem

Fixes a bug introduced by commit c8f2f0db1 ("zram: Fix handling
of incompressible pages") which caused invalid memory references
during disk writes. The invalid references could occur in two cases:
- Incoming data expands on compression: in this case, a reference was
  made to the kunmap()'ed bio page.
- Partial (non-PAGE_SIZE) write with incompressible data: in this
  case, a reference was made to a kfree()'ed buffer.

Fixes bug 50081:
https://bugzilla.kernel.org/show_bug.cgi?id=50081

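In simplified form, the broken ordering looked roughly like the sketch
below. This is an illustrative sketch only, not the actual driver code;
the function and parameter names are made up here and merely mirror the
ones used in zram_bvec_write().

/* Illustrative sketch of the pre-fix ordering (not the actual zram code). */
static void broken_order_sketch(struct page *page, void *cmem, void *uncmem,
				size_t max_zpage_size, bool partial)
{
	unsigned char *user_mem;
	size_t clen = PAGE_SIZE;	/* pretend the data did not compress */

	user_mem = kmap_atomic(page);
	if (!partial)
		uncmem = user_mem;	/* full-page write compresses the bio page directly */

	/* ... lzo1x_1_compress(uncmem, PAGE_SIZE, dst, &clen, workmem) ran here ... */

	kunmap_atomic(user_mem);	/* bio page unmapped */
	if (partial)
		kfree(uncmem);		/* bounce buffer freed */

	if (clen > max_zpage_size) {
		/*
		 * Incompressible data: fall back to storing the raw page.
		 * But uncmem now points either at the kunmap()'ed bio page
		 * (full-page write) or at a kfree()'ed buffer (partial
		 * write), so this copy is an invalid memory reference.
		 */
		memcpy(cmem, uncmem, PAGE_SIZE);
	}
}
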
Signed-off-by: Nitin Gupta <[email protected]>
Reported-by: Mihail Kasadjikov <[email protected]>
Reported-by: Tomas M <[email protected]>
Reviewed-by: Minchan Kim <[email protected]>
---
drivers/staging/zram/zram_drv.c | 39 ++++++++++++++++++++++++---------------
 drivers/staging/zram/zram_drv.c | 39 ++++++++++++++++++++++++---------------
 1 file changed, 24 insertions(+), 15 deletions(-)

diff --git a/drivers/staging/zram/zram_drv.c b/drivers/staging/zram/zram_drv.c
index fb4a7c9..f2a73bd 100644
--- a/drivers/staging/zram/zram_drv.c
+++ b/drivers/staging/zram/zram_drv.c
@@ -265,7 +265,7 @@ out_cleanup:
 static int zram_bvec_write(struct zram *zram, struct bio_vec *bvec, u32 index,
 			   int offset)
 {
-	int ret;
+	int ret = 0;
 	size_t clen;
 	unsigned long handle;
 	struct page *page;
@@ -286,10 +286,8 @@ static int zram_bvec_write(struct zram *zram, struct bio_vec *bvec, u32 index,
 			goto out;
 		}
 		ret = zram_decompress_page(zram, uncmem, index);
-		if (ret) {
-			kfree(uncmem);
+		if (ret)
 			goto out;
-		}
 	}

 	/*
@@ -302,16 +300,18 @@ static int zram_bvec_write(struct zram *zram, struct bio_vec *bvec, u32 index,

 	user_mem = kmap_atomic(page);

-	if (is_partial_io(bvec))
+	if (is_partial_io(bvec)) {
 		memcpy(uncmem + offset, user_mem + bvec->bv_offset,
 		       bvec->bv_len);
-	else
+		kunmap_atomic(user_mem);
+		user_mem = NULL;
+	} else {
 		uncmem = user_mem;
+	}

 	if (page_zero_filled(uncmem)) {
-		kunmap_atomic(user_mem);
-		if (is_partial_io(bvec))
-			kfree(uncmem);
+		if (!is_partial_io(bvec))
+			kunmap_atomic(user_mem);
 		zram_stat_inc(&zram->stats.pages_zero);
 		zram_set_flag(zram, index, ZRAM_ZERO);
 		ret = 0;
@@ -321,9 +321,11 @@ static int zram_bvec_write(struct zram *zram, struct bio_vec *bvec, u32 index,
 	ret = lzo1x_1_compress(uncmem, PAGE_SIZE, src, &clen,
 			       zram->compress_workmem);

-	kunmap_atomic(user_mem);
-	if (is_partial_io(bvec))
-		kfree(uncmem);
+	if (!is_partial_io(bvec)) {
+		kunmap_atomic(user_mem);
+		user_mem = NULL;
+		uncmem = NULL;
+	}

 	if (unlikely(ret != LZO_E_OK)) {
 		pr_err("Compression failed! err=%d\n", ret);
@@ -332,8 +334,10 @@ static int zram_bvec_write(struct zram *zram, struct bio_vec *bvec, u32 index,

 	if (unlikely(clen > max_zpage_size)) {
 		zram_stat_inc(&zram->stats.bad_compress);
-		src = uncmem;
 		clen = PAGE_SIZE;
+		src = NULL;
+		if (is_partial_io(bvec))
+			src = uncmem;
 	}

 	handle = zs_malloc(zram->mem_pool, clen);
@@ -345,7 +349,11 @@ static int zram_bvec_write(struct zram *zram, struct bio_vec *bvec, u32 index,
 	}
 	cmem = zs_map_object(zram->mem_pool, handle, ZS_MM_WO);

+	if ((clen == PAGE_SIZE) && !is_partial_io(bvec))
+		src = kmap_atomic(page);
 	memcpy(cmem, src, clen);
+	if ((clen == PAGE_SIZE) && !is_partial_io(bvec))
+		kunmap_atomic(src);

 	zs_unmap_object(zram->mem_pool, handle);

@@ -358,9 +366,10 @@ static int zram_bvec_write(struct zram *zram, struct bio_vec *bvec, u32 index,
 	if (clen <= PAGE_SIZE / 2)
 		zram_stat_inc(&zram->stats.good_compress);

-	return 0;
-
 out:
+	if (is_partial_io(bvec))
+		kfree(uncmem);
+
 	if (ret)
 		zram_stat64_inc(zram, &zram->stats.failed_writes);
 	return ret;
--
1.7.10.4
Hi Greg,
I would like to tidy up zram_bvec_write, but that needs more churn
than is required to fix this bug and would make review harder. So I want you
to pick up this patch ASAP because it is a candidate for stable.
Thanks.
--
Kind regards,
Minchan Kim
On Fri, Nov 30, 2012 at 09:36:09AM +0900, Minchan Kim wrote:
> Hi Greg,
>
> I would like to tidy up zram_bvec_write, but that needs more churn
> than is required to fix this bug and would make review harder. So I want you
> to pick up this patch ASAP because it is a candidate for stable.
It is? How come no one said that in the patch itself? Please read
Documentation/stable_kernel_rules.txt for how to properly include a
patch into the stable tree.
What stable trees should it go to?
thanks,
greg k-h
On Fri, Nov 30, 2012 at 09:36:09AM +0900, Minchan Kim wrote:
> Hi Greg,
>
> I would like to tidy up zram_bvec_write, but that needs more churn
> than is required to fix this bug and would make review harder. So I want you
> to pick up this patch ASAP because it is a candidate for stable.
Ok, I have way too many different, and competing, zram patches in my
inbox right now.
So I'm just going to delete them all, and ask Nitin to resend them all
so I know what is going on here.
Nitin, is that ok?
thanks,
greg k-h
On Thu, Nov 29, 2012 at 6:13 PM, Greg KH <[email protected]> wrote:
> On Fri, Nov 30, 2012 at 09:36:09AM +0900, Minchan Kim wrote:
>> Hi Greg,
>>
>> I would like to tidy up zram_bvec_write, but that needs more churn
>> than is required to fix this bug and would make review harder. So I want you
>> to pick up this patch ASAP because it is a candidate for stable.
>
> Ok, I have way too many different, and competing, zram patches in my
> inbox right now.
>
> So I'm just going to delete them all, and ask Nitin to resend them all
> so I know what is going on here.
>
> Nitin, is that ok?
>
Sure. I will also include the affected stable versions in the changelog (or
whatever stable_kernel_rules.txt asks for).
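For reference, Documentation/stable_kernel_rules.txt asks for a tag like
the one below in the sign-off area of the patch so the stable maintainers
pick the fix up automatically (the version annotation here is only an
example, not the range decided for this patch):

    Cc: stable@vger.kernel.org # 3.6.x
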
Thanks,
Nitin