2012-10-11 00:42:31

by Nitin Gupta

Subject: [PATCH] [staging][zram] Fix handling of incompressible pages

Change 130f315a (staging: zram: remove special handle of uncompressed page)
introduced a bug in the handling of incompressible pages which resulted in
memory allocation failure for such pages.

When a page expands on compression, say from 4K to 4K+30, we were trying
to do zs_malloc(pool, 4K+30). However, the maximum size zsmalloc can
allocate is PAGE_SIZE, since an allocator meant to pack compressed pages
gains nothing from objects larger than a page, so such allocation
requests always return failure (0).
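
To make the expansion concrete: LZO1X's worst-case output for an n-byte
input is n + n/16 + 64 + 3 (the lzo1x_worst_compress() bound from
<linux/lzo.h>), so a 4K page can come back as large as 4419 bytes. A
minimal userspace sketch of that arithmetic, for illustration only:

#include <stdio.h>

/* Mirrors the lzo1x_worst_compress() macro from <linux/lzo.h>. */
static size_t lzo1x_worst_compress(size_t n)
{
	return n + n / 16 + 64 + 3;
}

int main(void)
{
	size_t page_size = 4096;	/* assume a 4K PAGE_SIZE */

	/* Prints 4419: already-compressed or random data may expand
	 * past PAGE_SIZE, but zsmalloc has no size class above
	 * PAGE_SIZE, so zs_malloc() with such a length returns 0. */
	printf("worst-case clen for a %zu-byte page: %zu\n",
	       page_size, lzo1x_worst_compress(page_size));
	return 0;
}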

For a page whose compressed size is larger than the original (which can
happen with already-compressed or random data), there is no point in
storing the compressed version: it would take more space and would also
cost decompression time when the page is needed again. So, the fix is to
store any page whose compressed size exceeds a threshold (max_zpage_size)
as-is, i.e. without compression. The memory required for storing such an
uncompressed page can then be requested from zsmalloc, which supports
PAGE_SIZE sized allocations.
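
In simplified form, the write path now picks its source buffer as
sketched below (userspace illustration; pick_src() is a made-up name,
and the threshold assumes the PAGE_SIZE / 4 * 3 default for
max_zpage_size from zram_drv.h):

#include <stddef.h>

#define PAGE_SIZE 4096

/* Assumed default from zram_drv.h: pages that do not compress below
 * this threshold are not worth storing compressed. */
static const size_t max_zpage_size = PAGE_SIZE / 4 * 3;

/* Decide what gets handed to zs_malloc() and copied into the pool:
 * the compressed buffer, or the original page stored as-is. */
static const void *pick_src(const void *page, const void *cbuf,
			    size_t *clen)
{
	if (*clen > max_zpage_size) {
		*clen = PAGE_SIZE;	/* store uncompressed */
		return page;
	}
	return cbuf;			/* store compressed */
}

Either way, zs_malloc() is then called with a length of at most
PAGE_SIZE, which zsmalloc can serve.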

Lastly, the fix ensures that we do not attempt to "decompress" a page
that was stored in uncompressed form -- we just memcpy() such pages out.
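
Since compressed objects are capped at max_zpage_size, which is always
below PAGE_SIZE, a stored size of exactly PAGE_SIZE unambiguously marks
an uncompressed page. The read-side dispatch, as a userspace sketch
(decompress() merely stands in for lzo1x_decompress_safe()):

#include <string.h>

#define PAGE_SIZE 4096

/* Stand-in for lzo1x_decompress_safe(). */
int decompress(const void *src, size_t src_len,
	       void *dst, size_t *dst_len);

static int read_stored_page(void *dst, const void *stored,
			    size_t stored_len)
{
	size_t dlen = PAGE_SIZE;

	if (stored_len == PAGE_SIZE) {
		/* Page was stored as-is: just copy it out. */
		memcpy(dst, stored, PAGE_SIZE);
		return 0;
	}
	/* Otherwise it really is LZO-compressed data. */
	return decompress(stored, stored_len, dst, &dlen);
}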

Signed-off-by: Nitin Gupta <[email protected]>
Reported-by: [email protected]
Reported-by: [email protected]
Reported-by: [email protected]
Acked-by: Minchan Kim <[email protected]>
---
drivers/staging/zram/zram_drv.c | 12 ++++++++++--
1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/drivers/staging/zram/zram_drv.c b/drivers/staging/zram/zram_drv.c
index 653b074..6edefde 100644
--- a/drivers/staging/zram/zram_drv.c
+++ b/drivers/staging/zram/zram_drv.c
@@ -223,8 +223,13 @@ static int zram_bvec_read(struct zram *zram, struct bio_vec *bvec,
 	cmem = zs_map_object(zram->mem_pool, zram->table[index].handle,
 				ZS_MM_RO);
 
-	ret = lzo1x_decompress_safe(cmem, zram->table[index].size,
+	if (zram->table[index].size == PAGE_SIZE) {
+		memcpy(uncmem, cmem, PAGE_SIZE);
+		ret = LZO_E_OK;
+	} else {
+		ret = lzo1x_decompress_safe(cmem, zram->table[index].size,
 				    uncmem, &clen);
+	}
 
 	if (is_partial_io(bvec)) {
 		memcpy(user_mem + bvec->bv_offset, uncmem + offset,
@@ -342,8 +347,11 @@ static int zram_bvec_write(struct zram *zram, struct bio_vec *bvec, u32 index,
 		goto out;
 	}
 
-	if (unlikely(clen > max_zpage_size))
+	if (unlikely(clen > max_zpage_size)) {
 		zram_stat_inc(&zram->stats.bad_compress);
+		src = uncmem;
+		clen = PAGE_SIZE;
+	}
 
 	handle = zs_malloc(zram->mem_pool, clen);
 	if (!handle) {
--
1.7.9.5


2012-10-22 20:43:13

by Greg KH

Subject: Re: [PATCH] [staging][zram] Fix handling of incompressible pages

On Wed, Oct 10, 2012 at 05:42:18PM -0700, Nitin Gupta wrote:
> Change 130f315a (staging: zram: remove special handle of uncompressed page)
> introduced a bug in the handling of incompressible pages which resulted in
> memory allocation failure for such pages.
>
> When a page expands on compression, say from 4K to 4K+30, we were trying
> to do zs_malloc(pool, 4K+30). However, the maximum size zsmalloc can
> allocate is PAGE_SIZE, since an allocator meant to pack compressed pages
> gains nothing from objects larger than a page, so such allocation
> requests always return failure (0).
>
> For a page whose compressed size is larger than the original (which can
> happen with already-compressed or random data), there is no point in
> storing the compressed version: it would take more space and would also
> cost decompression time when the page is needed again. So, the fix is to
> store any page whose compressed size exceeds a threshold (max_zpage_size)
> as-is, i.e. without compression. The memory required for storing such an
> uncompressed page can then be requested from zsmalloc, which supports
> PAGE_SIZE sized allocations.
>
> Lastly, the fix ensures that we do not attempt to "decompress" a page
> that was stored in uncompressed form -- we just memcpy() such pages out.

So this fix needs to go to the stable 3.6 release also, right?

thanks,

greg k-h

2012-10-22 21:48:08

by Nitin Gupta

Subject: Re: [PATCH] [staging][zram] Fix handling of incompressible pages

On Mon, Oct 22, 2012 at 1:43 PM, Greg KH <[email protected]> wrote:
> On Wed, Oct 10, 2012 at 05:42:18PM -0700, Nitin Gupta wrote:
>> Change 130f315a (staging: zram: remove special handle of uncompressed page)
>> introduced a bug in the handling of incompressible pages which resulted in
>> memory allocation failure for such pages.
>>
>> When a page expands on compression, say from 4K to 4K+30, we were trying
>> to do zs_malloc(pool, 4K+30). However, the maximum size zsmalloc can
>> allocate is PAGE_SIZE, since an allocator meant to pack compressed pages
>> gains nothing from objects larger than a page, so such allocation
>> requests always return failure (0).
>>
>> For a page whose compressed size is larger than the original (which can
>> happen with already-compressed or random data), there is no point in
>> storing the compressed version: it would take more space and would also
>> cost decompression time when the page is needed again. So, the fix is to
>> store any page whose compressed size exceeds a threshold (max_zpage_size)
>> as-is, i.e. without compression. The memory required for storing such an
>> uncompressed page can then be requested from zsmalloc, which supports
>> PAGE_SIZE sized allocations.
>>
>> Lastly, the fix ensures that we do not attempt to "decompress" a page
>> that was stored in uncompressed form -- we just memcpy() such pages out.
>
> So this fix needs to go to the stable 3.6 release also, right?
>

Forgot to mention -- yes, this needs to be in 3.6 also.

Thanks,
Nitin