2012-10-09 01:32:54

by Nitin Gupta

Subject: [PATCH] [staging][zram] Fix handling of incompressible pages

Change 130f315a introduced a bug in the handling of incompressible
pages which resulted in memory allocation failure for such pages.
The fix is to store the page as-is i.e. without compression if the
compressed size exceeds a threshold (max_zpage_size) and request
exactly PAGE_SIZE sized buffer from zsmalloc.

Signed-off-by: Nitin Gupta <[email protected]>
Reported-by: [email protected]
Reported-by: [email protected]
Reported-by: [email protected]
Tested-by: [email protected]
Tested-by: [email protected]
---
drivers/staging/zram/zram_drv.c | 12 ++++++++++--
1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/drivers/staging/zram/zram_drv.c b/drivers/staging/zram/zram_drv.c
index 653b074..6edefde 100644
--- a/drivers/staging/zram/zram_drv.c
+++ b/drivers/staging/zram/zram_drv.c
@@ -223,8 +223,13 @@ static int zram_bvec_read(struct zram *zram, struct bio_vec *bvec,
 	cmem = zs_map_object(zram->mem_pool, zram->table[index].handle,
 				ZS_MM_RO);
 
-	ret = lzo1x_decompress_safe(cmem, zram->table[index].size,
+	if (zram->table[index].size == PAGE_SIZE) {
+		memcpy(uncmem, cmem, PAGE_SIZE);
+		ret = LZO_E_OK;
+	} else {
+		ret = lzo1x_decompress_safe(cmem, zram->table[index].size,
 				uncmem, &clen);
+	}
 
 	if (is_partial_io(bvec)) {
 		memcpy(user_mem + bvec->bv_offset, uncmem + offset,
@@ -342,8 +347,11 @@ static int zram_bvec_write(struct zram *zram, struct bio_vec *bvec, u32 index,
 		goto out;
 	}
 
-	if (unlikely(clen > max_zpage_size))
+	if (unlikely(clen > max_zpage_size)) {
 		zram_stat_inc(&zram->stats.bad_compress);
+		src = uncmem;
+		clen = PAGE_SIZE;
+	}
 
 	handle = zs_malloc(zram->mem_pool, clen);
 	if (!handle) {
--
1.7.9.5


2012-10-09 08:46:26

by Dan Carpenter

Subject: Re: [PATCH] [staging][zram] Fix handling of incompressible pages

On Mon, Oct 08, 2012 at 06:32:44PM -0700, Nitin Gupta wrote:
> Change 130f315a introduced a bug in the handling of incompressible
^^^^^^^^
When you mention a patch, please include the human readable patch
title as well as the hash. "staging: zram: remove special handle of
uncompressed page".

regards,
dan carpenter

2012-10-09 13:31:56

by Minchan Kim

Subject: Re: [PATCH] [staging][zram] Fix handling of incompressible pages

Hi Nitin,

On Mon, Oct 08, 2012 at 06:32:44PM -0700, Nitin Gupta wrote:
> Change 130f315a introduced a bug in the handling of incompressible
> pages which resulted in memory allocation failure for such pages.
> The fix is to store the page as-is i.e. without compression if the
> compressed size exceeds a threshold (max_zpage_size) and request
> exactly PAGE_SIZE sized buffer from zsmalloc.

It seems you found a bug and already fixed it with the changes below.
But unfortunately, the description isn't enough for me to understand the problem.
Could you explain it in detail?
You said it results in a memory allocation failure. What is the failure?
Do you mean this code, failing because a zspage needs a few pages to meet the class size?

	handle = zs_malloc(zram->mem_pool, clen);
	if (!handle) {
		pr_info("Error allocating memory for compressed "
			"page: %u, size=%zu\n", index, clen);
		ret = -ENOMEM;
		goto out;
	}

So instead of allocating more pages for an incompressible page to make a zspage,
just allocate a page from the PAGE_SIZE class without compression?

>
> Signed-off-by: Nitin Gupta <[email protected]>
> Reported-by: [email protected]
> Reported-by: [email protected]
> Reported-by: [email protected]
> Tested-by: [email protected]
> Tested-by: [email protected]
> ---
> drivers/staging/zram/zram_drv.c | 12 ++++++++++--
> 1 file changed, 10 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/staging/zram/zram_drv.c b/drivers/staging/zram/zram_drv.c
> index 653b074..6edefde 100644
> --- a/drivers/staging/zram/zram_drv.c
> +++ b/drivers/staging/zram/zram_drv.c
> @@ -223,8 +223,13 @@ static int zram_bvec_read(struct zram *zram, struct bio_vec *bvec,
> cmem = zs_map_object(zram->mem_pool, zram->table[index].handle,
> ZS_MM_RO);
>
> - ret = lzo1x_decompress_safe(cmem, zram->table[index].size,
> + if (zram->table[index].size == PAGE_SIZE) {
> + memcpy(uncmem, cmem, PAGE_SIZE);
> + ret = LZO_E_OK;
> + } else {
> + ret = lzo1x_decompress_safe(cmem, zram->table[index].size,
> uncmem, &clen);
> + }
>
> if (is_partial_io(bvec)) {
> memcpy(user_mem + bvec->bv_offset, uncmem + offset,
> @@ -342,8 +347,11 @@ static int zram_bvec_write(struct zram *zram, struct bio_vec *bvec, u32 index,
> goto out;
> }
>
> - if (unlikely(clen > max_zpage_size))
> + if (unlikely(clen > max_zpage_size)) {
> zram_stat_inc(&zram->stats.bad_compress);
> + src = uncmem;
> + clen = PAGE_SIZE;
> + }
>
> handle = zs_malloc(zram->mem_pool, clen);
> if (!handle) {
> --
> 1.7.9.5
>

--
Kind Regards,
Minchan Kim

2012-10-09 17:35:31

by Nitin Gupta

Subject: Re: [PATCH] [staging][zram] Fix handling of incompressible pages

Hi Minchan,

On 10/09/2012 06:31 AM, Minchan Kim wrote:
>
> On Mon, Oct 08, 2012 at 06:32:44PM -0700, Nitin Gupta wrote:
>> Change 130f315a introduced a bug in the handling of incompressible
>> pages which resulted in memory allocation failure for such pages.
>> The fix is to store the page as-is i.e. without compression if the
>> compressed size exceeds a threshold (max_zpage_size) and request
>> exactly PAGE_SIZE sized buffer from zsmalloc.
>
> It seems you found a bug and already fixed it with the changes below.
> But unfortunately, the description isn't enough for me to understand the problem.
> Could you explain it in detail?
> You said it results in a memory allocation failure. What is the failure?
> Do you mean this code, failing because a zspage needs a few pages to meet the class size?
>
> handle = zs_malloc(zram->mem_pool, clen);
> if (!handle) {
> pr_info("Error allocating memory for compressed "
> "page: %u, size=%zu\n", index, clen);
> ret = -ENOMEM;
> goto out;
> }
>
> So instead of allocating more pages for an incompressible page to make a zspage,
> just allocate a page from the PAGE_SIZE class without compression?
>

When a page expands on compression, say from 4K to 4K+30, we were trying
to do zs_malloc(pool, 4K+30). However, the maximum size which zsmalloc
can allocate is PAGE_SIZE (for obvious reasons), so such allocation
requests always return failure (0).
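
To make that concrete, here is a rough userspace model of the failure;
fake_zs_malloc() is just a stand-in I am inventing here to mimic zsmalloc's
size limit, not the real API:

/* rough userspace model of the allocation failure described above */
#include <stdio.h>
#include <stdlib.h>

#define PAGE_SIZE 4096UL

/* stand-in: zsmalloc cannot allocate objects larger than PAGE_SIZE */
static void *fake_zs_malloc(size_t size)
{
        if (size > PAGE_SIZE)
                return NULL;    /* this is the allocation failure zram hit */
        return malloc(size);
}

int main(void)
{
        size_t clen = PAGE_SIZE + 30;   /* "compressed" size of an incompressible page */

        if (!fake_zs_malloc(clen))
                printf("allocation of %zu bytes fails: larger than PAGE_SIZE (%lu)\n",
                       clen, PAGE_SIZE);
        return 0;
}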

For a page that has compressed size larger than the original size (this
may happen with already compressed or random data), there is no point
storing the compressed version as that would take more space and would
also require time for decompression when needed again. So, the fix is to
store any page whose compressed size exceeds a threshold
(max_zpage_size) as-is, i.e. without compression. Memory required
for storing this uncompressed page can then be requested from zsmalloc
which supports PAGE_SIZE sized allocations.

Lastly, the fix checks that we do not attempt to "decompress" the page
which we stored in the uncompressed form -- we just memcpy() out such pages.
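
Purely for illustration, the overall store/load decision can be modeled in
userspace along these lines (PAGE_SIZE, the threshold and the expanding stub
compressor are assumptions standing in for the real LZO and zsmalloc paths,
not the driver code):

/* userspace sketch of the store/load decision, not the driver code */
#include <stdio.h>
#include <string.h>

#define PAGE_SIZE       4096
#define MAX_ZPAGE_SIZE  (PAGE_SIZE / 4 * 3)     /* illustrative threshold */

/* stub: pretend compression expanded the page by 30 bytes */
static size_t stub_compress(const unsigned char *in, unsigned char *out)
{
        memcpy(out, in, PAGE_SIZE);
        memset(out + PAGE_SIZE, 0xAA, 30);
        return PAGE_SIZE + 30;
}

int main(void)
{
        static unsigned char page[PAGE_SIZE];           /* original data */
        static unsigned char cbuf[2 * PAGE_SIZE];       /* compression output */
        static unsigned char store[PAGE_SIZE];          /* what we hand to the allocator */
        static unsigned char out[PAGE_SIZE];            /* read-back result */
        const unsigned char *src = cbuf;
        size_t clen;

        memset(page, 0x5A, sizeof(page));

        /* write path: fall back to storing the raw page when it "expands" */
        clen = stub_compress(page, cbuf);
        if (clen > MAX_ZPAGE_SIZE) {
                src = page;             /* store as-is */
                clen = PAGE_SIZE;       /* request exactly PAGE_SIZE from the allocator */
        }
        memcpy(store, src, clen);       /* stands in for zs_malloc() + copy */

        /* read path: size == PAGE_SIZE marks an uncompressed page, no decompression */
        if (clen == PAGE_SIZE)
                memcpy(out, store, PAGE_SIZE);
        /* else: lzo1x_decompress_safe(store, clen, out, &dlen) in the real driver */

        printf("roundtrip %s\n", memcmp(page, out, PAGE_SIZE) ? "mismatch" : "ok");
        return 0;
}

The nice property is that size == PAGE_SIZE doubles as the "stored
uncompressed" marker, so no extra per-entry flag is needed.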

Thanks,
Nitin


>>
>> Signed-off-by: Nitin Gupta <[email protected]>
>> Reported-by: [email protected]
>> Reported-by: [email protected]
>> Reported-by: [email protected]
>> Tested-by: [email protected]
>> Tested-by: [email protected]
>> ---
>> drivers/staging/zram/zram_drv.c | 12 ++++++++++--
>> 1 file changed, 10 insertions(+), 2 deletions(-)
>>
>> diff --git a/drivers/staging/zram/zram_drv.c b/drivers/staging/zram/zram_drv.c
>> index 653b074..6edefde 100644
>> --- a/drivers/staging/zram/zram_drv.c
>> +++ b/drivers/staging/zram/zram_drv.c
>> @@ -223,8 +223,13 @@ static int zram_bvec_read(struct zram *zram, struct bio_vec *bvec,
>> cmem = zs_map_object(zram->mem_pool, zram->table[index].handle,
>> ZS_MM_RO);
>>
>> - ret = lzo1x_decompress_safe(cmem, zram->table[index].size,
>> + if (zram->table[index].size == PAGE_SIZE) {
>> + memcpy(uncmem, cmem, PAGE_SIZE);
>> + ret = LZO_E_OK;
>> + } else {
>> + ret = lzo1x_decompress_safe(cmem, zram->table[index].size,
>> uncmem, &clen);
>> + }
>>
>> if (is_partial_io(bvec)) {
>> memcpy(user_mem + bvec->bv_offset, uncmem + offset,
>> @@ -342,8 +347,11 @@ static int zram_bvec_write(struct zram *zram, struct bio_vec *bvec, u32 index,
>> goto out;
>> }
>>
>> - if (unlikely(clen > max_zpage_size))
>> + if (unlikely(clen > max_zpage_size)) {
>> zram_stat_inc(&zram->stats.bad_compress);
>> + src = uncmem;
>> + clen = PAGE_SIZE;
>> + }
>>
>> handle = zs_malloc(zram->mem_pool, clen);
>> if (!handle) {
>> --
>> 1.7.9.5
>>
>

2012-10-09 23:32:31

by Minchan Kim

Subject: Re: [PATCH] [staging][zram] Fix handling of incompressible pages

On Tue, Oct 09, 2012 at 10:35:24AM -0700, Nitin Gupta wrote:
> Hi Minchan,
>
> On 10/09/2012 06:31 AM, Minchan Kim wrote:
> >
> >On Mon, Oct 08, 2012 at 06:32:44PM -0700, Nitin Gupta wrote:
> >>Change 130f315a introduced a bug in the handling of incompressible
> >>pages which resulted in memory allocation failure for such pages.
> >>The fix is to store the page as-is i.e. without compression if the
> >>compressed size exceeds a threshold (max_zpage_size) and request
> >>exactly PAGE_SIZE sized buffer from zsmalloc.
> >
> >It seems you found a bug and already fixed it with the changes below.
> >But unfortunately, the description isn't enough for me to understand the problem.
> >Could you explain it in detail?
> >You said it results in a memory allocation failure. What is the failure?
> >Do you mean this code, failing because a zspage needs a few pages to meet the class size?
> >
> > handle = zs_malloc(zram->mem_pool, clen);
> > if (!handle) {
> > pr_info("Error allocating memory for compressed "
> > "page: %u, size=%zu\n", index, clen);
> > ret = -ENOMEM;
> > goto out;
> > }
> >
> >So instead of allocating more pages for an incompressible page to make a zspage,
> >just allocate a page from the PAGE_SIZE class without compression?
> >
>
> When a page expands on compression, say from 4K to 4K+30, we were
> trying to do zs_malloc(pool, 4K+30). However, the maximum size which
> zsmalloc can allocate is PAGE_SIZE (for obvious reasons), so such
> allocation requests always return failure (0).

Right.
I think it would be better to add this explanation to the description.

>
> For a page that has compressed size larger than the original size
> (this may happen with already compressed or random data), there is
> no point storing the compressed version as that would take more
> space and would also require time for decompression when needed
> again. So, the fix is to store any page whose compressed size
> exceeds a threshold (max_zpage_size) as-is, i.e. without
> compression. Memory required for storing this uncompressed page can

Yes. That's already the definition of max_zpage_size.

> then be requested from zsmalloc which supports PAGE_SIZE sized
> allocations.

>
> Lastly, the fix checks that we do not attempt to "decompress" the
> page which we stored in the uncompressed form -- we just memcpy()
> out such pages.
>
> Thanks,
> Nitin
>
>
> >>
> >>Signed-off-by: Nitin Gupta <[email protected]>
> >>Reported-by: [email protected]
> >>Reported-by: [email protected]
> >>Reported-by: [email protected]
> >>Tested-by: [email protected]
> >>Tested-by: [email protected]
Anyway,
Acked-by: Minchan Kim <[email protected]>

Thanks, Nitin!
--
Kind regards,
Minchan Kim