2022-04-22 19:18:50

by Alexey Romanov

Subject: [PATCH v1] zram: don't retry compress incompressible page

It doesn't make sense to retry compressing an incompressible page
(comp_len == PAGE_SIZE) in the zsmalloc slow path, because it will be
stored uncompressed anyway; another compression attempt only wastes
time. It is enough to re-acquire the per-CPU compression stream
(zcomp_stream_get) and execute the code below.

Signed-off-by: Alexey Romanov <[email protected]>
Signed-off-by: Dmitry Rokosov <[email protected]>
---
drivers/block/zram/zram_drv.c | 15 +++++++++++++--
1 file changed, 13 insertions(+), 2 deletions(-)

diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index cb253d80d72b..bb9dd8b64176 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -1413,9 +1413,20 @@ static int __zram_bvec_write(struct zram *zram, struct bio_vec *bvec,
handle = zs_malloc(zram->mem_pool, comp_len,
GFP_NOIO | __GFP_HIGHMEM |
__GFP_MOVABLE);
- if (handle)
+ if (!handle)
+ return -ENOMEM;
+
+ if (comp_len != PAGE_SIZE)
goto compress_again;
- return -ENOMEM;
+
+ /*
+ * If the page is incompressible, there is no need to compress it
+ * again: it will be stored uncompressed. It is enough to re-acquire
+ * the per-CPU stream via zcomp_stream_get(), which disables CPU
+ * hotplug and grabs the zstrm buffer back, so that the zstrm
+ * dereference below remains valid.
+ */
+ zstrm = zcomp_stream_get(zram->comp);
}

alloced_pages = zs_get_total_pages(zram->mem_pool);
--
2.30.1
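
A rough, user-space-only sketch of the write path this patch produces
(compress(), fast_alloc() and slow_alloc() are made-up stand-ins, not
the real zcomp/zsmalloc API; only the decision to skip the second
compression when comp_len == PAGE_SIZE mirrors the patch itself):

/*
 * Illustrative sketch of __zram_bvec_write()'s allocation flow.
 * The real code lives in drivers/block/zram/zram_drv.c.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define PAGE_SIZE 4096

/* Toy compressor: an all-zero page compresses to 64 bytes, anything
 * else is treated as incompressible (comp_len == PAGE_SIZE). */
static size_t compress(const unsigned char *page, unsigned char *dst)
{
	size_t i;

	for (i = 0; i < PAGE_SIZE; i++)
		if (page[i])
			return PAGE_SIZE;
	memset(dst, 0, 64);
	return 64;
}

/* Toy allocators: the fast one may fail, the slow one tries harder. */
static void *fast_alloc(size_t len) { return (rand() & 1) ? malloc(len) : NULL; }
static void *slow_alloc(size_t len) { return malloc(len); }

static int write_page(const unsigned char *page)
{
	unsigned char buf[PAGE_SIZE];
	size_t comp_len = compress(page, buf);
	void *handle = fast_alloc(comp_len);

	if (!handle) {
		/* Slow path: an allocation that is allowed to sleep. */
		handle = slow_alloc(comp_len);
		if (!handle)
			return -1;
		/*
		 * Only a page that actually compressed is compressed again
		 * after the slow allocation (the stream buffer was given up
		 * while sleeping).  An incompressible page is stored as-is,
		 * which is the short-cut this patch adds.
		 */
		if (comp_len != PAGE_SIZE)
			comp_len = compress(page, buf);
	}

	/* (copying buf or the raw page into the handle is omitted here) */
	printf("stored %zu byte object\n", comp_len);
	free(handle);
	return 0;
}

int main(void)
{
	static unsigned char zero_page[PAGE_SIZE];
	static unsigned char random_page[PAGE_SIZE];
	size_t i;

	for (i = 0; i < PAGE_SIZE; i++)
		random_page[i] = (unsigned char)(rand() | 1);

	write_page(zero_page);		/* compressible */
	write_page(random_page);	/* incompressible: no 2nd compression */
	return 0;
}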


2022-04-22 23:11:31

by Minchan Kim

Subject: Re: [PATCH v1] zram: don't retry compress incompressible page

On Fri, Apr 22, 2022 at 02:59:59PM +0300, Alexey Romanov wrote:
> It doesn't make sense to retry compressing an incompressible page
> (comp_len == PAGE_SIZE) in the zsmalloc slow path, because it will be
> stored uncompressed anyway; another compression attempt only wastes
> time. It is enough to re-acquire the per-CPU compression stream
> (zcomp_stream_get) and execute the code below.

Totally makes sense. However, I'd like to discuss removing the double
compression logic entirely.

Cc'ing Sergey to get his opinion.

[da9556a2367c, zram: user per-cpu compression streams]

The 2nd trial allocation under the per-cpu presumption has been used to
prevent a regression of allocation failures. However, it makes
maintenance harder without significant benefit.
(I gathered some data from my device and writestall was just 38 over
10 days even though swap was very heavy - pswpout 164831211).

Even then, those 38 attempts don't guarantee the 2nd trial allocation
was successful, because it's timing dependent and __GFP_DIRECT_RECLAIM
is never helpful in the reclaim context.

I'd like to remove the double compression logic and make it simple.
What do you think?
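
As a rough sketch of that direction (reusing the stand-in helpers from
the sketch after the patch above; this is not a proposal, and which GFP
flags to keep for the single attempt is a separate question), the write
path could collapse to:

/* Sketch only: the write path with the double compression removed.
 * One compression, one allocation attempt; if the allocation fails,
 * the write returns an error instead of sleeping, reallocating and
 * compressing the page a second time. */
static int write_page_single_trial(const unsigned char *page)
{
	unsigned char buf[PAGE_SIZE];
	size_t comp_len = compress(page, buf);
	void *handle = fast_alloc(comp_len);	/* single attempt */

	if (!handle)
		return -1;			/* no compress_again loop */

	printf("stored %zu byte object\n", comp_len);
	free(handle);
	return 0;
}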

>
> Signed-off-by: Alexey Romanov <[email protected]>
> Signed-off-by: Dmitry Rokosov <[email protected]>
> ---
> drivers/block/zram/zram_drv.c | 15 +++++++++++++--
> 1 file changed, 13 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
> index cb253d80d72b..bb9dd8b64176 100644
> --- a/drivers/block/zram/zram_drv.c
> +++ b/drivers/block/zram/zram_drv.c
> @@ -1413,9 +1413,20 @@ static int __zram_bvec_write(struct zram *zram, struct bio_vec *bvec,
> handle = zs_malloc(zram->mem_pool, comp_len,
> GFP_NOIO | __GFP_HIGHMEM |
> __GFP_MOVABLE);
> - if (handle)
> + if (!handle)
> + return -ENOMEM;
> +
> + if (comp_len != PAGE_SIZE)
> goto compress_again;
> - return -ENOMEM;
> +
> + /*
> + * If the page is incompressible, there is no need to compress it
> + * again: it will be stored uncompressed. It is enough to re-acquire
> + * the per-CPU stream via zcomp_stream_get(), which disables CPU
> + * hotplug and grabs the zstrm buffer back, so that the zstrm
> + * dereference below remains valid.
> + */
> + zstrm = zcomp_stream_get(zram->comp);
> }
>
> alloced_pages = zs_get_total_pages(zram->mem_pool);
> --
> 2.30.1
>

2022-04-25 09:30:29

by Sergey Senozhatsky

Subject: Re: [PATCH v1] zram: don't retry compress incompressible page

On Sat, Apr 23, 2022 at 5:27 AM Minchan Kim <[email protected]> wrote:
[..]
> The 2nd trial allocation under the per-cpu presumption has been used to
> prevent a regression of allocation failures. However, it makes
> maintenance harder without significant benefit.

Agreed.

> I'd like to remove the double compression logic and make it simple.
> What do you think?

I'm all for it.