2023-09-19 03:45:41

by Pankaj Raghav

Subject: Re: [PATCH 0/5] Improve zram writeback performance

Gentle ping, Minchan and Sergey.

Regards,
Pankaj

On 2023-09-11 15:34, Pankaj Raghav wrote:
> ZRAM can have a backing device that is used as a writeback target for
> the pages it stores. The current writeback code (writeback_store())
> issues a synchronous single-page IO to the backing device for every
> page that is written back.
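>
> Conceptually, the current per-page path boils down to something like
> the following (heavily simplified; zram bookkeeping and error handling
> omitted):
>
>     struct bio bio;
>     struct bio_vec bio_vec;
>
>     /* One bio per page: build, submit and wait, then repeat. */
>     bio_init(&bio, zram->bdev, &bio_vec, 1, REQ_OP_WRITE);
>     bio.bi_iter.bi_sector = blk_idx * (PAGE_SIZE >> SECTOR_SHIFT);
>     __bio_add_page(&bio, page, PAGE_SIZE, 0);
>     err = submit_bio_wait(&bio);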
>
> This series implements IO batching while writing back to a backing
> device. The code still issues synchronous IOs, but with larger IO
> sizes whenever possible. This crosses off one of the TODOs in the
> writeback_store() function:
> A single page IO would be inefficient for write...
>
> The idea is to batch the IOs up to a certain limit before the data is
> flushed to the backing device. The batch limit is initially derived
> from the bdi->io_pages value, with an upper cap of 32 pages (128k on
> x86).
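>
> A rough sketch of the batched path (simplified; the variable names
> are illustrative, not the exact code in the series):
>
>     struct backing_dev_info *bdi = zram->bdev->bd_disk->bdi;
>     unsigned int batch = min_t(unsigned int, bdi->io_pages, 32);
>
>     bio_init(&bio, zram->bdev, bio_vecs, batch, REQ_OP_WRITE);
>     /* Batched blocks are allocated contiguously on the backing
>      * device (see alloc_block_bdev_range() in patch 3). */
>     bio.bi_iter.bi_sector = blk_idx * (PAGE_SIZE >> SECTOR_SHIFT);
>
>     for (i = 0; i < batch; i++)
>         __bio_add_page(&bio, pages[i], PAGE_SIZE, 0);
>
>     /* Flush the whole batch with a single synchronous IO. */
>     err = submit_bio_wait(&bio);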
>
> Batching reduces the time to write back 4G of data to an NVMe backing
> device from 68 secs to 15 secs (more than a 4x improvement).
>
> The first 3 patches are prep. The 4th patch implements the main logic
> for IO batching, and the last patch is another cleanup.
>
> Perf:
>
> $ modprobe zram num_devices=1
> $ echo "/dev/nvme0n1" > /sys/block/zram0/backing_dev
> $ echo 6G > /sys/block/zram0/disksize
> $ fio -iodepth=16 -rw=randwrite -ioengine=io_uring -bs=4k -numjobs=1 -size=4G -filename=/dev/zram0 -name=io_uring_1 > /dev/null
> $ echo all > /sys/block/zram0/idle
>
> Without changes:
> $ time echo idle > /sys/block/zram0/writeback
> real 1m8.648s (68 secs)
> user 0m0.000s
> sys 0m24.899s
> $ cat /sys/block/zram0/bd_stat
> 1048576 0 1048576
>
> With changes:
> $ time echo idle > /sys/block/zram0/writeback
> real 0m15.496s (15 secs)
> user 0m0.000s
> sys 0m7.789s
> $ cat /sys/block/zram0/bd_stat
> 1048576 0 1048576
>
> (bd_stat shows bd_count, bd_reads and bd_writes in units of 4k pages;
> both runs write back the full 4G.)
>
> Testing:
>
> A basic end-to-end test (based on Sergey's test flow [1]):
> 1) configure zram0 and add an NVMe device as a writeback device
> 2) get the sha256sum of a tarball
> 3) mkfs.ext4 on zram0, cp the tarball
> 4) idle writeback
> 5) cp the tarball from zram0 to another device (rereads the
> written-back pages) and compare the sha256sum again
> The checksums before and after are verified to be the same.
>
> Writeback limit testing:
>
> 1) configure zram0 and add an NVMe device as a writeback device
> 2) set the writeback limit and enable it
> 3) run a fio job that crosses the writeback limit
> 4) idle writeback
> 5) verify that writeback is capped at the configured limit
>
> $ modprobe zram num_devices=1
> $ echo "/dev/nvme0n1" > /sys/block/zram0/backing_dev
> $ echo 4G > /sys/block/zram0/disksize
> $ echo 1 > /sys/block/zram0/writeback_limit_enable
> $ echo 1002 > /sys/block/zram0/writeback_limit
>
> $ fio -iodepth=16 -rw=write -ioengine=io_uring -bs=4k -numjobs=1 -size=10M -filename=/dev/zram0 -name=io_uring_1
>
> $ echo all > /sys/block/zram0/idle
> $ echo idle > /sys/block/zram0/writeback
> $ cat /sys/block/zram0/bd_stat
> 1002 0 1002
>
> Writeback is limited to the configured value (1002 pages), as
> expected.
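>
> For reference, the limit accounting in today's zram_drv.c decrements
> bd_wb_limit (in 4k units) for every written page, and
> writeback_store() stops once it reaches zero, which means the batch
> size must also be clamped to the remaining limit:
>
>     spin_lock(&zram->wb_limit_lock);
>     if (zram->wb_limit_enable && zram->bd_wb_limit > 0)
>         zram->bd_wb_limit -= 1UL << (PAGE_SHIFT - 12);
>     spin_unlock(&zram->wb_limit_lock);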
>
> [1] https://lore.kernel.org/lkml/[email protected]/
>
> Pankaj Raghav (5):
> zram: move index preparation to a separate function in writeback_store
> zram: encapsulate writeback to the backing bdev in a function
> zram: add alloc_block_bdev_range() and free_block_bdev_range()
> zram: batch IOs during writeback to improve performance
> zram: don't overload blk_idx variable in writeback_store()
>
> drivers/block/zram/zram_drv.c | 318 ++++++++++++++++++++++------------
> 1 file changed, 210 insertions(+), 108 deletions(-)
>
>
> base-commit: 7bc675554773f09d88101bf1ccfc8537dc7c0be9