Forgot to CC LKML and linux-fsdevel.
- Sedat -
---------- Forwarded message ----------
From: Sedat Dilek <[email protected]>
Date: Wed, Dec 31, 2014 at 11:17 PM
Subject: Re: [PATCH v3 0/5] block: loop: convert to blk-mq
To: Ming Lei <[email protected]>
Cc: Jens Axboe <[email protected]>, Keith Busch <[email protected]>
On Wed, Dec 31, 2014 at 9:54 PM, Sedat Dilek <[email protected]> wrote:
[...]
> Now, I wanted to do some benchmarking.
>
> In 2/5 "block: loop: improve performance via blk-mq" [1] you gave some numbers.
> Can you please tell me how you got those numbers (userspace setup etc.)?
>
OK, I have installed fio (1.59-1) and libaio1 (0.3.109-2ubuntu1) here.
You say in [1]:
"In the following test:
- base: v3.19-rc2-2041231
- loop over file in ext4 file system on SSD disk
- bs: 4k, libaio, io depth: 64, O_DIRECT, num of jobs: 1
- throughput: IOPS"
I tried to reproduce that, inspired by [2]...
root# fio --name=randread --rw=randread --bs=4k --ioengine=libaio \
      --iodepth=64 --direct=1 --numjobs=1 --size=1G
...you gave no size; fio requires that parameter to run, so I used 1 GiB here.
This results in 165 vs. 515 IOPS here.
# grep "iops=" test-*
test-1-next20141231.txt: read : io=1024.0MB, bw=678578 B/s, iops=165
, runt=1582340msec
test-2-next20141231-block-mq-v3.txt: read : io=1024.0MB,
bw=2063.4KB/s, iops=515 , runt=508182msec
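As a side note, your description in [1] says "loop over file in ext4 file system", so the device under test is a loop device backed by a file. A minimal sketch of such a setup, assuming that layout (the path and backing-file size below are my illustrative guesses, not taken from the thread):

root# truncate -s 4G /mnt/ext4/loop.img       # sparse backing file on an ext4 mount (size is an assumption)
root# losetup -f --show /mnt/ext4/loop.img    # attach it; prints the allocated device, e.g. /dev/loop0
root# fio --name=randread --rw=randread --bs=4k --ioengine=libaio \
      --iodepth=64 --direct=1 --numjobs=1 --size=1G --filename=/dev/loop0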
Full fio-logs and some other useful configs/logs/patches attached.
- Sedat -
[1] http://marc.info/?l=linux-kernel&m=142003220301459&w=2
[2] http://wiki.mikejung.biz/Benchmarking#Fio_Random_Write_Test_using_libaio_and_direct_flags
Hi Sedat,
On Thu, Jan 1, 2015 at 6:32 AM, Sedat Dilek <[email protected]> wrote:
> Forgot to CC LKML and linux-fsdevel.
>
> - Sedat -
>
> OK, I have installed fio (1.59-1) and libaio1 (0.3.109-2ubuntu1) here.
>
> You say in [1]:
>
> "In the following test:
> - base: v3.19-rc2-2041231
> - loop over file in ext4 file system on SSD disk
> - bs: 4k, libaio, io depth: 64, O_DIRECT, num of jobs: 1
> - throughput: IOPS"
>
> I tried to reproduce that, inspired by [2]...
>
> root# fio --name=randread --rw=randread --bs=4k --ioengine=libaio \
>       --iodepth=64 --direct=1 --numjobs=1 --size=1G
>
> ...you gave no size; fio requires that parameter to run, so I used 1 GiB here.
>
> This results in 165 vs. 515 IOPS here.
Thanks for your test.
Also, if your disk is quick enough, you will observe an improvement in the read test too.
> # grep "iops=" test-*
> test-1-next20141231.txt: read : io=1024.0MB, bw=678578 B/s, iops=165
> , runt=1582340msec
> test-2-next20141231-block-mq-v3.txt: read : io=1024.0MB,
> bw=2063.4KB/s, iops=515 , runt=508182msec
>
> Full fio-logs and some other useful configs/logs/patches attached.
>
> - Sedat -
>
> [1] http://marc.info/?l=linux-kernel&m=142003220301459&w=2
> [2] http://wiki.mikejung.biz/Benchmarking#Fio_Random_Write_Test_using_libaio_and_direct_flags
Thanks,
Ming Lei
On Thu, Jan 1, 2015 at 1:01 AM, Ming Lei <[email protected]> wrote:
> Hi Sedat,
>
> On Thu, Jan 1, 2015 at 6:32 AM, Sedat Dilek <[email protected]> wrote:
>> Forgot to CC LKML and linux-fsdevel.
>>
>> - Sedat -
>
>>
>> OK, I have installed fio (1.59-1) and libaio1 (0.3.109-2ubuntu1) here.
>>
>> You say in [1]:
>>
>> "In the following test:
>> - base: v3.19-rc2-2041231
>> - loop over file in ext4 file system on SSD disk
>> - bs: 4k, libaio, io depth: 64, O_DIRECT, num of jobs: 1
>> - throughput: IOPS"
>>
>> I tried to reproduce that, inspired by [2]...
>>
>> root# fio --name=randread --rw=randread --bs=4k --ioengine=libaio \
>>       --iodepth=64 --direct=1 --numjobs=1 --size=1G
>>
>> ...you gave no size; fio requires that parameter to run, so I used 1 GiB here.
>>
>> This results in 165 vs. 515 IOPS here.
>
> Thanks for your test.
>
> Also, if your disk is quick enough, you will observe an improvement in the read test too.
>
This is not an SSD here.
# dmesg | egrep -i 'hitachi|ata1|sda'
[ 0.457892] ata1: SATA max UDMA/133 abar m2048@0xf0708000 port 0xf0708100 irq 25
[ 0.777445] ata1: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
[ 0.778759] ata1.00: ATA-8: Hitachi HTS545050A7E380, GG2OA6C0, max UDMA/133
[ 0.778778] ata1.00: 976773168 sectors, multi 16: LBA48 NCQ (depth 31/32), AA
[ 0.780154] ata1.00: configured for UDMA/133
[ 0.780970] scsi 0:0:0:0: Direct-Access ATA Hitachi HTS54505 A6C0 PQ: 0 ANSI: 5
[ 0.782050] sd 0:0:0:0: [sda] 976773168 512-byte logical blocks: (500 GB/465 GiB)
[ 0.782058] sd 0:0:0:0: [sda] 4096-byte physical blocks
[ 0.782255] sd 0:0:0:0: [sda] Write Protect is off
[ 0.782262] sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00
[ 0.782339] sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[ 0.800644] sda: sda1 sda2 sda3
[ 0.802029] sd 0:0:0:0: [sda] Attached SCSI disk
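For what it's worth, a quicker check than grepping dmesg is the sysfs rotational flag:

# cat /sys/block/sda/queue/rotational    # prints 1 for a rotational disk (HDD), 0 for an SSD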
How did you test with fio (your fio lines)?
- Sedat -
>> # grep "iops=" test-*
>> test-1-next20141231.txt: read : io=1024.0MB, bw=678578 B/s, iops=165
>> , runt=1582340msec
>> test-2-next20141231-block-mq-v3.txt: read : io=1024.0MB,
>> bw=2063.4KB/s, iops=515 , runt=508182msec
>>
>> Full fio-logs and some other useful configs/logs/patches attached.
>>
>> - Sedat -
>>
>> [1] http://marc.info/?l=linux-kernel&m=142003220301459&w=2
>> [2] http://wiki.mikejung.biz/Benchmarking#Fio_Random_Write_Test_using_libaio_and_direct_flags
>
>
>
> Thanks,
> Ming Lei
On Thu, Jan 1, 2015 at 8:18 AM, Sedat Dilek <[email protected]> wrote:
> On Thu, Jan 1, 2015 at 1:01 AM, Ming Lei <[email protected]> wrote:
>> Hi Sedat,
>>
>> On Thu, Jan 1, 2015 at 6:32 AM, Sedat Dilek <[email protected]> wrote:
>>> Forgot to CC LKML and linux-fsdevel.
>>>
>>> - Sedat -
>>
>>>
>>> OK, I have installed fio (1.59-1) and libaio1 (0.3.109-2ubuntu1) here.
>>>
>>> You say in [1]:
>>>
>>> "In the following test:
>>> - base: v3.19-rc2-2041231
>>> - loop over file in ext4 file system on SSD disk
>>> - bs: 4k, libaio, io depth: 64, O_DIRECT, num of jobs: 1
>>> - throughput: IOPS"
>>>
>>> I tried to reproduce that, inspired by [2]...
>>>
>>> root# fio --name=randread --rw=randread --bs=4k --ioengine=libaio \
>>>       --iodepth=64 --direct=1 --numjobs=1 --size=1G
>>>
>>> ...you gave no size; fio requires that parameter to run, so I used 1 GiB here.
>>>
>>> This results in 165 vs. 515 IOPS here.
>>
>> Thanks for your test.
>>
>> Also, if your disk is quick enough, you will observe an improvement in the read test too.
>>
>
> This is not an SSD here.
>
> # dmesg | egrep -i 'hitachi|ata1|sda'
> [ 0.457892] ata1: SATA max UDMA/133 abar m2048@0xf0708000 port 0xf0708100 irq 25
> [ 0.777445] ata1: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
> [ 0.778759] ata1.00: ATA-8: Hitachi HTS545050A7E380, GG2OA6C0, max UDMA/133
> [ 0.778778] ata1.00: 976773168 sectors, multi 16: LBA48 NCQ (depth 31/32), AA
> [ 0.780154] ata1.00: configured for UDMA/133
> [ 0.780970] scsi 0:0:0:0: Direct-Access ATA Hitachi HTS54505 A6C0 PQ: 0 ANSI: 5
> [ 0.782050] sd 0:0:0:0: [sda] 976773168 512-byte logical blocks: (500 GB/465 GiB)
> [ 0.782058] sd 0:0:0:0: [sda] 4096-byte physical blocks
> [ 0.782255] sd 0:0:0:0: [sda] Write Protect is off
> [ 0.782262] sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00
> [ 0.782339] sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
> [ 0.800644] sda: sda1 sda2 sda3
> [ 0.802029] sd 0:0:0:0: [sda] Attached SCSI disk
>
> How did you test with fio (your fio lines)?
Your fio command line is basically the same as my fio config, and you
can attach an image file to a loop device via: losetup -f file_name. Your
randread result looks good; I can observe ~80 IOPS vs. ~200 IOPS in the
randread test on my slow HDD too.
#################fio config##########################
[global]
direct=1
size=128G
bsrange=4k-4k
timeout=30
numjobs=1
ioengine=libaio
iodepth=64
filename=/dev/loop0
group_reporting=1
[f]
rw=${RW}
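For reference, a hypothetical invocation of the config above (the job-file name and backing-file path are illustrative, not from the thread):

root# losetup -f /path/to/backing.img    # attach the image to the first free /dev/loopN, e.g. /dev/loop0
root# RW=randread fio loop-mq.fio        # ${RW} in the job file is expanded from the environment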
Thanks,
Ming Lei