Hi,
I recently noticed that between 6441998e2e and 9eaa88c703, I/O became
much slower on my machine using ext4 on dm-crypt on NVMe with bfq
scheduler. Checking iostat during heavy usage (find / -xdev and fstrim
-v /), maximum IOPS had fallen from ~10000 to ~100. Reverting cb2ac2912a
("block: reduce kblockd_mod_delayed_work_on() CPU consumption") resolves
the issue.
Thanks,
Alex.
On 12/18/21 11:57 AM, Alex Xu (Hello71) wrote:
> Hi,
>
> I recently noticed that between 6441998e2e and 9eaa88c703, I/O became
> much slower on my machine using ext4 on dm-crypt on NVMe with bfq
> scheduler. Checking iostat during heavy usage (find / -xdev and fstrim
> -v /), maximum IOPS had fallen from ~10000 to ~100. Reverting cb2ac2912a
> ("block: reduce kblockd_mod_delayed_work_on() CPU consumption") resolves
> the issue.
Hmm interesting. I'll try and see if I can reproduce this and come up
with a fix.
--
Jens Axboe
On 12/18/21 12:02 PM, Jens Axboe wrote:
> On 12/18/21 11:57 AM, Alex Xu (Hello71) wrote:
>> Hi,
>>
>> I recently noticed that between 6441998e2e and 9eaa88c703, I/O became
>> much slower on my machine using ext4 on dm-crypt on NVMe with bfq
>> scheduler. Checking iostat during heavy usage (find / -xdev and fstrim
>> -v /), maximum IOPS had fallen from ~10000 to ~100. Reverting cb2ac2912a
>> ("block: reduce kblockd_mod_delayed_work_on() CPU consumption") resolves
>> the issue.
>
> Hmm interesting. I'll try and see if I can reproduce this and come up
> with a fix.
I can reproduce this. Alex, can you see if this one helps? I'm trying to
hit a happy medium here that avoids hammering on that timer, but it
really depends on the mix of calls: a delay with work already pending,
or no delay with nothing pending.
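To illustrate the failure mode as I understand it (just a sketch, with a
made-up 100 msec delay; the real pattern depends on the caller):

/* first call arms the timer and sets WORK_STRUCT_PENDING */
kblockd_mod_delayed_work_on(cpu, dwork, msecs_to_jiffies(100));

/* before the timer fires, someone asks for an immediate run */
kblockd_mod_delayed_work_on(cpu, dwork, 0);

With cb2ac2912a, the second call goes through queue_work_on(), which
sees the pending bit already set and returns without doing anything, so
the "run now" request is lost and the queue only runs when the old timer
expires. Doing cancel_delayed_work() first kills the armed timer and
clears the pending bit, so the following queue_work_on() takes effect.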
Dexuan, can you test this for your test case too? I'm going to queue
up a revert for -rc6 just in case.
diff --git a/block/blk-core.c b/block/blk-core.c
index c1833f95cb97..a3fbf4360ee9 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -1481,12 +1481,15 @@ int kblockd_schedule_work(struct work_struct *work)
}
EXPORT_SYMBOL(kblockd_schedule_work);
-int kblockd_mod_delayed_work_on(int cpu, struct delayed_work *dwork,
- unsigned long delay)
-{
- if (!delay)
- return queue_work_on(cpu, kblockd_workqueue, &dwork->work);
- return mod_delayed_work_on(cpu, kblockd_workqueue, dwork, delay);
+void kblockd_mod_delayed_work_on(int cpu, struct delayed_work *dwork,
+ unsigned long delay)
+{
+ if (!delay) {
+ cancel_delayed_work(dwork);
+ queue_work_on(cpu, kblockd_workqueue, &dwork->work);
+ } else {
+ mod_delayed_work_on(cpu, kblockd_workqueue, dwork, delay);
+ }
}
EXPORT_SYMBOL(kblockd_mod_delayed_work_on);
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index bd4370baccca..1c7ba45e8463 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -1159,7 +1159,7 @@ static inline unsigned int block_size(struct block_device *bdev)
}
int kblockd_schedule_work(struct work_struct *work);
-int kblockd_mod_delayed_work_on(int cpu, struct delayed_work *dwork, unsigned long delay);
+void kblockd_mod_delayed_work_on(int cpu, struct delayed_work *dwork, unsigned long delay);
#define MODULE_ALIAS_BLOCKDEV(major,minor) \
MODULE_ALIAS("block-major-" __stringify(major) "-" __stringify(minor))
--
Jens Axboe
On 12/19/21 7:58 AM, Jens Axboe wrote:
> On 12/18/21 12:02 PM, Jens Axboe wrote:
>> On 12/18/21 11:57 AM, Alex Xu (Hello71) wrote:
>>> Hi,
>>>
>>> I recently noticed that between 6441998e2e and 9eaa88c703, I/O became
>>> much slower on my machine using ext4 on dm-crypt on NVMe with bfq
>>> scheduler. Checking iostat during heavy usage (find / -xdev and fstrim
>>> -v /), maximum IOPS had fallen from ~10000 to ~100. Reverting cb2ac2912a
>>> ("block: reduce kblockd_mod_delayed_work_on() CPU consumption") resolves
>>> the issue.
>>
>> Hmm interesting. I'll try and see if I can reproduce this and come up
>> with a fix.
>
> I can reproduce this. Alex, can you see if this one helps? I'm trying to
> hit a happy medium here that avoids hammering on that timer, but it
> really depends on the mix of calls: a delay with work already pending,
> or no delay with nothing pending.
>
> Dexuan, can you test this for your test case too? I'm going to queue
> up a revert for -rc6 just in case.
This one should be better...
diff --git a/block/blk-core.c b/block/blk-core.c
index c1833f95cb97..5e9e3c2b7a94 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -1481,12 +1481,17 @@ int kblockd_schedule_work(struct work_struct *work)
}
EXPORT_SYMBOL(kblockd_schedule_work);
-int kblockd_mod_delayed_work_on(int cpu, struct delayed_work *dwork,
- unsigned long delay)
+void kblockd_mod_delayed_work_on(int cpu, struct delayed_work *dwork,
+ unsigned long msecs)
{
- if (!delay)
- return queue_work_on(cpu, kblockd_workqueue, &dwork->work);
- return mod_delayed_work_on(cpu, kblockd_workqueue, dwork, delay);
+ if (!msecs) {
+ cancel_delayed_work(dwork);
+ queue_work_on(cpu, kblockd_workqueue, &dwork->work);
+ } else {
+ unsigned long delay = msecs_to_jiffies(msecs);
+
+ mod_delayed_work_on(cpu, kblockd_workqueue, dwork, delay);
+ }
}
EXPORT_SYMBOL(kblockd_mod_delayed_work_on);
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 8874a63ae952..95288a98dae1 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1155,8 +1155,7 @@ EXPORT_SYMBOL(blk_mq_kick_requeue_list);
void blk_mq_delay_kick_requeue_list(struct request_queue *q,
unsigned long msecs)
{
- kblockd_mod_delayed_work_on(WORK_CPU_UNBOUND, &q->requeue_work,
- msecs_to_jiffies(msecs));
+ kblockd_mod_delayed_work_on(WORK_CPU_UNBOUND, &q->requeue_work, msecs);
}
EXPORT_SYMBOL(blk_mq_delay_kick_requeue_list);
@@ -1868,7 +1867,7 @@ static void __blk_mq_delay_run_hw_queue(struct blk_mq_hw_ctx *hctx, bool async,
}
kblockd_mod_delayed_work_on(blk_mq_hctx_next_cpu(hctx), &hctx->run_work,
- msecs_to_jiffies(msecs));
+ msecs);
}
/**
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index bd4370baccca..40748eedddbb 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -1159,7 +1159,7 @@ static inline unsigned int block_size(struct block_device *bdev)
}
int kblockd_schedule_work(struct work_struct *work);
-int kblockd_mod_delayed_work_on(int cpu, struct delayed_work *dwork, unsigned long delay);
+void kblockd_mod_delayed_work_on(int cpu, struct delayed_work *dwork, unsigned long msecs);
#define MODULE_ALIAS_BLOCKDEV(major,minor) \
MODULE_ALIAS("block-major-" __stringify(major) "-" __stringify(minor))
--
Jens Axboe
> From: Jens Axboe <[email protected]>
> Sent: Sunday, December 19, 2021 7:28 AM
> > ...
> > Dexuan, can you test this for your test case too? I'm going to queue
> > up a revert for -rc6 just in case.
>
> This one should be better...
> ...
> Jens Axboe
Hi Jens, sorry -- unfortunately I lost the test environment... :-(
I pinged the user to set up the test environment again, but he's
on vacation till the beginning of January.
On Sun, Dec 19, 2021 at 08:28:18AM -0700, Jens Axboe wrote:
> >
> > Dexuan, can you test this for your test case too? I'm going to queue
> > up a revert for -rc6 just in case.
>
> This one should be better...
We might just want something like a revert of
9f993737906b30d7b2454a38637d1f70ffd60f2f.
Or just always use a normal work queue for run_work, and then use a
timer to schedule it for the relatively rare delayed case.
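A minimal sketch of that second idea (hypothetical names, untested;
assumes run_work becomes a plain work_struct and a run_timer field is
added to the hctx):

static void blk_mq_run_timer_fn(struct timer_list *t)
{
	struct blk_mq_hw_ctx *hctx = from_timer(hctx, t, run_timer);

	/* the timer only ever queues the plain work */
	queue_work_on(blk_mq_hctx_next_cpu(hctx), kblockd_workqueue,
		      &hctx->run_work);
}

static void blk_mq_queue_run_work(struct blk_mq_hw_ctx *hctx,
				  unsigned long msecs)
{
	if (!msecs)
		/* common case: no timer manipulation at all */
		queue_work_on(blk_mq_hctx_next_cpu(hctx),
			      kblockd_workqueue, &hctx->run_work);
	else
		/* rare delayed case: (re)arm the timer */
		mod_timer(&hctx->run_timer,
			  jiffies + msecs_to_jiffies(msecs));
}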
Hi Jens, Dexuan,
On 12/20/2021 1:15 PM, Dexuan Cui wrote:
>> From: Jens Axboe <[email protected]>
>> Sent: Sunday, December 19, 2021 7:28 AM
>>> ...
>>> Dexuan, can you test this for your test case too? I'm going to queue
>>> up a revert for -rc6 just in case.
>>
>> This one should be better...
>> ...
>> Jens Axboe
>
> Hi Jens, sorry -- unfortunately I lost the test environment... :-(
> I pinged the user to set up the test environment again, but he's
> on vacation till the beginning of January.
We hit this issue in our testing environment as well and will help to
verify your fix. Thanks.
Regards
Yin, Fengwei
Hi Jens, Dexuan,
On Tue, Dec 28, 2021 at 10:30:43AM +0800, Yin Fengwei wrote:
> Hi Jens, Dexuan,
>
> On 12/20/2021 1:15 PM, Dexuan Cui wrote:
> >> From: Jens Axboe <[email protected]>
> >> Sent: Sunday, December 19, 2021 7:28 AM
> >>> ...
> >>> Dexuan, can you test this for your test case too? I'm going to queue
> >>> up a revert for -rc6 just in case.
> >>
> >> This one should be better...
> >> ...
> >> Jens Axboe
> >
> > Hi Jens, sorry -- unfortunately I lost the test environment... :-(
> > I pinged the user to set up the test environment again, but he's
> > on vacation till the beginning of January.
>
> We hit this issue in our testing environment as well and will help to
> verify your fix. Thanks.
As we reported in [1], we found that cb2ac2912a causes big regressions
in fxmark tests:
"[block] cb2ac2912a: fxmark.ssd_f2fs_dbench_client_4_bufferedio.works/sec -66.0% regression"
By applying the patch supplied by Jens Axboe earlier in this thread
directly on top of cb2ac2912a, we confirmed that the regression is fixed:
=========================================================================================
compiler/cpufreq_governor/directio/disk/fstype/kconfig/media/rootfs/tbox_group/test/testcase/ucode:
gcc-9/performance/bufferedio/1SSD/f2fs/x86_64-rhel-8.3/ssd/debian-10.4-x86_64-20200603.cgz/lkp-snr-a1/dbench_client/fxmark/0x9c02000e
commit:
edaa26334c ("iocost: Fix divide-by-zero on donation from low hweight cgroup")
cb2ac2912a ("block: reduce kblockd_mod_delayed_work_on() CPU consumption")
80f0eab183 (fix for very low IOPS due to "block: reduce kblockd_mod_delayed_work_on() CPU consumption") <-- patch by Jens Axboe
edaa26334c117a58 cb2ac2912a9ca7d3d26291c5119 80f0eab18356919a1c0568c6348
---------------- --------------------------- ---------------------------
%stddev %change %stddev %change %stddev
\ | \ | \
573.42 -66.0% 194.68 -0.6% 570.11 fxmark.ssd_f2fs_dbench_client_4_bufferedio.works/sec
There are also other subtests whose regressions are likewise fixed by
the new patch, and others where we didn't observe obvious performance
changes.
edaa26334c117a58 cb2ac2912a9ca7d3d26291c5119 80f0eab18356919a1c0568c6348
---------------- --------------------------- ---------------------------
%stddev %change %stddev %change %stddev
\ | \ | \
574.25 -18.5% 468.11 -0.4% 571.83 fxmark.ssd_f2fs_dbench_client_18_bufferedio.works/sec
495.79 +0.2% 496.65 -0.6% 493.04 fxmark.ssd_f2fs_dbench_client_1_bufferedio.works/sec
536.81 -72.8% 145.99 ? 19% -0.6% 533.78 fxmark.ssd_f2fs_dbench_client_2_bufferedio.works/sec
544.81 +1.8% 554.73 -0.1% 544.26 fxmark.ssd_f2fs_dbench_client_36_bufferedio.works/sec
513.26 ? 2% +2.5% 526.05 ? 3% -1.8% 504.04 fxmark.ssd_f2fs_dbench_client_54_bufferedio.works/sec
513.10 ? 4% +3.9% 533.07 ? 3% -0.7% 509.73 ? 5% fxmark.ssd_f2fs_dbench_client_72_bufferedio.works/sec
Also attached is the detailed comparison table
edaa26334c-cb2ac2912a-80f0eab183, which contains further details such as
perf data.
[1] https://lore.kernel.org/lkml/20211219141852.GH14057@xsang-OptiPlex-9020/
>
>
> Regards
> Yin, Fengwei
> From: Oliver Sang <[email protected]>
> Sent: Tuesday, December 28, 2021 5:49 AM
> ...
> Hi Jens, Dexuan,
>
> On Tue, Dec 28, 2021 at 10:30:43AM +0800, Yin Fengwei wrote:
> > Hi Jens, Dexuan,
> >
> > On 12/20/2021 1:15 PM, Dexuan Cui wrote:
> > >> From: Jens Axboe <[email protected]>
> > >> Sent: Sunday, December 19, 2021 7:28 AM
> > >>> ...
> > >>> Dexuan, can you test this for your test case too? I'm going to queue
> > >>> up a revert for -rc6 just in case.
> > >>
> > >> This one should be better...
> > >> ...
> > >> Jens Axboe
> > >
> > > Hi Jens, sorry -- unfortunately I lost the test environment... :-(
> > > I pinged the user to set up the test environment again, but he's
> > > on vacation till the beginning of January.
> >
> > We hit this issue in our testing environment as well and will help to
> > verify your fix. Thanks.
>
> As we reported in [1], we found that cb2ac2912a causes big regressions
> in fxmark tests:
> "[block] cb2ac2912a:
> fxmark.ssd_f2fs_dbench_client_4_bufferedio.works/sec -66.0% regression"
>
> By applying the patch supplied by Jens Axboe earlier in this thread
> directly on top of cb2ac2912a, we confirmed that the regression is fixed:
> ...
> > Regards
> > Yin, Fengwei
Hi Jens, Fengwei,
I finally regained my test environment and tested Jens's Dec-19 patch
(see https://www.spinics.net/lists/linux-block/msg78065.html) and it
also worked fine for me, i.e. no excessive CPU utilization even with the
default dm_mod.dm_mq_queue_depth=2048.
BTW, Jens's patch is not in mainline yet. I tested it using the linux-block tree.
Thanks,
-- Dexuan
> From: Dexuan Cui
> Sent: Friday, January 21, 2022 10:15 PM
> To: Oliver Sang <[email protected]>; Jens Axboe <[email protected]>; Yin
> Fengwei <[email protected]>; [email protected]
> Cc: Alex Xu (Hello71) <[email protected]>; [email protected];
> [email protected]; Long Li <[email protected]>; Michael Kelley (LINUX)
> <[email protected]>; [email protected]; Li, Philip
> <[email protected]>; Rakesh Ginjupalli <[email protected]>
> Subject: RE: RE: very low IOPS due to "block: reduce
> kblockd_mod_delayed_work_on() CPU consumption"
>
> > From: Oliver Sang <[email protected]>
> > Sent: Tuesday, December 28, 2021 5:49 AM
> > ...
> > Hi Jens, Dexuan,
> >
> > On Tue, Dec 28, 2021 at 10:30:43AM +0800, Yin Fengwei wrote:
> > > Hi Jens, Dexuan,
> > >
> > > On 12/20/2021 1:15 PM, Dexuan Cui wrote:
> > > >> From: Jens Axboe <[email protected]>
> > > >> Sent: Sunday, December 19, 2021 7:28 AM
> > > >>> ...
> > > >>> Dexuan, can you test this for your test case too? I'm going to queue
> > > >>> up a revert for -rc6 just in case.
> > > >>
> > > >> This one should be better...
> > > >> ...
> > > >> Jens Axboe
> > > >
> > > > Hi Jens, sorry -- unfortunately I lost the test environment... :-(
> > > > I pinged the user to set up the test environment again, but he's
> > > > on vacation till the beginning of January.
> > >
> > > We hit this issue in our testing environment as well and will help to
> > > verify your fix. Thanks.
> >
> > As we reported in [1], we found that cb2ac2912a causes big regressions
> > in fxmark tests:
> > "[block] cb2ac2912a:
> > fxmark.ssd_f2fs_dbench_client_4_bufferedio.works/sec -66.0% regression"
> >
> > By applying the patch supplied by Jens Axboe earlier in this thread
> > directly on top of cb2ac2912a, we confirmed that the regression is fixed:
> > ...
> > > Regards
> > > Yin, Fengwei
>
> Hi Jens, Fengwei,
> I finally regained my test environment and tested Jens's Dec-19 patch
> (see https://www.spinics.net/lists/linux-block/msg78065.html) and it
> also worked fine for me, i.e. no excessive CPU utilization even with the
> default dm_mod.dm_mq_queue_depth=2048.
>
> BTW, Jens's patch is not in mainline yet. I tested it using the linux-block tree.
>
> Thanks,
> -- Dexuan
Hi Jens, any plan to commit your 12/19/2021 patch?
It works great according to Fengwei's tests and mine.