The current linux-block tree, as well as 4.18 and 4.17, can reliably be
crashed within a few minutes by running the following bash snippet:

mkfs.ext4 -v /dev/sda3 && mount /dev/sda3 /mnt/test/ -t ext4;
while true; do
mkdir /sys/fs/cgroup/unified/test/;
echo $$ >/sys/fs/cgroup/unified/test/cgroup.procs;
dd if=/dev/zero of=/mnt/test/test-$(( RANDOM * 10 / 32768 )) bs=1M count=1024 &
echo $$ >/sys/fs/cgroup/unified/cgroup.procs;
sleep 1;
kill -KILL $!; wait $!;
rmdir /sys/fs/cgroup/unified/test;
done

# cat /sys/block/sda/queue/scheduler
noop [cfq]
# cat /sys/block/sda/queue/rotational
1
# cat /sys/fs/cgroup/unified/cgroup.subtree_control
cpu io memory pids

The backtraces vary, but they are often NULL pointer dereferences caused
by various cfqq fields being NULL, or the BUG_ON(cfqq->ref <= 0) in
cfq_put_queue() triggering because the cfqq reference count is already
zero.

Bisection points at
commit 4c6994806f70 ("blk-throttle: fix race between blkcg_bio_issue_check() and cgroup_rmdir()").
The prime suspect was the .pd_offline_fn() method being called multiple
times, but analysis of the commit above suggested that this is not
possible, and runtime trials confirmed it.

However, CFQ's cfq_pd_offline() implementation of this method was leaving
the async queue pointers intact in cfqg after unpinning them.
After making sure that they are cleared to NULL in this function, I can
no longer reproduce the crash.

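To illustrate the failure mode: once the last reference is dropped, any
later path that still sees the stale pointer and only checks it for NULL
will put or dereference an already-freed queue. A minimal userspace
analogy of why clearing the slot helps (illustrative sketch only, not
kernel code; all names are made up):

#include <stdio.h>
#include <stdlib.h>

/* Toy refcounted object standing in for a cfq_queue. */
struct toy_queue {
	int ref;
};

static void toy_put_queue(struct toy_queue *q)
{
	if (q->ref <= 0) {	/* analogue of BUG_ON(cfqq->ref <= 0) */
		fprintf(stderr, "double put!\n");
		abort();
	}
	if (--q->ref == 0)
		free(q);
}

int main(void)
{
	/* The slot plays the role of cfqg->async_cfqq[...]. */
	struct toy_queue *slot = malloc(sizeof(*slot));

	slot->ref = 1;

	/* "offline" path: drop the last reference held through the slot. */
	toy_put_queue(slot);

	/*
	 * Without the next line the slot keeps pointing at freed memory,
	 * and a later "if (slot) toy_put_queue(slot);" would be a
	 * use-after-free / double put -- the pattern behind the crashes.
	 */
	slot = NULL;

	if (slot)		/* now safely skipped */
		toy_put_queue(slot);

	return 0;
}
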
Signed-off-by: Maciej S. Szmigiero <[email protected]>
Fixes: 4c6994806f70 ("blk-throttle: fix race between blkcg_bio_issue_check() and cgroup_rmdir()")
Cc: [email protected]
---
block/cfq-iosched.c | 12 +++++++++---
1 file changed, 9 insertions(+), 3 deletions(-)
diff --git a/block/cfq-iosched.c b/block/cfq-iosched.c
index 2eb87444b157..ed41aa978c4a 100644
--- a/block/cfq-iosched.c
+++ b/block/cfq-iosched.c
@@ -1644,14 +1644,20 @@ static void cfq_pd_offline(struct blkg_policy_data *pd)
 	int i;
 
 	for (i = 0; i < IOPRIO_BE_NR; i++) {
-		if (cfqg->async_cfqq[0][i])
+		if (cfqg->async_cfqq[0][i]) {
 			cfq_put_queue(cfqg->async_cfqq[0][i]);
-		if (cfqg->async_cfqq[1][i])
+			cfqg->async_cfqq[0][i] = NULL;
+		}
+		if (cfqg->async_cfqq[1][i]) {
 			cfq_put_queue(cfqg->async_cfqq[1][i]);
+			cfqg->async_cfqq[1][i] = NULL;
+		}
 	}
 
-	if (cfqg->async_idle_cfqq)
+	if (cfqg->async_idle_cfqq) {
 		cfq_put_queue(cfqg->async_idle_cfqq);
+		cfqg->async_idle_cfqq = NULL;
+	}
 
 	/*
 	 * @blkg is going offline and will be ignored by

> On 17 Aug 2018, at 19:28, Maciej S. Szmigiero <[email protected]> wrote:
>
> [...]
>
> However, CFQ's cfq_pd_offline() implementation of this method was leaving
> the async queue pointers intact in cfqg after unpinning them.
> After making sure that they are cleared to NULL in this function, I can
> no longer reproduce the crash.
>
By chance, did you check whether BFQ is OK in this respect?
Thanks,
Paolo
On 17.08.2018 19:30, Paolo Valente wrote:
> By chance, did you check whether BFQ is OK in this respect?
I wasn't able to crash BFQ with the above test; in fact, I had been
running my machines on BFQ until I was able to find a fix for this in
CFQ.
Also, BFQ has somewhat similar code in bfq_put_async_queues(), called
from bfq_pd_offline(), which already sets the passed pointer to NULL.

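Roughly, that double-pointer pattern can be sketched in plain C as
follows (a userspace sketch of the idea only, not the actual BFQ code;
all names are made up):

#include <stdlib.h>

struct toy_queue {
	int ref;
};

/*
 * The helper takes the address of the owner's slot, so it can both drop
 * the reference and clear the slot, leaving no stale pointer behind.
 */
static void toy_put_queue_and_clear(struct toy_queue **slot)
{
	struct toy_queue *q = *slot;

	if (q && --q->ref == 0)
		free(q);

	*slot = NULL;
}

int main(void)
{
	struct toy_queue *async_slot = malloc(sizeof(*async_slot));

	async_slot->ref = 1;

	toy_put_queue_and_clear(&async_slot);
	/* A second call is harmless now: the slot is already NULL. */
	toy_put_queue_and_clear(&async_slot);

	return 0;
}
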
Regards,
Maciej