2024-03-08 09:44:16

by Yu Kuai

[permalink] [raw]
Subject: [PATCH] raid1: fix use-after-free for original bio in raid1_write_request()

From: Yu Kuai <[email protected]>

r1_bio->bios[] is used to record the new bios that will be issued to
the underlying disks. However, in raid1_write_request(), r1_bio->bios[]
is temporarily set to the original bio. Meanwhile, if a blocked rdev
is found, free_r1bio() is called, which drops a reference on every bio
recorded in r1_bio->bios[]:

raid1_write_request()
 r1_bio = alloc_r1bio(mddev, bio); -> r1_bio->bios[] is NULL
 for (i = 0; i < disks; i++) -> for each rdev in conf
  // first rdev is normal
  r1_bio->bios[0] = bio; -> set to original bio
  // second rdev is blocked
  if (test_bit(Blocked, &rdev->flags))
   break

 if (blocked_rdev)
  free_r1bio()
   put_all_bios()
    bio_put(r1_bio->bios[0]) -> original bio is freed

Test scripts:

mdadm -CR /dev/md0 -l1 -n4 /dev/sd[abcd] --assume-clean
fio -filename=/dev/md0 -ioengine=libaio -rw=write -bs=4k -numjobs=1 \
-iodepth=128 -name=test -direct=1
echo blocked > /sys/block/md0/md/rd2/state

Test result:

BUG bio-264 (Not tainted): Object already free
-----------------------------------------------------------------------------

Allocated in mempool_alloc_slab+0x24/0x50 age=1 cpu=1 pid=869
kmem_cache_alloc+0x324/0x480
mempool_alloc_slab+0x24/0x50
mempool_alloc+0x6e/0x220
bio_alloc_bioset+0x1af/0x4d0
blkdev_direct_IO+0x164/0x8a0
blkdev_write_iter+0x309/0x440
aio_write+0x139/0x2f0
io_submit_one+0x5ca/0xb70
__do_sys_io_submit+0x86/0x270
__x64_sys_io_submit+0x22/0x30
do_syscall_64+0xb1/0x210
entry_SYSCALL_64_after_hwframe+0x6c/0x74
Freed in mempool_free_slab+0x1f/0x30 age=1 cpu=1 pid=869
kmem_cache_free+0x28c/0x550
mempool_free_slab+0x1f/0x30
mempool_free+0x40/0x100
bio_free+0x59/0x80
bio_put+0xf0/0x220
free_r1bio+0x74/0xb0
raid1_make_request+0xadf/0x1150
md_handle_request+0xc7/0x3b0
md_submit_bio+0x76/0x130
__submit_bio+0xd8/0x1d0
submit_bio_noacct_nocheck+0x1eb/0x5c0
submit_bio_noacct+0x169/0xd40
submit_bio+0xee/0x1d0
blkdev_direct_IO+0x322/0x8a0
blkdev_write_iter+0x309/0x440
aio_write+0x139/0x2f0

Since the bios for the underlying disks are not allocated yet at this
point, fix this problem by calling mempool_free() directly to free
just the r1_bio itself.

Fixes: 992db13a4aee ("md/raid1: free the r1bio before waiting for blocked rdev")
Signed-off-by: Yu Kuai <[email protected]>
---
drivers/md/raid1.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
index afca975ec7f3..fde8434c33df 100644
--- a/drivers/md/raid1.c
+++ b/drivers/md/raid1.c
@@ -1565,7 +1565,7 @@ static void raid1_write_request(struct mddev *mddev, struct bio *bio,
for (j = 0; j < i; j++)
if (r1_bio->bios[j])
rdev_dec_pending(conf->mirrors[j].rdev, mddev);
- free_r1bio(r1_bio);
+ mempool_free(r1_bio, &conf->r1bio_pool);
allow_barrier(conf, bio->bi_iter.bi_sector);

if (bio->bi_opf & REQ_NOWAIT) {
--
2.39.2



2024-03-08 14:29:55

by Coly Li

[permalink] [raw]
Subject: Re: [PATCH] raid1: fix use-after-free for original bio in raid1_write_request()

On Fri, Mar 08, 2024 at 05:37:26PM +0800, Yu Kuai wrote:
> From: Yu Kuai <[email protected]>
>
> [...]
>
> Since the bios for the underlying disks are not allocated yet at this
> point, fix this problem by calling mempool_free() directly to free
> just the r1_bio itself.
>

Yes, the panic doesn't show up anymore with this patch.

Thanks for the fixup.


> Fixes: 992db13a4aee ("md/raid1: free the r1bio before waiting for blocked rdev")
> Signed-off-by: Yu Kuai <[email protected]>

Reported-and-tested-by: Coly Li <[email protected]>


Coly Li


2024-03-08 18:16:55

by Song Liu

[permalink] [raw]
Subject: Re: [PATCH] raid1: fix use-after-free for original bio in raid1_write_request()

On Fri, Mar 8, 2024 at 6:29 AM Coly Li <[email protected]> wrote:
>
> On Fri, Mar 08, 2024 at 05:37:26PM +0800, Yu Kuai wrote:
> > From: Yu Kuai <[email protected]>
> >
> > [...]
> >
>
> Yes, the panic doesn't show up anymore with this patch.
>
> Thanks for the fixup.
>
>
> > Fixes: 992db13a4aee ("md/raid1: free the r1bio before waiting for blocked rdev")
> > Signed-off-by: Yu Kuai <[email protected]>
>
> Reported-and-tested-by: Coly Li <[email protected]>

Applied to md-6.9. Thanks!

Song