2011-11-24 19:46:26

by Tejun Heo

Subject: [PATCH] ext4: fix racy use-after-free in ext4_end_io_dio()

ext4_end_io_dio() queues io_end->work and then clears iocb->private;
however, io_end->work completes the iocb by calling aio_complete(),
which may run and free the iocb before iocb->private is cleared,
leading to a use-after-free.

Detected and tested with slab poisoning.
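
To illustrate, a simplified sketch of the two orderings (not the
literal function; wq, io_end and iocb are as in the hunk below):

	/* before: the worker may call aio_complete(iocb), which can
	 * free the iocb, at any point after the queue_work() */
	queue_work(wq, &io_end->work);
	iocb->private = NULL;		/* possible store to freed iocb */

	/* after: clear iocb->private while the iocb is known to be
	 * live, then hand the io_end off to the workqueue */
	iocb->private = NULL;
	queue_work(wq, &io_end->work);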

Signed-off-by: Tejun Heo <[email protected]>
Reported-by: Kent Overstreet <[email protected]>
Tested-by: Kent Overstreet <[email protected]>
Cc: [email protected]
---
I *think* this is the correct fix but am not too familiar with the
code path, so please proceed with caution.

Thank you.

fs/ext4/inode.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index 240f6e2..0f5583b 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -2806,8 +2806,8 @@ out:
 	spin_unlock_irqrestore(&ei->i_completed_io_lock, flags);
 
 	/* queue the work to convert unwritten extents to written */
-	queue_work(wq, &io_end->work);
 	iocb->private = NULL;
+	queue_work(wq, &io_end->work);
 
 	/* XXX: probably should move into the real I/O completion handler */
 	inode_dio_done(inode);


2011-11-24 23:18:53

by Theodore Ts'o

Subject: Re: [PATCH] ext4: fix racy use-after-free in ext4_end_io_dio()

On Thu, Nov 24, 2011 at 11:46:26AM -0800, Tejun Heo wrote:
> ext4_end_io_dio() queues io_end->work and then clears iocb->private;
> however, io_end->work completes the iocb by calling aio_complete(),
> which may run and free the iocb before iocb->private is cleared,
> leading to a use-after-free.
>
> Detected and tested with slab poisoning.
>
> Signed-off-by: Tejun Heo <[email protected]>
> Reported-by: Kent Overstreet <[email protected]>
> Tested-by: Kent Overstreet <[email protected]>
> Cc: [email protected]

Thanks!! I've been trying to track down this bug for a while. The
repro case I had ran 12 fio's against 12 different file systems
with the following configuration:

[global]
direct=1
ioengine=libaio
iodepth=1
bs=4k
ba=4k
size=128m

[create]
filename=${TESTDIR}
rw=write

... and would leave a few inodes with elevated i_ioend_counts, which
means any attempt to delete those inodes or to unmount the file system
owning those inodes would hang forever.
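
(For the curious, each fio instance was pointed at its own file
system via the environment, along these lines; the job-file name and
mount points below are made up, and fio expands ${TESTDIR} from the
environment:

	TESTDIR=/mnt/test1/file fio create.fio
	TESTDIR=/mnt/test2/file fio create.fio

and so on through /mnt/test12.)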

With your patch this problem goes away.

> I *think* this is the correct fix but am not too familiar with the
> code path, so please proceed with caution.

Looks good to me. Thanks, applied.

> Thank you.

No, thank *you*! :-)

- Ted

P.S. It would be nice to get this into xfstests, but it requires at
least 10-12 (12 to repro it reliably) HDD's, and a fairly high core
count machine in order to reproduce it. I played around with trying
to create a reproducer that worked on a smaller number of disks and/or
fio's/CPU's, but I was never able to manage it.

2011-11-24 23:52:51

by Kent Overstreet

Subject: Re: [PATCH] ext4: fix racy use-after-free in ext4_end_io_dio()

Heh. It took me about 2 seconds to trigger it in a VM :)

One reason it triggered so fast is that my VM test setup runs
everything out of RAM (the disks on the host are files in a tmpfs),
but the main reason we were hitting it is that bcache usually runs the
bio->bi_end_io function out of a workqueue, not irq context.
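
Roughly, the interleaving looks like this (simplified timeline; the
ext4 worker function name is from my reading of the code and may be
off):

	CPU 0 (bcache workqueue, process ctx)   CPU 1 (ext4 workqueue)
	-------------------------------------   ----------------------
	bio->bi_end_io()
	  ext4_end_io_dio()
	    queue_work(wq, &io_end->work)
	                                        ext4_end_io_work()
	                                          aio_complete(iocb, ...)
	                                            (iocb freed here)
	    iocb->private = NULL   <-- store to freed iocb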

It also seems to trigger only when a dio write is extending a file;
the same test setup run against an existing file never causes
(visible) slab corruption, presumably because only writes that
allocate unwritten extents take the workqueue completion path at all.

Do you think this would also explain the corruption D is seeing in
vd? I haven't yet figured out a mechanism, but the bug seems to fit.

On Thu, Nov 24, 2011 at 3:18 PM, Ted Ts'o <[email protected]> wrote:
> On Thu, Nov 24, 2011 at 11:46:26AM -0800, Tejun Heo wrote:
>> ext4_end_io_dio() queues io_end->work and then clears iocb->private;
>> however, io_end->work completes the iocb by calling aio_complete(),
>> which may run and free the iocb before iocb->private is cleared,
>> leading to a use-after-free.
>>
>> Detected and tested with slab poisoning.
>>
>> Signed-off-by: Tejun Heo <[email protected]>
>> Reported-by: Kent Overstreet <[email protected]>
>> Tested-by: Kent Overstreet <[email protected]>
>> Cc: [email protected]
>
> Thanks!! I've been trying to track down this bug for a while. The
> repro case I had ran 12 fio's against 12 different file systems
> with the following configuration:
>
> [global]
> direct=1
> ioengine=libaio
> iodepth=1
> bs=4k
> ba=4k
> size=128m
>
> [create]
> filename=${TESTDIR}
> rw=write
>
> ... and would leave a few inodes with elevated i_ioend_counts, which
> means any attempt to delete those inodes or to unmount the file system
> owning those inodes would hang forever.
>
> With your patch this problem goes away.
>
>> I *think* this is the correct fix but am not too familiar with the
>> code path, so please proceed with caution.
>
> Looks good to me. Thanks, applied.
>
>> Thank you.
>
> No, thank *you*! :-)
>
> - Ted
>
> P.S. It would be nice to get this into xfstests, but it requires at
> least 10-12 (12 to repro it reliably) HDD's, and a fairly high core
> count machine in order to reproduce it. I played around with trying
> to create a reproducer that worked on a smaller number of disks and/or
> fio's/CPU's, but I was never able to manage it.
>