syzbot has bisected this issue to:

commit 4cd4aed63125ccd4efc35162627827491c2a7be7
Author: Christoph Hellwig <[email protected]>
Date:   Fri May 27 08:43:20 2022 +0000

    btrfs: fold repair_io_failure into btrfs_repair_eb_io_failure

bisection log:  https://syzkaller.appspot.com/x/bisect.txt?x=1332525ff00000
start commit:   ff539ac73ea5 Add linux-next specific files for 20220609
git tree:       linux-next
final oops:     https://syzkaller.appspot.com/x/report.txt?x=10b2525ff00000
console output: https://syzkaller.appspot.com/x/log.txt?x=1732525ff00000
kernel config:  https://syzkaller.appspot.com/x/.config?x=a5002042f00a8bce
dashboard link: https://syzkaller.appspot.com/bug?extid=d2dd123304b4ae59f1bd
syz repro:      https://syzkaller.appspot.com/x/repro.syz?x=10d6d7cff00000
C reproducer:   https://syzkaller.appspot.com/x/repro.c?x=1113b2bff00000

Reported-by: [email protected]
Fixes: 4cd4aed63125 ("btrfs: fold repair_io_failure into btrfs_repair_eb_io_failure")

For information about bisection process see: https://goo.gl/tpsmEJ#bisection
On Fri, Jun 10, 2022 at 12:10:19AM -0700, syzbot wrote:
> syzbot has bisected this issue to:
>
> commit 4cd4aed63125ccd4efc35162627827491c2a7be7
> Author: Christoph Hellwig <[email protected]>
> Date: Fri May 27 08:43:20 2022 +0000
>
> btrfs: fold repair_io_failure into btrfs_repair_eb_io_failure
Josef also reported a crash and found a bug in the patch, now added as a
fixup that'll be in for-next:
diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
index 89a319e65197..5eac9ffb7499 100644
--- a/fs/btrfs/extent_io.c
+++ b/fs/btrfs/extent_io.c
@@ -2260,7 +2260,7 @@ int btrfs_repair_eb_io_failure(const struct extent_buffer *eb, int mirror_num)
 		__bio_add_page(&bio, p, PAGE_SIZE, start - page_offset(p));
 		ret = btrfs_map_repair_bio(fs_info, &bio, mirror_num);
 		bio_uninit(&bio);
-
+		start += PAGE_SIZE;
 		if (ret)
 			return ret;
 	}
---
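
For context, the loop being patched repairs an extent buffer one page at a
time, with start tracking the logical address of the page currently being
written back. A simplified sketch of the fixed loop follows; apart from
__bio_add_page(), btrfs_map_repair_bio() and the added start += PAGE_SIZE,
the setup details are assumptions rather than the exact code from the tree:

u64 start = eb->start;	/* logical address of the extent buffer */
int i, ret;

for (i = 0; i < num_extent_pages(eb); i++) {
	struct page *p = eb->pages[i];
	struct bio_vec bvec;
	struct bio bio;

	/* One synchronous single-page write bio per iteration. */
	bio_init(&bio, NULL, &bvec, 1, REQ_OP_WRITE | REQ_SYNC);
	/*
	 * Assumed: the repair helper translates this logical sector to
	 * the physical location on the chosen mirror.
	 */
	bio.bi_iter.bi_sector = start >> SECTOR_SHIFT;
	__bio_add_page(&bio, p, PAGE_SIZE, start - page_offset(p));

	ret = btrfs_map_repair_bio(fs_info, &bio, mirror_num);
	bio_uninit(&bio);

	start += PAGE_SIZE;	/* the line the fixup adds */
	if (ret)
		return ret;
}

Without the increment, every iteration after the first still uses the first
page's logical address, so the in-page offset start - page_offset(p) is
computed against the wrong page.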
On Mon, Jun 13, 2022 at 09:39:12PM +0200, David Sterba wrote:
> On Fri, Jun 10, 2022 at 12:10:19AM -0700, syzbot wrote:
> > syzbot has bisected this issue to:
> >
> > commit 4cd4aed63125ccd4efc35162627827491c2a7be7
> > Author: Christoph Hellwig <[email protected]>
> > Date: Fri May 27 08:43:20 2022 +0000
> >
> > btrfs: fold repair_io_failure into btrfs_repair_eb_io_failure
>
> Josef also reported a crash and found a bug in the patch, now added as a
> fixup that'll be in for-next:
The patch looks correct to me. Two things to note here:

- I hadn't realized you had queued up the series. I've actually
  started to merge some of my bio work with the bio-split-at-submission-time
  work from Qu, and after a few iterations I think I would do the
  repair code a bit differently based on that. Can you just drop the
  series for now?
- I find it interesting that syzbot hits btrfs metadata repair.
  xfstests seems to have no coverage, and I could not come up with a
  good idea of how to properly test it. Does anyone have a good idea
  on how to intentionally corrupt metadata in a deterministic way?
On 2022/6/14 15:17, Christoph Hellwig wrote:
> On Mon, Jun 13, 2022 at 09:39:12PM +0200, David Sterba wrote:
>> On Fri, Jun 10, 2022 at 12:10:19AM -0700, syzbot wrote:
>>> syzbot has bisected this issue to:
>>>
>>> commit 4cd4aed63125ccd4efc35162627827491c2a7be7
>>> Author: Christoph Hellwig <[email protected]>
>>> Date: Fri May 27 08:43:20 2022 +0000
>>>
>>> btrfs: fold repair_io_failure into btrfs_repair_eb_io_failure
>>
>> Josef also reported a crash and found a bug in the patch, now added as a
>> fixup that'll be in for-next:
>
> The patch looks correct to me. Two things to note here:
>
> - I hadn't realized you had queued up the series. I've actually
>   started to merge some of my bio work with the bio-split-at-submission-time
>   work from Qu, and after a few iterations I think I would do the
>   repair code a bit differently based on that. Can you just drop the
>   series for now?
> - I find it interesting that syzbot hits btrfs metadata repair.
>   xfstests seems to have no coverage, and I could not come up with a
>   good idea of how to properly test it. Does anyone have a good idea
>   on how to intentionally corrupt metadata in a deterministic way?
The same way as data?

Use btrfs-map-logical to find the physical location of a mirror, write
4 bytes of zeros into that location, and call it a day.

Although for metadata, you may want to choose a tree block that would
definitely get read; the tree root is a good candidate.
Thanks,
Qu
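
A minimal sketch of that corruption step: the short program below zeroes 4
bytes of one metadata mirror on an unmounted device. It is illustrative,
not an existing tool; the physical offset of the mirror has to be looked up
beforehand with btrfs-map-logical, and the metadata profile needs redundancy
(e.g. DUP or RAID1) so the repair path has a good copy left to read from.

/* corrupt-mirror.c: zero 4 bytes at a physical offset on a device. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	char zeros[4] = { 0 };
	off_t off;
	int fd;

	if (argc != 3) {
		fprintf(stderr, "usage: %s <device> <physical-offset>\n",
			argv[0]);
		return 1;
	}
	off = strtoll(argv[2], NULL, 0);

	fd = open(argv[1], O_WRONLY);
	if (fd < 0) {
		perror("open");
		return 1;
	}
	/* Overwrite 4 bytes of the chosen mirror with zeros. */
	if (pwrite(fd, zeros, sizeof(zeros), off) != (ssize_t)sizeof(zeros)) {
		perror("pwrite");
		close(fd);
		return 1;
	}
	fsync(fd);
	close(fd);
	return 0;
}

If the corrupted block is the tree root, mounting the filesystem afterwards
reads it immediately, hits the checksum mismatch on the bad mirror, and
should exercise the repair path deterministically.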
On Tue, Jun 14, 2022 at 04:50:22PM +0800, Qu Wenruo wrote:
> The same way as data?
>
> Use btrfs-map-logical to find the physical location of a mirror, write
> 4 bytes of zeros into that location, and call it a day.
>
> Although for metadata, you may want to choose a tree block that would
> definitely get read; the tree root is a good candidate.
And how do I find out the logical address of the tree root?
On 2022/6/15 21:21, Christoph Hellwig wrote:
> On Tue, Jun 14, 2022 at 04:50:22PM +0800, Qu Wenruo wrote:
>> The same way as data?
>>
>> Use btrfs-map-logical to find the physical location of a mirror, write
>> 4 bytes of zeros into that location, and call it a day.
>>
>> Although for metadata, you may want to choose a tree block that would
>> definitely get read; the tree root is a good candidate.
>
> And how do I find out the logical address of the tree root?
For the tree root, "btrfs ins dump-super <dev> | grep '^root\s'".

For other tree blocks, "btrfs ins dump-tree <dev>", then grep for the
keywords you need.
Thanks,
Qu
On Thu, Jun 16, 2022 at 05:27:04AM +0800, Qu Wenruo wrote:
>> And how do I find out the logical address of the tree root?
>
> For the tree root, "btrfs ins dump-super <dev> | grep '^root\s'".
>
> For other tree blocks, "btrfs ins dump-tree <dev>", then grep for the
> keywords you need.
Thanks a lot!
On Tue, Jun 14, 2022 at 09:17:57AM +0200, Christoph Hellwig wrote:
> On Mon, Jun 13, 2022 at 09:39:12PM +0200, David Sterba wrote:
> > On Fri, Jun 10, 2022 at 12:10:19AM -0700, syzbot wrote:
> > > syzbot has bisected this issue to:
> > >
> > > commit 4cd4aed63125ccd4efc35162627827491c2a7be7
> > > Author: Christoph Hellwig <[email protected]>
> > > Date: Fri May 27 08:43:20 2022 +0000
> > >
> > > btrfs: fold repair_io_failure into btrfs_repair_eb_io_failure
> >
> > Josef also reported a crash and found a bug in the patch, now added as a
> > fixup that'll be in for-next:
>
> The patch looks correct to me. Two things to note here:
>
> - I hadn't realized you had queued up the series.
I did a review and, as it looked OK, I added it to for-next for testing
coverage, but I don't think I've sent any notice about that.
> I've actually
> started to merge some of my bio work with the
> bio-split-at-submission-time work from Qu, and after a few iterations
> I think I would do the repair code a bit differently based on that.
> Can you just drop the series for now?
Yeah, we consistently hit two crashes; one of them has a fix but the other
does not, so I removed the topic branch from for-next. I'll wait for the
reworked version you mention.