Hi,
when mounting an intentionally corrupted btrfs filesystem I get the
following warning and BUG message. The image can be found here:
http://www.cccmz.de/~snakebyte/btrfs.2.img.bck.bz2
[ 297.406152] device fsid e14cf01de423381a-4bd40b603870018a <6>devid
2147483649 transid 9 /dev/loop0
[ 297.411937] ------------[ cut here ]------------
[ 297.412207] WARNING: at fs/btrfs/disk-io.c:805
read_tree_block+0x4c/0x58()
[ 297.412348] Hardware name: System Name
[ 297.412485] Modules linked in:
[ 297.412684] Pid: 5144, comm: mount Tainted: G W
2.6.29-rc1-00195-g1df11ad #200
[ 297.412872] Call Trace:
[ 297.413027] [<c0123a8d>] warn_slowpath+0x79/0x8f
[ 297.413251] [<c013413d>] ? wake_bit_function+0x0/0x48
[ 297.413395] [<c013f692>] ? print_lock_contention_bug+0x11/0xb2
[ 297.413575] [<c04ad9b6>] ? btrfs_num_copies+0x20/0xba
[ 297.413716] [<c07c6b0e>] ? _spin_unlock+0x2c/0x41
[ 297.413869] [<c04ada46>] ? btrfs_num_copies+0xb0/0xba
[ 297.414007] [<c048f4b5>] ? btree_read_extent_buffer_pages+0x7f/0x95
[ 297.414204] [<c048f7f8>] read_tree_block+0x4c/0x58
[ 297.414339] [<c049264a>] open_ctree+0xa51/0xeff
[ 297.414493] [<c01cc814>] ? disk_name+0x2a/0x6c
[ 297.414624] [<c050d6e7>] ? strlcpy+0x17/0x48
[ 297.414770] [<c047b7e5>] btrfs_get_sb+0x20c/0x3d3
[ 297.414913] [<c017888e>] ? kstrdup+0x2f/0x51
[ 297.415110] [<c0191558>] vfs_kern_mount+0x40/0x7b
[ 297.415242] [<c01915e1>] do_kern_mount+0x37/0xbf
[ 297.415401] [<c01a2c78>] do_mount+0x5cc/0x609
[ 297.415533] [<c07c6db3>] ? lock_kernel+0x19/0x8c
[ 297.415679] [<c01a2d0b>] ? sys_mount+0x56/0xa0
[ 297.415809] [<c01a2d1e>] sys_mount+0x69/0xa0
[ 297.415961] [<c0102ea1>] sysenter_do_call+0x12/0x31
[ 297.416145] ---[ end trace 4eaa2a86a8e2da24 ]---
[ 297.416621] BUG: unable to handle kernel NULL pointer dereference at
000001f4
[ 297.416915] IP: [<c01addd2>] bio_get_nr_vecs+0xf/0x3f
[ 297.417199] *pde = 00000000
[ 297.417398] Oops: 0000 [#1] PREEMPT DEBUG_PAGEALLOC
[ 297.417706] last sysfs file: /sys/block/ram9/range
[ 297.417908] Modules linked in:
[ 297.418086]
[ 297.418161] Pid: 5144, comm: mount Tainted: G W
(2.6.29-rc1-00195-g1df11ad #200) System Name
[ 297.418161] EIP: 0060:[<c01addd2>] EFLAGS: 00010246 CPU: 0
[ 297.418161] EIP is at bio_get_nr_vecs+0xf/0x3f
[ 297.418161] EAX: 00000000 EBX: 00000000 ECX: c12fe0c0 EDX: c48b8040
[ 297.418161] ESI: 00000100 EDI: 00000000 EBP: cb38bc68 ESP: cb38bc44
[ 297.418161] DS: 007b ES: 007b FS: 0000 GS: 0033 SS: 0068
[ 297.418161] Process mount (pid: 5144, ti=cb38b000 task=c14e0000
task.ti=cb38b000)
[ 297.418161] Stack:
[ 297.418161] c04a7ed6 00000001 c12fe0c0 c48b8040 00000000 00001000
cb38bda4 00000000
[ 297.418161] c82f31c0 cb38bd40 c04a9af4 0000e028 00000000 00001000
00000000 c807c3e0
[ 297.418161] cb38bda8 ffffe3fb c04aaf4a 00000000 00000000 00000000
cf53c3f0 00000fff
[ 297.418161] Call Trace:
[ 297.418161] [<c04a7ed6>] ? submit_extent_page+0xea/0x190
[ 297.418161] [<c04a9af4>] ? __extent_read_full_page+0x61d/0x6a0
[ 297.418161] [<c04aaf4a>] ? end_bio_extent_readpage+0x0/0x205
[ 297.418161] [<c0491123>] ? btree_get_extent+0x0/0x16c
[ 297.418161] [<c04a8467>] ? test_range_bit+0xd5/0xdf
[ 297.418161] [<c04aa4c7>] ? read_extent_buffer_pages+0x274/0x3aa
[ 297.418161] [<c07c6b0e>] ? _spin_unlock+0x2c/0x41
[ 297.418161] [<c04acda8>] ? alloc_extent_buffer+0x296/0x348
[ 297.418161] [<c048f477>] ? btree_read_extent_buffer_pages+0x41/0x95
[ 297.418161] [<c0491123>] ? btree_get_extent+0x0/0x16c
[ 297.418161] [<c048f7da>] ? read_tree_block+0x2e/0x58
[ 297.418161] [<c0492712>] ? open_ctree+0xb19/0xeff
[ 297.418161] [<c050d6e7>] ? strlcpy+0x17/0x48
[ 297.418161] [<c047b7e5>] ? btrfs_get_sb+0x20c/0x3d3
[ 297.418161] [<c017888e>] ? kstrdup+0x2f/0x51
[ 297.418161] [<c0191558>] ? vfs_kern_mount+0x40/0x7b
[ 297.418161] [<c01915e1>] ? do_kern_mount+0x37/0xbf
[ 297.418161] [<c01a2c78>] ? do_mount+0x5cc/0x609
[ 297.418161] [<c07c6db3>] ? lock_kernel+0x19/0x8c
[ 297.418161] [<c01a2d0b>] ? sys_mount+0x56/0xa0
[ 297.418161] [<c01a2d1e>] ? sys_mount+0x69/0xa0
[ 297.418161] [<c0102ea1>] ? sysenter_do_call+0x12/0x31
[ 297.418161] Code: 39 45 e8 7c a4 8b 45 e4 e8 81 ec ff ff 89 f8 e8 c8
ec ff ff 83 c4 20 5b 5e 5f 5d c3 55 89 e5 0f 1f 44 00 00 8b 80 bc 00 00
00 5d <8b> 88 f4 01 00 00 8b 81 fc 01 00 00 0f b7 91 06 02 00 00 0f b7
[ 297.418161] EIP: [<c01addd2>] bio_get_nr_vecs+0xf/0x3f SS:ESP
0068:cb38bc44
[ 297.432666] ---[ end trace 4eaa2a86a8e2da25 ]---
The BUG occurs here:
(gdb) l *(bio_get_nr_vecs+0xf)
0xc01addd2 is in bio_get_nr_vecs (include/linux/blkdev.h:785).
780 struct request *, int,
rq_end_io_fn *);
781 extern void blk_unplug(struct request_queue *q);
782
783 static inline struct request_queue *bdev_get_queue(struct
block_device *bdev)
784 {
785 return bdev->bd_disk->queue;
786 }
787
788 static inline void blk_run_backing_dev(struct backing_dev_info
*bdi,
789 struct page *page)
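So the corrupted superblock apparently leaves btrfs submitting a bio against a
block_device whose bd_disk (or its queue) is NULL, and the inlined
bdev_get_queue() in bio_get_nr_vecs() then oopses on it. Just to illustrate the
kind of defensive check that would turn this into a clean I/O error instead of
an oops (a sketch only, not a proposed patch; the helper name is made up):

#include <linux/blkdev.h>

/*
 * Sketch only: refuse to build a bio against a device that has no
 * usable request queue behind it, instead of letting
 * bdev_get_queue()/bio_get_nr_vecs() dereference a NULL pointer.
 * The helper name is hypothetical.
 */
static int sketch_bdev_usable(struct block_device *bdev)
{
	if (!bdev || !bdev->bd_disk || !bdev->bd_disk->queue)
		return 0;	/* caller should fail the read with -EIO */
	return 1;
}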
Greetings, Eric
PS: it seems the btrfs entry in MAINTAINERS is missing.
On Tue, 2009-01-13 at 15:21 +0100, Eric Sesterhenn wrote:
> Hi,
>
> when mounting an intentionally corrupted btrfs filesystem i get the
> following warning and bug message. The image can be found here
> http://www.cccmz.de/~snakebyte/btrfs.2.img.bck.bz2
Thanks for looking at things.
Aside from catching checksumming errors, we're not quite ready for
fuzzer-style attacks. The code will be hardened against this, but it
isn't yet.
-chris
* Chris Mason ([email protected]) wrote:
> On Tue, 2009-01-13 at 15:21 +0100, Eric Sesterhenn wrote:
> > Hi,
> >
> > when mounting an intentionally corrupted btrfs filesystem i get the
> > following warning and bug message. The image can be found here
> > http://www.cccmz.de/~snakebyte/btrfs.2.img.bck.bz2
>
> Thanks for looking at things
>
> Aside from catching checksumming errors, we're not quite ready for
> fuzzer style attacks. The code will be hardened for this but it isn't
> yet.
Does this mean I should stop trying to break it for now, or are you
interested in further reports?
Greetings, Eric
On Tue, 2009-01-13 at 15:43 +0100, Eric Sesterhenn wrote:
> * Chris Mason ([email protected]) wrote:
> > On Tue, 2009-01-13 at 15:21 +0100, Eric Sesterhenn wrote:
> > > Hi,
> > >
> > > when mounting an intentionally corrupted btrfs filesystem i get the
> > > following warning and bug message. The image can be found here
> > > http://www.cccmz.de/~snakebyte/btrfs.2.img.bck.bz2
> >
> > Thanks for looking at things
> >
> > Aside from catching checksumming errors, we're not quite ready for
> > fuzzer style attacks. The code will be hardened for this but it isn't
> > yet.
>
> Does this mean i should stop trying to break it for now or are you interested
> in further reports?
For now, you'll be able to break it at will ;) We'll send out a "please
send us corrupted images" announcement when we've hardened things.
-chris
On Tue 2009-01-13 15:43:07, Eric Sesterhenn wrote:
> * Chris Mason ([email protected]) wrote:
> > On Tue, 2009-01-13 at 15:21 +0100, Eric Sesterhenn wrote:
> > > Hi,
> > >
> > > when mounting an intentionally corrupted btrfs filesystem i get the
> > > following warning and bug message. The image can be found here
> > > http://www.cccmz.de/~snakebyte/btrfs.2.img.bck.bz2
> >
> > Thanks for looking at things
> >
> > Aside from catching checksumming errors, we're not quite ready for
> > fuzzer style attacks. The code will be hardened for this but it isn't
> > yet.
>
> Does this mean i should stop trying to break it for now or are you interested
> in further reports?
Do ext2/3 and vfat survive that kind of attack? Those are 'in
production' and should survive it...
Pavel
--
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html
Hi,
* Pavel Machek ([email protected]) wrote:
> On Tue 2009-01-13 15:43:07, Eric Sesterhenn wrote:
> > * Chris Mason ([email protected]) wrote:
> > > On Tue, 2009-01-13 at 15:21 +0100, Eric Sesterhenn wrote:
> > > > Hi,
> > > >
> > > > when mounting an intentionally corrupted btrfs filesystem i get the
> > > > following warning and bug message. The image can be found here
> > > > http://www.cccmz.de/~snakebyte/btrfs.2.img.bck.bz2
> > >
> > > Thanks for looking at things
> > >
> > > Aside from catching checksumming errors, we're not quite ready for
> > > fuzzer style attacks. The code will be hardened for this but it isn't
> > > yet.
> >
> > Does this mean i should stop trying to break it for now or are you interested
> > in further reports?
>
> Does ext2/3 and vfat survive that kind of attacks? Those are 'in
> production' and should survive it...
I regularly (once or twice a week) test 100 corrupted images of
vfat, udf, msdos, swap, iso9660, ext2, ext3, ext4, minix, bfs, befs,
hfs, hfs+, qnx4, affs and cramfs on each of my two test machines.
They are all pretty stable. One remaining thing on my list I didn't have
time to look into is an issue with fat (msdos) triggering a bug in
buffer.c; the other is a warning with ext4 in jbd2/checkpoint.c:166.
If there is a filesystem you are interested in that's not on the list,
or that you want me to test a bit more, just let me know.
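The corruption step itself is nothing fancy; roughly something like this
(a minimal sketch, not my actual harness; the default file name and flip
count are just examples):

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/*
 * Flip a few random bits in a filesystem image before loop-mounting it.
 * Sketch only: the real runs create a fresh image for every iteration
 * and mount each corrupted copy.
 */
int main(int argc, char **argv)
{
	const char *path = argc > 1 ? argv[1] : "fs.img";
	int flips = argc > 2 ? atoi(argv[2]) : 64;
	FILE *f = fopen(path, "r+b");
	long size;
	int i;

	if (!f) {
		perror(path);
		return 1;
	}
	fseek(f, 0, SEEK_END);
	size = ftell(f);
	if (size <= 0) {
		fclose(f);
		return 1;
	}
	srand(time(NULL));

	for (i = 0; i < flips; i++) {
		long off = rand() % size;
		int c;

		fseek(f, off, SEEK_SET);
		c = fgetc(f);
		fseek(f, off, SEEK_SET);
		fputc(c ^ (1 << (rand() % 8)), f);	/* flip one random bit */
	}
	fclose(f);
	return 0;
}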
Greetings, Eric
Hi!
> > > > Thanks for looking at things
> > > >
> > > > Aside from catching checksumming errors, we're not quite ready for
> > > > fuzzer style attacks. The code will be hardened for this but it isn't
> > > > yet.
> > >
> > > Does this mean i should stop trying to break it for now or are you interested
> > > in further reports?
> >
> > Does ext2/3 and vfat survive that kind of attacks? Those are 'in
> > production' and should survive it...
>
> I regularly (once or twice a week) test 100 corrupted images of
> vfat, udf, msdos, swap, iso9660, ext2, ext3, ext4, minix, bfs, befs,
> hfs, hfs+, qnx4, affs and cramfs on each of my two test machines.
>
> They are all pretty stable, one remaining thing on my list i didnt have
> time to look into was an issue with fat (msdos) triggering a bug in
> buffer.c the other is a warning with ext4 in jbd2/checkpoint.c:166
Good, I did not expect the filesystems to be in such good shape. Thanks!
Pavel
--
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html
On Tue, Jan 20, 2009 at 07:31:50AM +0100, Eric Sesterhenn wrote:
> * Pavel Machek ([email protected]) wrote:
> > Does ext2/3 and vfat survive that kind of attacks? Those are 'in
> > production' and should survive it...
>
> I regularly (once or twice a week) test 100 corrupted images of
> vfat, udf, msdos, swap, iso9660, ext2, ext3, ext4, minix, bfs, befs,
> hfs, hfs+, qnx4, affs and cramfs on each of my two test machines.
Any reason you are not testing XFS in that set?
Cheers,
Dave.
--
Dave Chinner
[email protected]
* Dave Chinner ([email protected]) wrote:
> On Tue, Jan 20, 2009 at 07:31:50AM +0100, Eric Sesterhenn wrote:
> > * Pavel Machek ([email protected]) wrote:
> > > Does ext2/3 and vfat survive that kind of attacks? Those are 'in
> > > production' and should survive it...
> >
> > I regularly (once or twice a week) test 100 corrupted images of
> > vfat, udf, msdos, swap, iso9660, ext2, ext3, ext4, minix, bfs, befs,
> > hfs, hfs+, qnx4, affs and cramfs on each of my two test machines.
>
> Any reason you are not testing XFS in that set?
So far the responses from the XFS folks have been disappointing; if you
are interested in bug reports I can send you some.
Greetings, Eric
On Tue, Jan 20, 2009 at 11:15:03AM +0100, Eric Sesterhenn wrote:
> * Dave Chinner ([email protected]) wrote:
> > On Tue, Jan 20, 2009 at 07:31:50AM +0100, Eric Sesterhenn wrote:
> > > * Pavel Machek ([email protected]) wrote:
> > > > Does ext2/3 and vfat survive that kind of attacks? Those are 'in
> > > > production' and should survive it...
> > >
> > > I regularly (once or twice a week) test 100 corrupted images of
> > > vfat, udf, msdos, swap, iso9660, ext2, ext3, ext4, minix, bfs, befs,
> > > hfs, hfs+, qnx4, affs and cramfs on each of my two test machines.
> >
> > Any reason you are not testing XFS in that set?
>
> So far the responses from xfs folks have been disappointing, if you are
> interested in bugreports i can send you some.
Sure I am. It would be good if you could start testing XFS along
with all the other filesystems and report anything you find.
Cheers,
Dave.
--
Dave Chinner
[email protected]
On Tue, 2009-01-20 at 07:31 +0100, Eric Sesterhenn wrote:
> Hi,
>
> * Pavel Machek ([email protected]) wrote:
>
> > Does ext2/3 and vfat survive that kind of attacks? Those are 'in
> > production' and should survive it...
>
> I regularly (once or twice a week) test 100 corrupted images of
> vfat, udf, msdos, swap, iso9660, ext2, ext3, ext4, minix, bfs, befs,
> hfs, hfs+, qnx4, affs and cramfs on each of my two test machines.
>
> They are all pretty stable, one remaining thing on my list i didnt have
> time to look into was an issue with fat (msdos) triggering a bug in
> buffer.c the other is a warning with ext4 in jbd2/checkpoint.c:166
>
> If there is a filesystem you are interested in thats not on the list
> or that you want me to test a bit more, just let me know
>
squashfs is in the kernel now; it would be good to see that tested as
well. I didn't realize you were doing such extensive tests; thanks for
doing them.
-chris
On Tue, Jan 20, 2009 at 11:59:44PM +1100, Dave Chinner wrote:
> > So far the responses from xfs folks have been disappointing, if you are
> > interested in bugreports i can send you some.
>
> Sure I am. It would be good if you could start testing XFS along
> with all the other filesystems and report anything you find.
I think that was the issue with the debug builds. If you do this
testing, always do it without CONFIG_XFS_DEBUG set, as with that option
we intentionally panic on detected disk corruptions.
* Chris Mason ([email protected]) wrote:
> On Tue, 2009-01-20 at 07:31 +0100, Eric Sesterhenn wrote:
> > Hi,
> >
> > * Pavel Machek ([email protected]) wrote:
> >
> > > Does ext2/3 and vfat survive that kind of attacks? Those are 'in
> > > production' and should survive it...
> >
> > I regularly (once or twice a week) test 100 corrupted images of
> > vfat, udf, msdos, swap, iso9660, ext2, ext3, ext4, minix, bfs, befs,
> > hfs, hfs+, qnx4, affs and cramfs on each of my two test machines.
> >
> > They are all pretty stable, one remaining thing on my list i didnt have
> > time to look into was an issue with fat (msdos) triggering a bug in
> > buffer.c the other is a warning with ext4 in jbd2/checkpoint.c:166
> >
> > If there is a filesystem you are interested in thats not on the list
> > or that you want me to test a bit more, just let me know
> >
>
> squashfs is in the kernel now, that would be good to see as well. I
> didn't realize you were doing such extensive tests, thanks for doing
> them.
I already tested squashfs. One issue is basically a problem with
the zlib API, for which I just posted a patch here:
http://marc.info/?t=123212807300003&r=1&w=2
The other is an overwritten redzone (also reported in this thread:
http://marc.info/?l=linux-fsdevel&m=123212794425497&w=2).
It looks like a length parameter is passed to squashfs_read_data()
which is bigger than ((msblk->block_size >> msblk->devblksize_log2) +
1), so the kcalloc()ed buffer gets overwritten later. Maybe you want to
take a look at this issue. Those are the only two problems I have
seen so far with ~8000 tested squashfs images.
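A check along these lines would catch the bad length before the array
gets overrun (just a sketch against the squashfs_sb_info fields, not a
patch; the helper name is made up):

/*
 * Sketch only: the kcalloc()ed array in squashfs_read_data() holds
 * ((msblk->block_size >> msblk->devblksize_log2) + 1) entries, so the
 * number of device blocks implied by the on-disk length has to fit.
 * struct squashfs_sb_info is the real one from
 * fs/squashfs/squashfs_fs_sb.h; sketch_length_ok() is hypothetical.
 */
static int sketch_length_ok(struct squashfs_sb_info *msblk, int length)
{
	int max = (msblk->block_size >> msblk->devblksize_log2) + 1;
	int needed = (length + msblk->devblksize - 1) >> msblk->devblksize_log2;

	return length > 0 && needed <= max;
}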
Greetings, Eric
* Dave Chinner ([email protected]) wrote:
> On Tue, Jan 20, 2009 at 11:15:03AM +0100, Eric Sesterhenn wrote:
> > * Dave Chinner ([email protected]) wrote:
> > > On Tue, Jan 20, 2009 at 07:31:50AM +0100, Eric Sesterhenn wrote:
> > > > * Pavel Machek ([email protected]) wrote:
> > > > > Does ext2/3 and vfat survive that kind of attacks? Those are 'in
> > > > > production' and should survive it...
> > > >
> > > > I regularly (once or twice a week) test 100 corrupted images of
> > > > vfat, udf, msdos, swap, iso9660, ext2, ext3, ext4, minix, bfs, befs,
> > > > hfs, hfs+, qnx4, affs and cramfs on each of my two test machines.
> > >
> > > Any reason you are not testing XFS in that set?
> >
> > So far the responses from xfs folks have been disappointing, if you are
> > interested in bugreports i can send you some.
>
> Sure I am. It would be good if you could start testing XFS along
> with all the other filesystems and report anything you find.
OK, I won't report stuff with only XFS-internal backtraces from
xfs_error_report(); or are those interesting to you?
This one occurs during mount, and the box is dead afterwards.
The image can be found here:
http://www.cccmz.de/~snakebyte/xfs.11.img.bz2
I see this every ~10 images, which makes further testing hard :)
[ 235.250167] ------------[ cut here ]------------
[ 235.250354] kernel BUG at mm/vmalloc.c:164!
[ 235.250478] invalid opcode: 0000 [#1] PREEMPT DEBUG_PAGEALLOC
[ 235.250869] last sysfs file: /sys/block/ram9/range
[ 235.250998] Modules linked in:
[ 235.251037]
[ 235.251037] Pid: 5352, comm: mount Not tainted
(2.6.29-rc2-00021-gd84d31c #216) System Name
[ 235.251037] EIP: 0060:[<c0182af1>] EFLAGS: 00010246 CPU: 0
[ 235.251037] EIP is at vmap_page_range+0x19/0x112
[ 235.251037] EAX: d1000000 EBX: d1000000 ECX: 00000163 EDX: d1000000
[ 235.251037] ESI: 00000003 EDI: d1000000 EBP: cbbd2c08 ESP: cbbd2be8
[ 235.251037] DS: 007b ES: 007b FS: 0000 GS: 0033 SS: 0068
[ 235.251037] Process mount (pid: 5352, ti=cbbd2000 task=cbb85b00
task.ti=cbbd2000)
[ 235.251037] Stack:
[ 235.251037] 00000246 cbb85b00 00000163 c01414cf cbbd2c0c d1000000
00000003 cba0f810
[ 235.251037] cbbd2c40 c018367c c848e280 00100000 00000000 c848e280
00000000 00000014
[ 235.251037] d1000000 cba0f944 00000000 c848e160 00000000 c848e160
cbbd2c54 c03b2e1e
[ 235.251037] Call Trace:
[ 235.251037] [<c01414cf>] ? trace_hardirqs_on+0xb/0xd
[ 235.251037] [<c018367c>] ? vm_map_ram+0x36e/0x38a
[ 235.251037] [<c03b2e1e>] ? _xfs_buf_map_pages+0x42/0x6d
[ 235.251037] [<c03b3773>] ? xfs_buf_get_noaddr+0xbc/0x11f
[ 235.251037] [<c03a2406>] ? xlog_get_bp+0x5a/0x5d
[ 235.251037] [<c03a28fa>] ? xlog_find_verify_log_record+0x26/0x208
[ 235.251037] [<c03a3521>] ? xlog_find_zeroed+0x1d6/0x214
[ 235.251037] [<c03a3584>] ? xlog_find_head+0x25/0x358
[ 235.251037] [<c011ca1f>] ? __enqueue_entity+0xa1/0xa9
[ 235.251037] [<c0107084>] ? native_sched_clock+0x41/0x68
[ 235.251037] [<c03a38d2>] ? xlog_find_tail+0x1b/0x3fa
[ 235.251037] [<c01412b5>] ? mark_held_locks+0x43/0x5a
[ 235.251037] [<c07b06a8>] ? _spin_unlock_irqrestore+0x3b/0x5d
[ 235.251037] [<c07b06b4>] ? _spin_unlock_irqrestore+0x47/0x5d
[ 235.251037] [<c01209c6>] ? try_to_wake_up+0x12f/0x13a
[ 235.251037] [<c03a5654>] ? xlog_recover+0x19/0x81
[ 235.251037] [<c03aaac3>] ? xfs_trans_ail_init+0x4b/0x64
[ 235.251037] [<c039f97d>] ? xfs_log_mount+0xef/0x13d
[ 235.251037] [<c03a7152>] ? xfs_mountfs+0x30d/0x5b8
[ 235.251037] [<c0506101>] ? __debug_object_init+0x28b/0x293
[ 235.251037] [<c012b3b5>] ? init_timer+0x1c/0x1f
[ 235.251037] [<c03a7aa1>] ? xfs_mru_cache_create+0x114/0x14e
[ 235.251037] [<c03ba205>] ? xfs_fs_fill_super+0x196/0x2e5
[ 235.251037] [<c01919c5>] ? get_sb_bdev+0xf1/0x13f
[ 235.251037] [<c0178896>] ? kstrdup+0x2f/0x51
[ 235.251037] [<c03b881b>] ? xfs_fs_get_sb+0x18/0x1a
[ 235.251037] [<c03ba06f>] ? xfs_fs_fill_super+0x0/0x2e5
[ 235.251037] [<c019159c>] ? vfs_kern_mount+0x40/0x7b
[ 235.251037] [<c0191625>] ? do_kern_mount+0x37/0xbf
[ 235.251037] [<c01a2cb0>] ? do_mount+0x5cc/0x609
[ 235.251037] [<c07b087b>] ? lock_kernel+0x19/0x8c
[ 235.251037] [<c01a2d43>] ? sys_mount+0x56/0xa0
[ 235.251037] [<c01a2d56>] ? sys_mount+0x69/0xa0
[ 235.251037] [<c0102ea1>] ? sysenter_do_call+0x12/0x31
[ 235.251037] Code: 0f 0b eb fe ba 01 00 00 00 89 c8 e8 02 ff ff ff 5d
c3 55 89 e5 57 56 53 83 ec 14 0f 1f 44 00 00 39 d0 89 c3 89 d7 89 4d e8
72 04 <0f> 0b eb fe c1 e8 16 8d 34 85 00 00 00 00 03 35 84 91 a3 c0 8d
[ 235.251037] EIP: [<c0182af1>] vmap_page_range+0x19/0x112 SS:ESP
0068:cbbd2be8
[ 235.269534] ---[ end trace 2d923bf9b290e3d9 ]---
[ 235.274695] BUG: unable to handle kernel NULL pointer dereference at
0000000f
[ 235.275005] IP: [<c0136592>] enqueue_hrtimer+0x2d/0x85
[ 235.275041] *pde = 00000000
[ 235.275041] Oops: 0000 [#2] PREEMPT DEBUG_PAGEALLOC
[ 235.275041] last sysfs file: /sys/block/ram9/range
[ 235.275041] Modules linked in:
[ 235.275041]
[ 235.275041] Pid: 5356, comm: syslogd Tainted: G D
(2.6.29-rc2-00021-gd84d31c #216) System Name
[ 235.275041] EIP: 0060:[<c0136592>] EFLAGS: 00010086 CPU: 0
[ 235.275041] EIP is at enqueue_hrtimer+0x2d/0x85
[ 235.275041] EAX: cba3cb00 EBX: cbbbad24 ECX: ffffffff EDX: 0000003a
[ 235.275041] ESI: cba3cb04 EDI: c0a42d84 EBP: c8429f0c ESP: c8429efc
[ 235.275041] DS: 007b ES: 007b FS: 0000 GS: 0033 SS: 0068
[ 235.275041] Process syslogd (pid: 5356, ti=c8429000 task=cbb14e00
task.ti=c8429000)
[ 235.275041] Stack:
[ 235.275041] 00000000 00000000 00000000 cbbbad24 c8429f2c c0136a47
c0a42d84 00000096
[ 235.275041] 00000000 cbb14e00 7e11d600 00000003 c8429f3c c0136b2a
00000000 00000001
[ 235.275041] c8429f80 c0126d3e 00000001 00000000 00000000 cbbbacc0
bfbc59dc c8429f7c
[ 235.275041] Call Trace:
[ 235.275041] [<c0136a47>] ? hrtimer_start_range_ns+0xc2/0x193
[ 235.275041] [<c0136b2a>] ? hrtimer_start+0x12/0x14
[ 235.275041] [<c0126d3e>] ? do_setitimer+0x131/0x2d4
[ 235.275041] [<c0126f93>] ? alarm_setitimer+0x3a/0x59
[ 235.275041] [<c012b5b1>] ? sys_alarm+0x10/0x12
[ 235.275041] [<c0102ea1>] ? sysenter_do_call+0x12/0x31
[ 235.275041] Code: e5 57 56 53 83 ec 04 0f 1f 44 00 00 89 d7 89 c3 8d
72 08 ba c0 2d a4 c0 e8 a1 f5 3c 00 31 c0 c7 45 f0 01 00 00 00 eb 23 8b
53 10 <3b> 51 10 8b 43 0c 7f 0c 7c 05 3b 41 0c 73 05 8d 71 08 eb 0a 8d
[ 235.275041] EIP: [<c0136592>] enqueue_hrtimer+0x2d/0x85 SS:ESP
0068:c8429efc
[ 235.275041] ---[ end trace 2d923bf9b290e3da ]---
[ 235.275041] note: syslogd[5356] exited with preempt_count 2
Different trace from the same issue:
[ 329.292534] ------------[ cut here ]------------
[ 329.292721] kernel BUG at mm/vmalloc.c:164!
[ 329.292849] invalid opcode: 0000 [#1] PREEMPT DEBUG_PAGEALLOC
[ 329.293037] last sysfs file: /sys/block/ram9/range
[ 329.293037] Modules linked in:
[ 329.293037]
[ 329.293037] Pid: 5860, comm: mount Tainted: G W
(2.6.29-rc2-00021-gd84d31c #216) System Name
[ 329.293037] EIP: 0060:[<c0182af1>] EFLAGS: 00010246 CPU: 0
[ 329.293037] EIP is at vmap_page_range+0x19/0x112
[ 329.293037] EAX: d1800000 EBX: d1800000 ECX: 00000163 EDX: d1800000
[ 329.293037] ESI: 00000005 EDI: d1800000 EBP: cacacc08 ESP: cacacbe8
[ 329.293037] DS: 007b ES: 007b FS: 0000 GS: 0033 SS: 0068
[ 329.293037] Process mount (pid: 5860, ti=cacac000 task=caf71a00
task.ti=cacac000)
[ 329.293037] Stack:
[ 329.293037] 00000246 caf71a00 00000163 c01414cf cacacc0c d1800000
00000005 cae795e0
[ 329.293037] cacacc40 c018367c c280e120 00100000 00000000 c280e120
00000000 00000014
[ 329.293037] d1800000 cae79714 00000000 c280e000 00000000 c280e000
cacacc54 c03b2e1e
[ 329.293037] Call Trace:
[ 329.293037] [<c01414cf>] ? trace_hardirqs_on+0xb/0xd
[ 329.293037] [<c018367c>] ? vm_map_ram+0x36e/0x38a
[ 329.293037] [<c03b2e1e>] ? _xfs_buf_map_pages+0x42/0x6d
[ 329.293037] [<c03b3773>] ? xfs_buf_get_noaddr+0xbc/0x11f
[ 329.293037] [<c03a2406>] ? xlog_get_bp+0x5a/0x5d
[ 329.293037] [<c03a28fa>] ? xlog_find_verify_log_record+0x26/0x208
[ 329.293037] [<c03a3521>] ? xlog_find_zeroed+0x1d6/0x214
[ 329.293037] [<c03a3584>] ? xlog_find_head+0x25/0x358
[ 329.293037] [<c011ca1f>] ? __enqueue_entity+0xa1/0xa9
[ 329.293037] [<c03a38d2>] ? xlog_find_tail+0x1b/0x3fa
[ 329.293037] [<c01414cf>] ? trace_hardirqs_on+0xb/0xd
[ 329.293037] [<c07b06a8>] ? _spin_unlock_irqrestore+0x3b/0x5d
[ 329.293037] [<c07b06b4>] ? _spin_unlock_irqrestore+0x47/0x5d
[ 329.293037] [<c01209c6>] ? try_to_wake_up+0x12f/0x13a
[ 329.293037] [<c03a5654>] ? xlog_recover+0x19/0x81
[ 329.293037] [<c03aaac3>] ? xfs_trans_ail_init+0x4b/0x64
[ 329.293037] [<c039f97d>] ? xfs_log_mount+0xef/0x13d
[ 329.293037] [<c03a7152>] ? xfs_mountfs+0x30d/0x5b8
[ 329.293037] [<c0506101>] ? __debug_object_init+0x28b/0x293
[ 329.293037] [<c012b3b5>] ? init_timer+0x1c/0x1f
[ 329.293037] [<c03a7aa1>] ? xfs_mru_cache_create+0x114/0x14e
[ 329.293037] [<c03ba205>] ? xfs_fs_fill_super+0x196/0x2e5
[ 329.293037] [<c01919c5>] ? get_sb_bdev+0xf1/0x13f
[ 329.293037] [<c0178896>] ? kstrdup+0x2f/0x51
[ 329.293037] [<c03b881b>] ? xfs_fs_get_sb+0x18/0x1a
[ 329.293037] [<c03ba06f>] ? xfs_fs_fill_super+0x0/0x2e5
[ 329.293037] [<c019159c>] ? vfs_kern_mount+0x40/0x7b
[ 329.293037] [<c0191625>] ? do_kern_mount+0x37/0xbf
[ 329.293037] [<c01a2cb0>] ? do_mount+0x5cc/0x609
[ 329.293037] [<c07b087b>] ? lock_kernel+0x19/0x8c
[ 329.293037] [<c01a2d43>] ? sys_mount+0x56/0xa0
[ 329.293037] [<c01a2d56>] ? sys_mount+0x69/0xa0
[ 329.293037] [<c0102ea1>] ? sysenter_do_call+0x12/0x31
[ 329.293037] Code: 0f 0b eb fe ba 01 00 00 00 89 c8 e8 02 ff ff ff 5d
c3 55 89 e5 57 56 53 83 ec 14 0f 1f 44 00 00 39 d0 89 c3 89 d7 89 4d e8
72 04 <0f> 0b eb fe c1 e8 16 8d 34 85 00 00 00 00 03 35 84 91 a3 c0 8d
[ 329.293037] EIP: [<c0182af1>] vmap_page_range+0x19/0x112 SS:ESP
0068:cacacbe8
[ 329.310688] ---[ end trace a7919e7f17c0a727 ]---
Greetings, Eric
On Tue 2009-01-20 18:34:55, Eric Sesterhenn wrote:
> * Dave Chinner ([email protected]) wrote:
> > On Tue, Jan 20, 2009 at 11:15:03AM +0100, Eric Sesterhenn wrote:
> > > * Dave Chinner ([email protected]) wrote:
> > > > On Tue, Jan 20, 2009 at 07:31:50AM +0100, Eric Sesterhenn wrote:
> > > > > * Pavel Machek ([email protected]) wrote:
> > > > > > Does ext2/3 and vfat survive that kind of attacks? Those are 'in
> > > > > > production' and should survive it...
> > > > >
> > > > > I regularly (once or twice a week) test 100 corrupted images of
> > > > > vfat, udf, msdos, swap, iso9660, ext2, ext3, ext4, minix, bfs, befs,
> > > > > hfs, hfs+, qnx4, affs and cramfs on each of my two test machines.
> > > >
> > > > Any reason you are not testing XFS in that set?
> > >
> > > So far the responses from xfs folks have been disappointing, if you are
> > > interested in bugreports i can send you some.
> >
> > Sure I am. It would be good if you could start testing XFS along
> > with all the other filesystems and report anything you find.
>
> Ok, i wont report stuff with only xfs-internal backtraces from
> xfs_error_report() or are they interesting to you?
>
> This occurs during mount, box is dead afterwards
> Image can be found here :
> http://www.cccmz.de/~snakebyte/xfs.11.img.bz2
> I see this every ~10 images, which makes further testing hard :)
BTW, have you considered trying to run fsck.* on such images? I had a
lot of fun with e2fsck/e3fsck/fsck.vfat.
Pavel
--
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html
On Tue 2009-01-20 08:28:29, Christoph Hellwig wrote:
> On Tue, Jan 20, 2009 at 11:59:44PM +1100, Dave Chinner wrote:
> > > So far the responses from xfs folks have been disappointing, if you are
> > > interested in bugreports i can send you some.
> >
> > Sure I am. It would be good if you could start testing XFS along
> > with all the other filesystems and report anything you find.
>
> I think that was the issue with the debug builds. If you do this
> testing always do it without CONFIG_XFS_DEBUG set as with that option
> we intentionally panic on detected disk corruptions.
Uhuh, *_DEBUG options are not supposed to make the kernel less
stable/robust. Should that crashing functionality be guarded by a
command-line option or something? ext2 has an errors=panic mount
option...
Pavel
--
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html
[drop btrfs list from this thread]
On Tue, Jan 20, 2009 at 06:34:55PM +0100, Eric Sesterhenn wrote:
> * Dave Chinner ([email protected]) wrote:
> > Sure I am. It would be good if you could start testing XFS along
> > with all the other filesystems and report anything you find.
>
> Ok, i wont report stuff with only xfs-internal backtraces from
> xfs_error_report() or are they interesting to you?
If it catches the corruption and shuts down, then that's a valid
response to corruption. Mostly those reports are not interesting.
> This occurs during mount, box is dead afterwards
> Image can be found here :
> http://www.cccmz.de/~snakebyte/xfs.11.img.bz2
> I see this every ~10 images, which makes further testing hard :)
For future bugs, can you start a new thread on [email protected]
for each report?
> [ 235.250167] ------------[ cut here ]------------
> [ 235.250354] kernel BUG at mm/vmalloc.c:164!
> [ 235.250478] invalid opcode: 0000 [#1] PREEMPT DEBUG_PAGEALLOC
> [ 235.250869] last sysfs file: /sys/block/ram9/range
> [ 235.250998] Modules linked in:
> [ 235.251037]
> [ 235.251037] Pid: 5352, comm: mount Not tainted
> (2.6.29-rc2-00021-gd84d31c #216) System Name
> [ 235.251037] EIP: 0060:[<c0182af1>] EFLAGS: 00010246 CPU: 0
> [ 235.251037] EIP is at vmap_page_range+0x19/0x112
> [ 235.251037] EAX: d1000000 EBX: d1000000 ECX: 00000163 EDX: d1000000
> [ 235.251037] ESI: 00000003 EDI: d1000000 EBP: cbbd2c08 ESP: cbbd2be8
> [ 235.251037] DS: 007b ES: 007b FS: 0000 GS: 0033 SS: 0068
> [ 235.251037] Process mount (pid: 5352, ti=cbbd2000 task=cbb85b00
> task.ti=cbbd2000)
> [ 235.251037] Stack:
> [ 235.251037] 00000246 cbb85b00 00000163 c01414cf cbbd2c0c d1000000
> 00000003 cba0f810
> [ 235.251037] cbbd2c40 c018367c c848e280 00100000 00000000 c848e280
> 00000000 00000014
> [ 235.251037] d1000000 cba0f944 00000000 c848e160 00000000 c848e160
> cbbd2c54 c03b2e1e
> [ 235.251037] Call Trace:
> [ 235.251037] [<c01414cf>] ? trace_hardirqs_on+0xb/0xd
> [ 235.251037] [<c018367c>] ? vm_map_ram+0x36e/0x38a
> [ 235.251037] [<c03b2e1e>] ? _xfs_buf_map_pages+0x42/0x6d
> [ 235.251037] [<c03b3773>] ? xfs_buf_get_noaddr+0xbc/0x11f
> [ 235.251037] [<c03a2406>] ? xlog_get_bp+0x5a/0x5d
> [ 235.251037] [<c03a28fa>] ? xlog_find_verify_log_record+0x26/0x208
> [ 235.251037] [<c03a3521>] ? xlog_find_zeroed+0x1d6/0x214
> [ 235.251037] [<c03a3584>] ? xlog_find_head+0x25/0x358
.....
Ok, that's crashing in the new vmap code. It might take a couple
of days before I get a chance to look at this, but I've cc'd Nick Piggin
in case he has a chance to look at it before that. It's probably
an XFS bug, anyway.
Cheers,
Dave.
--
Dave Chinner
[email protected]
On Tue, Jan 20, 2009 at 11:20:19PM +0100, Pavel Machek wrote:
> On Tue 2009-01-20 08:28:29, Christoph Hellwig wrote:
> > On Tue, Jan 20, 2009 at 11:59:44PM +1100, Dave Chinner wrote:
> > > > So far the responses from xfs folks have been disappointing, if you are
> > > > interested in bugreports i can send you some.
> > >
> > > Sure I am. It would be good if you could start testing XFS along
> > > with all the other filesystems and report anything you find.
> >
> > I think that was the issue with the debug builds. If you do this
> > testing always do it without CONFIG_XFS_DEBUG set as with that option
> > we intentionally panic on detected disk corruptions.
>
> Uhuh, *_DEBUG options are not supposed to make kernel less
> stable/robust. Should that crashing functionality be guarded with
> command line option or something? ext2 has errors=panic mount
> option...
No, it's a debugging option that is described as:
"Say N unless you are an XFS developer, or you play one on TV."
Seriously, if you aren't trying to develop XFS stuff then *don't turn it
on*.
Cheers,
Dave.
--
Dave Chinner
[email protected]
On Wednesday 21 January 2009 14:57:03 Dave Chinner wrote:
> [drop btrfs list from this thread]
>
> On Tue, Jan 20, 2009 at 06:34:55PM +0100, Eric Sesterhenn wrote:
> > [ 235.250167] ------------[ cut here ]------------
> > [ 235.250354] kernel BUG at mm/vmalloc.c:164!
> > [ 235.250478] invalid opcode: 0000 [#1] PREEMPT DEBUG_PAGEALLOC
> > [ 235.250869] last sysfs file: /sys/block/ram9/range
> > [ 235.250998] Modules linked in:
> > [ 235.251037]
> > [ 235.251037] Pid: 5352, comm: mount Not tainted
> > (2.6.29-rc2-00021-gd84d31c #216) System Name
> > [ 235.251037] EIP: 0060:[<c0182af1>] EFLAGS: 00010246 CPU: 0
> > [ 235.251037] EIP is at vmap_page_range+0x19/0x112
> > [ 235.251037] EAX: d1000000 EBX: d1000000 ECX: 00000163 EDX: d1000000
> > [ 235.251037] ESI: 00000003 EDI: d1000000 EBP: cbbd2c08 ESP: cbbd2be8
> > [ 235.251037] DS: 007b ES: 007b FS: 0000 GS: 0033 SS: 0068
> > [ 235.251037] Process mount (pid: 5352, ti=cbbd2000 task=cbb85b00
> > task.ti=cbbd2000)
> > [ 235.251037] Stack:
> > [ 235.251037] 00000246 cbb85b00 00000163 c01414cf cbbd2c0c d1000000
> > 00000003 cba0f810
> > [ 235.251037] cbbd2c40 c018367c c848e280 00100000 00000000 c848e280
> > 00000000 00000014
> > [ 235.251037] d1000000 cba0f944 00000000 c848e160 00000000 c848e160
> > cbbd2c54 c03b2e1e
> > [ 235.251037] Call Trace:
> > [ 235.251037] [<c01414cf>] ? trace_hardirqs_on+0xb/0xd
> > [ 235.251037] [<c018367c>] ? vm_map_ram+0x36e/0x38a
> > [ 235.251037] [<c03b2e1e>] ? _xfs_buf_map_pages+0x42/0x6d
> > [ 235.251037] [<c03b3773>] ? xfs_buf_get_noaddr+0xbc/0x11f
> > [ 235.251037] [<c03a2406>] ? xlog_get_bp+0x5a/0x5d
> > [ 235.251037] [<c03a28fa>] ? xlog_find_verify_log_record+0x26/0x208
> > [ 235.251037] [<c03a3521>] ? xlog_find_zeroed+0x1d6/0x214
> > [ 235.251037] [<c03a3584>] ? xlog_find_head+0x25/0x358
>
> .....
>
> Ok, that's crashing in the new vmap code. It might take a couple
> of days before I get a chance to look at this, but I've cc'd Nick Piggin
> in case he has a chance to look at it before that. It's probably
> an XFS bug, anyway.
Hmm, it is crashing in BUG_ON(addr >= end); this could happen
if XFS asks to map a really huge (or negative) number of pages and wraps
the range, or if the vmap subsystem returns an address right near the
end of the address range and addr+size wraps (which would be a bug
in vmap of course, but I think maybe less likely).
Printing out the values for addr and end might be instructive (XFS
might be asking to map -ERRNO pages or something).
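Something like this right before the BUG_ON() would show it (debug only,
obviously not for merging):

	/* debug only: dump the range before BUG_ON(addr >= end) fires */
	if (addr >= end) {
		printk(KERN_ERR "vmap_page_range: addr=0x%lx end=0x%lx\n",
		       addr, end);
		dump_stack();
	}
	BUG_ON(addr >= end);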
[really cc nick]
On Wed, Jan 21, 2009 at 02:57:03PM +1100, Dave Chinner wrote:
> [drop btrfs list from this thread]
>
> On Tue, Jan 20, 2009 at 06:34:55PM +0100, Eric Sesterhenn wrote:
> > * Dave Chinner ([email protected]) wrote:
> > > Sure I am. It would be good if you could start testing XFS along
> > > with all the other filesystems and report anything you find.
> >
> > Ok, i wont report stuff with only xfs-internal backtraces from
> > xfs_error_report() or are they interesting to you?
>
> If it catches the corruption and shuts down then that's a valid
> response to corruption. Mostly they are not interesting.
>
> > This occurs during mount, box is dead afterwards
> > Image can be found here :
> > http://www.cccmz.de/~snakebyte/xfs.11.img.bz2
> > I see this every ~10 images, which makes further testing hard :)
>
> For future bugs, can you start a new thread on [email protected]
> for each report?
>
> > [ 235.250167] ------------[ cut here ]------------
> > [ 235.250354] kernel BUG at mm/vmalloc.c:164!
> > [ 235.250478] invalid opcode: 0000 [#1] PREEMPT DEBUG_PAGEALLOC
> > [ 235.250869] last sysfs file: /sys/block/ram9/range
> > [ 235.250998] Modules linked in:
> > [ 235.251037]
> > [ 235.251037] Pid: 5352, comm: mount Not tainted
> > (2.6.29-rc2-00021-gd84d31c #216) System Name
> > [ 235.251037] EIP: 0060:[<c0182af1>] EFLAGS: 00010246 CPU: 0
> > [ 235.251037] EIP is at vmap_page_range+0x19/0x112
> > [ 235.251037] EAX: d1000000 EBX: d1000000 ECX: 00000163 EDX: d1000000
> > [ 235.251037] ESI: 00000003 EDI: d1000000 EBP: cbbd2c08 ESP: cbbd2be8
> > [ 235.251037] DS: 007b ES: 007b FS: 0000 GS: 0033 SS: 0068
> > [ 235.251037] Process mount (pid: 5352, ti=cbbd2000 task=cbb85b00
> > task.ti=cbbd2000)
> > [ 235.251037] Stack:
> > [ 235.251037] 00000246 cbb85b00 00000163 c01414cf cbbd2c0c d1000000
> > 00000003 cba0f810
> > [ 235.251037] cbbd2c40 c018367c c848e280 00100000 00000000 c848e280
> > 00000000 00000014
> > [ 235.251037] d1000000 cba0f944 00000000 c848e160 00000000 c848e160
> > cbbd2c54 c03b2e1e
> > [ 235.251037] Call Trace:
> > [ 235.251037] [<c01414cf>] ? trace_hardirqs_on+0xb/0xd
> > [ 235.251037] [<c018367c>] ? vm_map_ram+0x36e/0x38a
> > [ 235.251037] [<c03b2e1e>] ? _xfs_buf_map_pages+0x42/0x6d
> > [ 235.251037] [<c03b3773>] ? xfs_buf_get_noaddr+0xbc/0x11f
> > [ 235.251037] [<c03a2406>] ? xlog_get_bp+0x5a/0x5d
> > [ 235.251037] [<c03a28fa>] ? xlog_find_verify_log_record+0x26/0x208
> > [ 235.251037] [<c03a3521>] ? xlog_find_zeroed+0x1d6/0x214
> > [ 235.251037] [<c03a3584>] ? xlog_find_head+0x25/0x358
> .....
>
> Ok, that's crashing in the new vmap code. It might take a couple
> of days before I get a chance to look at this, but I've cc'd Nick Piggin
> in case he has a chance to look at it before that. It's probably
> an XFS bug, anyway.
>
> Cheers,
>
> Dave.
>
> --
> Dave Chinner
> [email protected]
> --
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to [email protected]
> More majordomo info at http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at http://www.tux.org/lkml/
>
--
Dave Chinner
[email protected]
* Pavel Machek ([email protected]) wrote:
> On Tue 2009-01-20 18:34:55, Eric Sesterhenn wrote:
> > * Dave Chinner ([email protected]) wrote:
> > > On Tue, Jan 20, 2009 at 11:15:03AM +0100, Eric Sesterhenn wrote:
> > > > * Dave Chinner ([email protected]) wrote:
> > > > > On Tue, Jan 20, 2009 at 07:31:50AM +0100, Eric Sesterhenn wrote:
> > > > > > * Pavel Machek ([email protected]) wrote:
> > > > > > > Does ext2/3 and vfat survive that kind of attacks? Those are 'in
> > > > > > > production' and should survive it...
> > > > > >
> > > > > > I regularly (once or twice a week) test 100 corrupted images of
> > > > > > vfat, udf, msdos, swap, iso9660, ext2, ext3, ext4, minix, bfs, befs,
> > > > > > hfs, hfs+, qnx4, affs and cramfs on each of my two test machines.
> > > > >
> > > > > Any reason you are not testing XFS in that set?
> > > >
> > > > So far the responses from xfs folks have been disappointing, if you are
> > > > interested in bugreports i can send you some.
> > >
> > > Sure I am. It would be good if you could start testing XFS along
> > > with all the other filesystems and report anything you find.
> >
> > Ok, i wont report stuff with only xfs-internal backtraces from
> > xfs_error_report() or are they interesting to you?
> >
> > This occurs during mount, box is dead afterwards
> > Image can be found here :
> > http://www.cccmz.de/~snakebyte/xfs.11.img.bz2
> > I see this every ~10 images, which makes further testing hard :)
>
> BTW have you considered trying to run fsck.* on such images? I had lot
> of fun with e2fsck/e3fsck/fsck.vfat.
Not yet, but if one assumes the people writing the userspace code are
the same people writing the kernel code, and that they probably make the
same errors in both versions, this might be interesting.
For userspace a more advanced approach than random bit flipping is
possible with tools like bunny (http://code.google.com/p/bunny-the-fuzzer/).
Bunny instruments the code and then tries to change the input file
based on the feedback it gets from the instrumentation. I fired up
an instance testing ext2; let's see if it finishes by this evening :-)
Maybe someone is brave enough to test this approach with User Mode
Linux... :)
Greetings, Eric
On Tue, Jan 20, 2009 at 4:51 PM, Eric Sesterhenn <[email protected]> wrote:
> I already tested squashfs. One issue is basically a problem with
> the zlib-api for which i just posted a patch here
> http://marc.info/?t=123212807300003&r=1&w=2
>
Thanks for testing Squashfs. I've not ignored your emails, but I've
been busy job hunting, and so have not had time to look into this
until now.
I hardened Squashfs against fsfuzzer back in November 2006 (remember
the month of kernel bugs, or MOKB, which highlighted a number of
issues with Squashfs). Your testing has thrown up a regression that I
inadvertently put in last month!
> The other is an overwritten redzone (also reported in this thread
> http://marc.info/?l=linux-fsdevel&m=123212794425497&w=2)
> Looks like a length parameter is passed to squashfs_read_data
> which is bigger than ((msblk->block_size >> msblk->devblksize_log2) +
> 1), so the kcalloced buffer gets overwritten later.
As part of the mainlining effort I changed Squashfs to allocate
buffers in 4K page sizes rather than use vmalloc()ed large buffers. As
far as zlib goes, it means zlib_inflate now decompresses into a
sequence of 4K buffers rather than one large buffer. What this means
is that zlib_inflate is called repeatedly, moving to the next 4K page
whenever zlib_inflate asks for another buffer (stream.avail_out == 0).
Your testing has thrown up the case where zlib_inflate asks for
too many output buffers, i.e. it has returned with Z_OK,
stream.avail_in != 0 (more input data to be processed) and
stream.avail_out == 0 (I'd like another output buffer), but it has
already consumed all the output buffers. This isn't checked (the code
assumes zlib will do the right thing on corrupt data and bomb out). My
guess is either zlib_inflate is getting confused by corrupt data, or
fsfuzzer gets lucky sometimes and corrupts the filesystem to point to
another valid but larger compressed block (i.e. in your test
filesystems the 4K datablock is being corrupted to point to an 8K
metadata block).
This ultimately leads to an oops in zlib_inflate where it has been
passed a bogus or NULL stream.next_out pointer.
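Roughly, the page-at-a-time loop with the missing check added would look
like this (a sketch only, not the actual Squashfs code; the function and
variable names are illustrative, and the stream is assumed to be already
inflateInit()ed with next_in/avail_in set):

#include <linux/zlib.h>
#include <linux/pagemap.h>

/*
 * Sketch of the inflate loop described above, with the missing bounds
 * check: if zlib still wants output space but every allocated 4K page
 * has already been handed out, treat the block as corrupt instead of
 * feeding zlib a bogus next_out pointer.
 */
static int sketch_inflate_block(z_stream *stream, void **buffer, int pages)
{
	int page = 0, zlib_err;

	do {
		if (stream->avail_out == 0) {
			if (page >= pages)
				return -EIO;	/* corrupt data: too much output */
			stream->next_out = buffer[page++];
			stream->avail_out = PAGE_CACHE_SIZE;
		}
		zlib_err = zlib_inflate(stream, Z_SYNC_FLUSH);
	} while (zlib_err == Z_OK);

	return zlib_err == Z_STREAM_END ? 0 : -EIO;
}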
I'll create a patch and send it to you if you're happy to test it.
Phillip
On Wed, Jan 21, 2009 at 03:03:06PM +1100, Nick Piggin wrote:
> On Wednesday 21 January 2009 14:57:03 Dave Chinner wrote:
> > > [ 235.250167] ------------[ cut here ]------------
> > > [ 235.250354] kernel BUG at mm/vmalloc.c:164!
> > > [ 235.250478] invalid opcode: 0000 [#1] PREEMPT DEBUG_PAGEALLOC
> > > [ 235.250869] last sysfs file: /sys/block/ram9/range
> > > [ 235.250998] Modules linked in:
......
> > > [ 235.251037] Call Trace:
> > > [ 235.251037] [<c01414cf>] ? trace_hardirqs_on+0xb/0xd
> > > [ 235.251037] [<c018367c>] ? vm_map_ram+0x36e/0x38a
> > > [ 235.251037] [<c03b2e1e>] ? _xfs_buf_map_pages+0x42/0x6d
> > > [ 235.251037] [<c03b3773>] ? xfs_buf_get_noaddr+0xbc/0x11f
> > > [ 235.251037] [<c03a2406>] ? xlog_get_bp+0x5a/0x5d
> > > [ 235.251037] [<c03a28fa>] ? xlog_find_verify_log_record+0x26/0x208
> > > [ 235.251037] [<c03a3521>] ? xlog_find_zeroed+0x1d6/0x214
> > > [ 235.251037] [<c03a3584>] ? xlog_find_head+0x25/0x358
> >
> > .....
> >
> > Ok, that's crashing in the new vmap code. It might take a couple
> > of days before I get a chance to look at this, but I've cc'd Nick Piggin
> > in case he has a chance to look at it before that. It's probably
> > an XFS bug, anyway.
>
> Hmm, it is crashing in BUG_ON(addr >= end); where this could happen
> if XFS asks to map a really huge (or -ve) number of pages and wraps
> the range, or if vmap subsystem returns an address right near the
> end of the address range and addr+size wraps (which would be a bug
> in vmap of course, but I think maybe less likely).
It's a zero-length range, not a negative value. A debug XFS would
have assert-failed on it, but it was completely unchecked on
production builds. The following patch checks that the length of blocks
to build/read/write is valid. Instead of an oops, we get:
[ 1572.665001] XFS mounting filesystem loop0
[ 1572.666942] XFS: Invalid block length (0x0) given for buffer
[ 1572.667141] XFS: Log inconsistent (didn't find previous header)
[ 1572.667141] XFS: empty log check failed
[ 1572.667141] XFS: log mount/recovery failed: error 5
[ 1572.671487] XFS: log mount failed
Cheers,
Dave.
--
Dave Chinner
[email protected]
[XFS] Check buffer lengths in log recovery
Before trying to obtain, read or write a buffer,
check that the buffer length is actually valid. If
it is not valid, then something read in the recovery
process has been corrupted and we should abort
recovery.
Reported-by: Eric Sesterhenn <[email protected]>
---
fs/xfs/xfs_log_recover.c | 31 +++++++++++++++++++++++++------
1 files changed, 25 insertions(+), 6 deletions(-)
diff --git a/fs/xfs/xfs_log_recover.c b/fs/xfs/xfs_log_recover.c
index 35cca98..b1047de 100644
--- a/fs/xfs/xfs_log_recover.c
+++ b/fs/xfs/xfs_log_recover.c
@@ -70,16 +70,21 @@ STATIC void xlog_recover_check_summary(xlog_t *);
xfs_buf_t *
xlog_get_bp(
xlog_t *log,
- int num_bblks)
+ int nbblks)
{
- ASSERT(num_bblks > 0);
+ if (nbblks <= 0 || nbblks > log->l_logBBsize) {
+ xlog_warn("XFS: Invalid block length (0x%x) given for buffer", nbblks);
+ XFS_ERROR_REPORT("xlog_get_bp(1)",
+ XFS_ERRLEVEL_HIGH, log->l_mp);
+ return NULL;
+ }
if (log->l_sectbb_log) {
- if (num_bblks > 1)
- num_bblks += XLOG_SECTOR_ROUNDUP_BBCOUNT(log, 1);
- num_bblks = XLOG_SECTOR_ROUNDUP_BBCOUNT(log, num_bblks);
+ if (nbblks > 1)
+ nbblks += XLOG_SECTOR_ROUNDUP_BBCOUNT(log, 1);
+ nbblks = XLOG_SECTOR_ROUNDUP_BBCOUNT(log, nbblks);
}
- return xfs_buf_get_noaddr(BBTOB(num_bblks), log->l_mp->m_logdev_targp);
+ return xfs_buf_get_noaddr(BBTOB(nbblks), log->l_mp->m_logdev_targp);
}
void
@@ -102,6 +107,13 @@ xlog_bread(
{
int error;
+ if (nbblks <= 0 || nbblks > log->l_logBBsize) {
+ xlog_warn("XFS: Invalid block length (0x%x) given for buffer", nbblks);
+ XFS_ERROR_REPORT("xlog_bread(1)",
+ XFS_ERRLEVEL_HIGH, log->l_mp);
+ return EFSCORRUPTED;
+ }
+
if (log->l_sectbb_log) {
blk_no = XLOG_SECTOR_ROUNDDOWN_BLKNO(log, blk_no);
nbblks = XLOG_SECTOR_ROUNDUP_BBCOUNT(log, nbblks);
@@ -139,6 +151,13 @@ xlog_bwrite(
{
int error;
+ if (nbblks <= 0 || nbblks > log->l_logBBsize) {
+ xlog_warn("XFS: Invalid block length (0x%x) given for buffer", nbblks);
+ XFS_ERROR_REPORT("xlog_bwrite(1)",
+ XFS_ERRLEVEL_HIGH, log->l_mp);
+ return EFSCORRUPTED;
+ }
+
if (log->l_sectbb_log) {
blk_no = XLOG_SECTOR_ROUNDDOWN_BLKNO(log, blk_no);
nbblks = XLOG_SECTOR_ROUNDUP_BBCOUNT(log, nbblks);
On Jan 21, 2009, at 10:37 PM, Dave Chinner wrote:
> On Wed, Jan 21, 2009 at 03:03:06PM +1100, Nick Piggin wrote:
>> On Wednesday 21 January 2009 14:57:03 Dave Chinner wrote:
>>>> [ 235.250167] ------------[ cut here ]------------
>>>> [ 235.250354] kernel BUG at mm/vmalloc.c:164!
>>>> [ 235.250478] invalid opcode: 0000 [#1] PREEMPT DEBUG_PAGEALLOC
>>>> [ 235.250869] last sysfs file: /sys/block/ram9/range
>>>> [ 235.250998] Modules linked in:
> ......
>>>> [ 235.251037] Call Trace:
>>>> [ 235.251037] [<c01414cf>] ? trace_hardirqs_on+0xb/0xd
>>>> [ 235.251037] [<c018367c>] ? vm_map_ram+0x36e/0x38a
>>>> [ 235.251037] [<c03b2e1e>] ? _xfs_buf_map_pages+0x42/0x6d
>>>> [ 235.251037] [<c03b3773>] ? xfs_buf_get_noaddr+0xbc/0x11f
>>>> [ 235.251037] [<c03a2406>] ? xlog_get_bp+0x5a/0x5d
>>>> [ 235.251037] [<c03a28fa>] ? xlog_find_verify_log_record
>>>> +0x26/0x208
>>>> [ 235.251037] [<c03a3521>] ? xlog_find_zeroed+0x1d6/0x214
>>>> [ 235.251037] [<c03a3584>] ? xlog_find_head+0x25/0x358
>>>
>>> .....
>>>
>>> Ok, that's crashing in the new vmap code. It might take a couple
>>> of days before I get a chance to look at this, but I've cc'd Nick
>>> Piggin
>>> in case he has a chance to look at it before that. It's probably
>>> an XFS bug, anyway.
>>
>> Hmm, it is crashing in BUG_ON(addr >= end); where this could happen
>> if XFS asks to map a really huge (or -ve) number of pages and wraps
>> the range, or if vmap subsystem returns an address right near the
>> end of the address range and addr+size wraps (which would be a bug
>> in vmap of course, but I think maybe less likely).
>
> It's a zero length range, not a negative value. A debug XFS would
> have assert failed on it, but it was completely unchecked on
> production builds. The following patch checks the length of blocks
> to build/read/write for being valid. Instead of an oops, we get:
>
> [ 1572.665001] XFS mounting filesystem loop0
> [ 1572.666942] XFS: Invalid block length (0x0) given for buffer
> [ 1572.667141] XFS: Log inconsistent (didn't find previous header)
> [ 1572.667141] XFS: empty log check failed
> [ 1572.667141] XFS: log mount/recovery failed: error 5
> [ 1572.671487] XFS: log mount failed
>
> Cheers,
>
> Dave.
> --
> Dave Chinner
> [email protected]
>
> [XFS] Check buffer lengths in log recovery
>
> Before trying to obtain, read or write a buffer,
> check that the buffer length is actually valid. If
> it is not valid, then something read in the recovery
> process has been corrupted and we should abort
> recovery.
>
> Reported-by: Eric Sesterhenn <[email protected]>
Reviewed-by: Felix Blyakher <[email protected]>
> ---
> fs/xfs/xfs_log_recover.c | 31 +++++++++++++++++++++++++------
> 1 files changed, 25 insertions(+), 6 deletions(-)
>
> diff --git a/fs/xfs/xfs_log_recover.c b/fs/xfs/xfs_log_recover.c
> index 35cca98..b1047de 100644
> --- a/fs/xfs/xfs_log_recover.c
> +++ b/fs/xfs/xfs_log_recover.c
> @@ -70,16 +70,21 @@ STATIC void xlog_recover_check_summary(xlog_t *);
> xfs_buf_t *
> xlog_get_bp(
> xlog_t *log,
> - int num_bblks)
> + int nbblks)
> {
> - ASSERT(num_bblks > 0);
> + if (nbblks <= 0 || nbblks > log->l_logBBsize) {
> + xlog_warn("XFS: Invalid block length (0x%x) given for buffer",
> nbblks);
> + XFS_ERROR_REPORT("xlog_get_bp(1)",
> + XFS_ERRLEVEL_HIGH, log->l_mp);
> + return NULL;
> + }
>
> if (log->l_sectbb_log) {
> - if (num_bblks > 1)
> - num_bblks += XLOG_SECTOR_ROUNDUP_BBCOUNT(log, 1);
> - num_bblks = XLOG_SECTOR_ROUNDUP_BBCOUNT(log, num_bblks);
> + if (nbblks > 1)
> + nbblks += XLOG_SECTOR_ROUNDUP_BBCOUNT(log, 1);
> + nbblks = XLOG_SECTOR_ROUNDUP_BBCOUNT(log, nbblks);
> }
> - return xfs_buf_get_noaddr(BBTOB(num_bblks), log->l_mp-
> >m_logdev_targp);
> + return xfs_buf_get_noaddr(BBTOB(nbblks), log->l_mp->m_logdev_targp);
> }
>
> void
> @@ -102,6 +107,13 @@ xlog_bread(
> {
> int error;
>
> + if (nbblks <= 0 || nbblks > log->l_logBBsize) {
> + xlog_warn("XFS: Invalid block length (0x%x) given for buffer",
> nbblks);
> + XFS_ERROR_REPORT("xlog_bread(1)",
> + XFS_ERRLEVEL_HIGH, log->l_mp);
> + return EFSCORRUPTED;
> + }
> +
> if (log->l_sectbb_log) {
> blk_no = XLOG_SECTOR_ROUNDDOWN_BLKNO(log, blk_no);
> nbblks = XLOG_SECTOR_ROUNDUP_BBCOUNT(log, nbblks);
> @@ -139,6 +151,13 @@ xlog_bwrite(
> {
> int error;
>
> + if (nbblks <= 0 || nbblks > log->l_logBBsize) {
> + xlog_warn("XFS: Invalid block length (0x%x) given for buffer",
> nbblks);
> + XFS_ERROR_REPORT("xlog_bwrite(1)",
> + XFS_ERRLEVEL_HIGH, log->l_mp);
> + return EFSCORRUPTED;
> + }
> +
> if (log->l_sectbb_log) {
> blk_no = XLOG_SECTOR_ROUNDDOWN_BLKNO(log, blk_no);
> nbblks = XLOG_SECTOR_ROUNDUP_BBCOUNT(log, nbblks);
>
> _______________________________________________
> xfs mailing list
> [email protected]
> http://oss.sgi.com/mailman/listinfo/xfs
On Thu, Jan 22, 2009 at 03:37:47PM +1100, Dave Chinner wrote:
> xfs_buf_t *
> xlog_get_bp(
> xlog_t *log,
> - int num_bblks)
> + int nbblks)
Any reason for renaming this variable? That causes quite a bit of
churn.
> {
> - ASSERT(num_bblks > 0);
> + if (nbblks <= 0 || nbblks > log->l_logBBsize) {
> + xlog_warn("XFS: Invalid block length (0x%x) given for buffer", nbblks);
And doesn't prevent this line from needing a linebreak to stay under 80
characters :)
Except for these nitpicks it looks fine to me.
* Christoph Hellwig ([email protected]) wrote:
> On Thu, Jan 22, 2009 at 03:37:47PM +1100, Dave Chinner wrote:
> > xfs_buf_t *
> > xlog_get_bp(
> > xlog_t *log,
> > - int num_bblks)
> > + int nbblks)
>
> Any reason for reanming this variable? That causes quite a bit of
> churn.
>
> > {
> > - ASSERT(num_bblks > 0);
> > + if (nbblks <= 0 || nbblks > log->l_logBBsize) {
> > + xlog_warn("XFS: Invalid block length (0x%x) given for buffer", nbblks);
>
> And doesn't prevent this line from needing a linebreak to stay under 80
> characters :)
>
> Except for these nitpicks it looks fine to me.
Just want to report that it fixes the issue for me. Thanks,
I will continue XFS testing.
Greetings, Eric
* Christoph Hellwig ([email protected]) wrote:
> On Thu, Jan 22, 2009 at 03:37:47PM +1100, Dave Chinner wrote:
> > xfs_buf_t *
> > xlog_get_bp(
> > xlog_t *log,
> > - int num_bblks)
> > + int nbblks)
>
> Any reason for reanming this variable? That causes quite a bit of
> churn.
>
> > {
> > - ASSERT(num_bblks > 0);
> > + if (nbblks <= 0 || nbblks > log->l_logBBsize) {
> > + xlog_warn("XFS: Invalid block length (0x%x) given for buffer", nbblks);
>
> And doesn't prevent this line from needing a linebreak to stay under 80
> characters :)
>
> Except for these nitpicks it looks fine to me.
Using the image at http://www.cccmz.de/~snakebyte/xfs.254.img.bz2
I was able to produce a pretty similar error with the patch applied:
[ 227.138277] XFS mounting filesystem loop0
[ 227.142703] ------------[ cut here ]------------
[ 227.142881] kernel BUG at mm/vmalloc.c:164!
[ 227.143009] invalid opcode: 0000 [#1] PREEMPT DEBUG_PAGEALLOC
[ 227.143040] last sysfs file: /sys/block/ram9/range
[ 227.143040] Modules linked in:
[ 227.143040]
[ 227.143040] Pid: 5021, comm: mount Tainted: G W
(2.6.29-rc2-00021-gd84d31c-dirty #221) System Name
[ 227.143040] EIP: 0060:[<c0182af1>] EFLAGS: 00010246 CPU: 0
[ 227.143040] EIP is at vmap_page_range+0x19/0x112
[ 227.143040] EAX: d1000000 EBX: d1000000 ECX: 00000163 EDX: d1000000
[ 227.143040] ESI: 00000003 EDI: d1000000 EBP: cb774c48 ESP: cb774c28
[ 227.143040] DS: 007b ES: 007b FS: 0000 GS: 0033 SS: 0068
[ 227.143040] Process mount (pid: 5021, ti=cb774000 task=cb760000
task.ti=cb774000)
[ 227.143040] Stack:
[ 227.143040] 00000246 cb760000 00000163 c01414cf cb774c4c d1000000
00000003 cb66ba40
[ 227.143040] cb774c80 c018367c cb7f7ee0 00100000 00000000 cb7f7ee0
00000000 00000014
[ 227.143040] d1000000 cb66bb74 00000000 cb7f7dc0 00000000 cb7f7dc0
cb774c94 c03b2ed6
[ 227.143040] Call Trace:
[ 227.143040] [<c01414cf>] ? trace_hardirqs_on+0xb/0xd
[ 227.143040] [<c018367c>] ? vm_map_ram+0x36e/0x38a
[ 227.143040] [<c03b2ed6>] ? _xfs_buf_map_pages+0x42/0x6d
[ 227.143040] [<c03b382b>] ? xfs_buf_get_noaddr+0xbc/0x11f
[ 227.143040] [<c03a24bc>] ? xlog_get_bp+0x92/0x97
[ 227.143040] [<c03a343a>] ? xlog_find_zeroed+0x37/0x214
[ 227.143040] [<c0141381>] ? trace_hardirqs_on_caller+0x17/0x15a
[ 227.143040] [<c07b07bc>] ? _spin_unlock_irq+0x32/0x47
[ 227.143040] [<c03a363c>] ? xlog_find_head+0x25/0x358
[ 227.143040] [<c011ca1f>] ? __enqueue_entity+0xa1/0xa9
[ 227.143040] [<c03a398a>] ? xlog_find_tail+0x1b/0x3fa
[ 227.143040] [<c01414cf>] ? trace_hardirqs_on+0xb/0xd
[ 227.143040] [<c07b0768>] ? _spin_unlock_irqrestore+0x3b/0x5d
[ 227.143040] [<c07b0774>] ? _spin_unlock_irqrestore+0x47/0x5d
[ 227.143040] [<c01209c6>] ? try_to_wake_up+0x12f/0x13a
[ 227.143040] [<c03a570c>] ? xlog_recover+0x19/0x81
[ 227.143040] [<c03aab7b>] ? xfs_trans_ail_init+0x4b/0x64
[ 227.143040] [<c039f97d>] ? xfs_log_mount+0xef/0x13d
[ 227.143040] [<c03a720a>] ? xfs_mountfs+0x30d/0x5b8
[ 227.143040] [<c05061c1>] ? __debug_object_init+0x28b/0x293
[ 227.143040] [<c012b3b5>] ? init_timer+0x1c/0x1f
[ 227.143040] [<c03a7b59>] ? xfs_mru_cache_create+0x114/0x14e
[ 227.143040] [<c03ba2bd>] ? xfs_fs_fill_super+0x196/0x2e5
[ 227.143040] [<c01919c5>] ? get_sb_bdev+0xf1/0x13f
[ 227.143040] [<c0178896>] ? kstrdup+0x2f/0x51
[ 227.143040] [<c03b88d3>] ? xfs_fs_get_sb+0x18/0x1a
[ 227.143040] [<c03ba127>] ? xfs_fs_fill_super+0x0/0x2e5
[ 227.143040] [<c019159c>] ? vfs_kern_mount+0x40/0x7b
[ 227.143040] [<c0191625>] ? do_kern_mount+0x37/0xbf
[ 227.143040] [<c01a2cb0>] ? do_mount+0x5cc/0x609
[ 227.143040] [<c07b093b>] ? lock_kernel+0x19/0x8c
[ 227.143040] [<c01a2d43>] ? sys_mount+0x56/0xa0
[ 227.143040] [<c01a2d56>] ? sys_mount+0x69/0xa0
[ 227.143040] [<c0102ea1>] ? sysenter_do_call+0x12/0x31
[ 227.143040] Code: 0f 0b eb fe ba 01 00 00 00 89 c8 e8 02 ff ff ff 5d
c3 55 89 e5 57 56 53 83 ec 14 0f 1f 44 00 00 39 d0 89 c3 89 d7 89 4d e8
72 04 <0f> 0b eb fe c1 e8 16 8d 34 85 00 00 00 00 03 35 84 91 a3 c0 8d
[ 227.143040] EIP: [<c0182af1>] vmap_page_range+0x19/0x112 SS:ESP
0068:cb774c28
[ 227.161166] ---[ end trace a7919e7f17c0a727 ]---
[ 553.569010] general protection fault: 0000 [#2] PREEMPT
DEBUG_PAGEALLOC
[ 553.569043] last sysfs file: /sys/block/ram9/range
[ 553.569043] Modules linked in:
[ 553.569043]
[ 553.569043] Pid: 328, comm: kswapd0 Tainted: G D W
(2.6.29-rc2-00021-gd84d31c-dirty #221) System Name
[ 553.569043] EIP: 0060:[<c0181bb6>] EFLAGS: 00010282 CPU: 0
[ 553.569043] EIP is at page_referenced+0x85/0xdf
[ 553.569043] EAX: 00000000 EBX: c12b5f20 ECX: cf168e60 EDX: ffffffff
[ 553.569043] ESI: ffffffcb EDI: cb665580 EBP: cf168e70 ESP: cf168e54
[ 553.569043] DS: 007b ES: 007b FS: 0000 GS: 0000 SS: 0068
[ 553.569043] Process kswapd0 (pid: 328, ti=cf168000 task=cf090000
task.ti=cf168000)
[ 553.569043] Stack:
[ 553.569043] cb6655a4 00000000 00000000 00000001 c12b5f20 00000000
c0a88964 cf168ef4
[ 553.569043] c0174049 cf168edc cf168f70 00000000 c0a88c00 c0a88bb0
cf090000 00000005
[ 553.569043] c1074578 cf168ea8 cf168ed8 cf168eb4 c013f6f2 c0a88bb0
cf090000 ffffffff
[ 553.569043] Call Trace:
[ 553.569043] [<c0174049>] ? shrink_active_list+0x12d/0x2e3
[ 553.569043] [<c013f6f2>] ? print_lock_contention_bug+0x11/0xb2
[ 553.569043] [<c0174c5f>] ? shrink_zone+0x95/0x24b
[ 553.569043] [<c01414cf>] ? trace_hardirqs_on+0xb/0xd
[ 553.569043] [<c07b07bc>] ? _spin_unlock_irq+0x32/0x47
[ 553.569043] [<c0174e00>] ? shrink_zone+0x236/0x24b
[ 553.569043] [<c0175546>] ? kswapd+0x2d1/0x443
[ 553.569043] [<c01735d1>] ? isolate_pages_global+0x0/0x1d2
[ 553.569043] [<c013416c>] ? autoremove_wake_function+0x0/0x35
[ 553.569043] [<c0175275>] ? kswapd+0x0/0x443
[ 553.569043] [<c0134094>] ? kthread+0x3e/0x66
[ 553.569043] [<c0134056>] ? kthread+0x0/0x66
[ 553.569043] [<c01035a7>] ? kernel_thread_helper+0x7/0x10
[ 553.569043] Code: 45 f0 8d 47 24 8b 77 24 89 45 e4 83 ee 34 eb 1b 8d
4d f0 89 f2 89 d8 e8 0d f7 ff ff 01 45 ec 83 7d f0 00 74 15 8b 76 34 83
ee 34 <8b> 46 34 0f 18 00 90 8d 46 34 3b 45 e4 75 d6 89 f8 e8 51 f4 ff
[ 553.569043] EIP: [<c0181bb6>] page_referenced+0x85/0xdf SS:ESP
0068:cf168e54
[ 553.582779] ---[ end trace a7919e7f17c0a728 ]---
[ 553.582900] note: kswapd0[328] exited with preempt_count 1
[ 697.933024] BUG: soft lockup - CPU#0 stuck for 61s! [make:14851]
[ 697.933024] Modules linked in:
[ 697.933024] irq event stamp: 0
[ 697.933024] hardirqs last enabled at (0): [<(null)>] (null)
[ 697.933024] hardirqs last disabled at (0): [<c01221f9>]
copy_process+0x31f/0xfef
[ 697.933024] softirqs last enabled at (0): [<c01221f9>]
copy_process+0x31f/0xfef
[ 697.933024] softirqs last disabled at (0): [<(null)>] (null)
[ 697.933024]
[ 697.933024] Pid: 14851, comm: make Tainted: G D W
(2.6.29-rc2-00021-gd84d31c-dirty #221) System Name
[ 697.933024] EIP: 0060:[<c07b2628>] EFLAGS: 00000286 CPU: 0
[ 697.933024] EIP is at add_preempt_count+0x0/0xde
[ 697.933024] EAX: 00000001 EBX: 00000001 ECX: c018100f EDX: 00000096
[ 697.933024] ESI: 166556bb EDI: 00000001 EBP: c1c83bd0 ESP: c1c83bbc
[ 697.933024] DS: 007b ES: 007b FS: 0000 GS: 0033 SS: 0068
[ 697.933024] CR0: 8005003b CR2: 400c3044 CR3: 01c87000 CR4: 000006d0
[ 697.933024] DR0: c0122f08 DR1: 00000000 DR2: 00000000 DR3: 00000000
[ 697.933024] DR6: ffff0ff0 DR7: 00000400
[ 697.933024] Call Trace:
[ 697.933024] [<c04f87fa>] ? delay_tsc+0x1a/0x88
[ 697.933024] [<c04f8778>] __delay+0xe/0x10
[ 697.933024] [<c0505465>] _raw_spin_lock+0x83/0xcb
[ 697.933024] [<c07b00da>] _spin_lock+0x4c/0x5d
[ 697.933024] [<c018100f>] page_lock_anon_vma+0x29/0x37
[ 697.933024] [<c0181b79>] page_referenced+0x48/0xdf
[ 697.933024] [<c07b07bc>] ? _spin_unlock_irq+0x32/0x47
[ 697.933024] [<c0173ddb>] ? __remove_mapping+0xb8/0xcd
[ 697.933024] [<c017433f>] shrink_page_list+0x140/0x5ff
[ 697.933024] [<c017499c>] shrink_list+0x19e/0x3cc
[ 697.933024] [<c0174d9f>] shrink_zone+0x1d5/0x24b
[ 697.933024] [<c0175121>] try_to_free_pages+0x1b4/0x29e
[ 697.933024] [<c01735d1>] ? isolate_pages_global+0x0/0x1d2
[ 697.933024] [<c017022f>] __alloc_pages_internal+0x247/0x3af
[ 697.933024] [<c018b4cd>] ? __slab_alloc+0x181/0x50b
[ 697.933024] [<c018b50d>] __slab_alloc+0x1c1/0x50b
[ 697.933024] [<c018a797>] ? check_bytes_and_report+0x26/0x94
[ 697.933024] [<c018bb5a>] kmem_cache_alloc+0x7c/0xea
[ 697.933024] [<c0121f43>] ? copy_process+0x69/0xfef
[ 697.933024] [<c0121f43>] ? copy_process+0x69/0xfef
[ 697.933024] [<c0121f43>] copy_process+0x69/0xfef
[ 697.933024] [<c013f6f2>] ? print_lock_contention_bug+0x11/0xb2
[ 697.933024] [<c018de9a>] ? fd_install+0x28/0x55
[ 697.933024] [<c0123029>] do_fork+0x121/0x2b8
[ 697.933024] [<c04f9195>] ? copy_to_user+0x38/0x43
[ 697.933024] [<c04f8b60>] ? trace_hardirqs_on_thunk+0xc/0x10
[ 697.933024] [<c0102ecf>] ? sysenter_exit+0xf/0x16
[ 697.933024] [<c01015a2>] sys_vfork+0x1e/0x20
[ 697.933024] [<c0102f7a>] syscall_call+0x7/0xb
On Thu, Jan 22, 2009 at 01:11:58AM -0500, Christoph Hellwig wrote:
> On Thu, Jan 22, 2009 at 03:37:47PM +1100, Dave Chinner wrote:
> > xfs_buf_t *
> > xlog_get_bp(
> > xlog_t *log,
> > - int num_bblks)
> > + int nbblks)
>
> Any reason for renaming this variable? That causes quite a bit of
> churn.
Just so it uses the same variable name as the other I/O functions
around it. It's kind of strange to have one function use num_bblks
while the other four or so related functions use nbblks....
> > {
> > - ASSERT(num_bblks > 0);
> > + if (nbblks <= 0 || nbblks > log->l_logBBsize) {
> > + xlog_warn("XFS: Invalid block length (0x%x) given for buffer", nbblks);
>
> And doesn't prevent this line from needing a linebreak to stay under 80
> characters :)
I'll respin it to fix this.
Cheers,
Dave.
--
Dave Chinner
[email protected]
On Thu, Jan 22, 2009 at 11:06:48AM +0100, Eric Sesterhenn wrote:
> * Christoph Hellwig ([email protected]) wrote:
> > On Thu, Jan 22, 2009 at 03:37:47PM +1100, Dave Chinner wrote:
> > > xfs_buf_t *
> > > xlog_get_bp(
> > > xlog_t *log,
> > > - int num_bblks)
> > > + int nbblks)
> >
> > Any reason for renaming this variable? That causes quite a bit of
> > churn.
> >
> > > {
> > > - ASSERT(num_bblks > 0);
> > > + if (nbblks <= 0 || nbblks > log->l_logBBsize) {
> > > + xlog_warn("XFS: Invalid block length (0x%x) given for buffer", nbblks);
> >
> > And doesn't prevent this line from needing a linebreak to stay under 80
> > characters :)
> >
> > Except for these nitpicks it looks fine to me.
>
> Using the image at http://www.cccmz.de/~snakebyte/xfs.254.img.bz2
> I was able to produce a pretty similar error with the patch applied
Different problem, obviously. ;)
I'll have a look at this later today....
Cheers,
Dave.
--
Dave Chinner
[email protected]
On Thu, Jan 22, 2009 at 01:11:58AM -0500, Christoph Hellwig wrote:
> On Thu, Jan 22, 2009 at 03:37:47PM +1100, Dave Chinner wrote:
> > xfs_buf_t *
> > xlog_get_bp(
> > xlog_t *log,
> > - int num_bblks)
> > + int nbblks)
>
> Any reason for renaming this variable? That causes quite a bit of
> churn.
>
> > {
> > - ASSERT(num_bblks > 0);
> > + if (nbblks <= 0 || nbblks > log->l_logBBsize) {
> > + xlog_warn("XFS: Invalid block length (0x%x) given for buffer", nbblks);
>
> And doesn't prevent this line from needing a linebreak to stay under 80
> characters :)
And now with line breaks.
------
[XFS] Check buffer lengths in log recovery
Before trying to obtain, read or write a buffer,
check that the buffer length is actually valid. If
it is not valid, then something read in the recovery
process has been corrupted and we should abort
recovery.
Reported-by: Eric Sesterhenn <[email protected]>
Signed-off-by: Dave Chinner <[email protected]>
Tested-by: Eric Sesterhenn <[email protected]>
---
fs/xfs/xfs_log_recover.c | 34 ++++++++++++++++++++++++++++------
1 files changed, 28 insertions(+), 6 deletions(-)
diff --git a/fs/xfs/xfs_log_recover.c b/fs/xfs/xfs_log_recover.c
index 35cca98..a37e4aa 100644
--- a/fs/xfs/xfs_log_recover.c
+++ b/fs/xfs/xfs_log_recover.c
@@ -70,16 +70,22 @@ STATIC void xlog_recover_check_summary(xlog_t *);
xfs_buf_t *
xlog_get_bp(
xlog_t *log,
- int num_bblks)
+ int nbblks)
{
- ASSERT(num_bblks > 0);
+ if (nbblks <= 0 || nbblks > log->l_logBBsize) {
+ xlog_warn("XFS: Invalid block length (0x%x) given for buffer",
+ nbblks);
+ XFS_ERROR_REPORT("xlog_get_bp(1)",
+ XFS_ERRLEVEL_HIGH, log->l_mp);
+ return NULL;
+ }
if (log->l_sectbb_log) {
- if (num_bblks > 1)
- num_bblks += XLOG_SECTOR_ROUNDUP_BBCOUNT(log, 1);
- num_bblks = XLOG_SECTOR_ROUNDUP_BBCOUNT(log, num_bblks);
+ if (nbblks > 1)
+ nbblks += XLOG_SECTOR_ROUNDUP_BBCOUNT(log, 1);
+ nbblks = XLOG_SECTOR_ROUNDUP_BBCOUNT(log, nbblks);
}
- return xfs_buf_get_noaddr(BBTOB(num_bblks), log->l_mp->m_logdev_targp);
+ return xfs_buf_get_noaddr(BBTOB(nbblks), log->l_mp->m_logdev_targp);
}
void
@@ -102,6 +108,14 @@ xlog_bread(
{
int error;
+ if (nbblks <= 0 || nbblks > log->l_logBBsize) {
+ xlog_warn("XFS: Invalid block length (0x%x) given for buffer",
+ nbblks);
+ XFS_ERROR_REPORT("xlog_bread(1)",
+ XFS_ERRLEVEL_HIGH, log->l_mp);
+ return EFSCORRUPTED;
+ }
+
if (log->l_sectbb_log) {
blk_no = XLOG_SECTOR_ROUNDDOWN_BLKNO(log, blk_no);
nbblks = XLOG_SECTOR_ROUNDUP_BBCOUNT(log, nbblks);
@@ -139,6 +153,14 @@ xlog_bwrite(
{
int error;
+ if (nbblks <= 0 || nbblks > log->l_logBBsize) {
+ xlog_warn("XFS: Invalid block length (0x%x) given for buffer",
+ nbblks);
+ XFS_ERROR_REPORT("xlog_bwrite(1)",
+ XFS_ERRLEVEL_HIGH, log->l_mp);
+ return EFSCORRUPTED;
+ }
+
if (log->l_sectbb_log) {
blk_no = XLOG_SECTOR_ROUNDDOWN_BLKNO(log, blk_no);
nbblks = XLOG_SECTOR_ROUNDUP_BBCOUNT(log, nbblks);
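The same bounds test appears three times in the patch above. As a
minimal sketch, the check reads like this in isolation; the helper name
is hypothetical and is not something the patch itself introduces:

/*
 * Sketch only: hypothetical helper capturing the bounds test added in
 * xlog_get_bp(), xlog_bread() and xlog_bwrite().  A block count read
 * during log recovery must be positive and no larger than the log
 * itself before it is used to size a buffer or drive I/O.
 */
static inline int
xlog_buf_bbcount_valid(
	xlog_t	*log,
	int	bbcount)
{
	return bbcount > 0 && bbcount <= log->l_logBBsize;
}

Each call site then rejects the request, returning EFSCORRUPTED (or a
NULL buffer in the xlog_get_bp() case), exactly as the open-coded
checks in the patch do.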
> [XFS] Check buffer lengths in log recovery
>
> Before trying to obtain, read or write a buffer,
> check that the buffer length is actually valid. If
> it is not valid, then something read in the recovery
> process has been corrupted and we should abort
> recovery.
>
> Reported-by: Eric Sesterhenn <[email protected]>
> Signed-off-by: Dave Chinner <[email protected]>
> Tested-by: Eric Sesterhenn <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
On Fri, Jan 23, 2009 at 10:37:17AM +1100, Dave Chinner wrote:
> On Thu, Jan 22, 2009 at 11:06:48AM +0100, Eric Sesterhenn wrote:
> > * Christoph Hellwig ([email protected]) wrote:
> > > On Thu, Jan 22, 2009 at 03:37:47PM +1100, Dave Chinner wrote:
> > > > xfs_buf_t *
> > > > xlog_get_bp(
> > > > xlog_t *log,
> > > > - int num_bblks)
> > > > + int nbblks)
> > >
> > > Any reason for renaming this variable? That causes quite a bit of
> > > churn.
> > >
> > > > {
> > > > - ASSERT(num_bblks > 0);
> > > > + if (nbblks <= 0 || nbblks > log->l_logBBsize) {
> > > > + xlog_warn("XFS: Invalid block length (0x%x) given for buffer", nbblks);
> > >
> > > And doesn't prevent this line from needing a linebreak to stay under 80
> > > characters :)
> > >
> > > Except for these nitpicks it looks fine to me.
> >
> > Using the image at http://www.cccmz.de/~snakebyte/xfs.254.img.bz2
> > I was able to produce a pretty similar error with the patch applied
>
> Different problem, obviously. ;)
>
> I'll have a look at this later today....
One word: Ouch.
Basically, the corruption that was introduced adds random feature bits
to the superblock for features that aren't actually in use. Hence,
instead of having valid superblock fields for each of those features,
the fields are zero and strange stuff happens.
What is really stupid is that the fields are often checked, but only
by ASSERT(), not by production code, so a debug kernel will pick up
the problem and panic immediately, while a production kernel will
continue onwards until it panics somewhere else. This is not going to
be a small patch.....
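As a concrete illustration of the kind of change this implies, the
pattern would look roughly like the sketch below; the field name
sb_featurefield and the function are made up for this example and are
not taken from the XFS source:

/*
 * Hypothetical sketch: promote a debug-only sanity check on a
 * superblock feature field into a production corruption check.
 */
STATIC int
xfs_validate_feature_field(
	xfs_mount_t	*mp,
	xfs_sb_t	*sbp)
{
	/* Current behaviour: compiles away unless CONFIG_XFS_DEBUG is set. */
	ASSERT(sbp->sb_featurefield != 0);

	/* Production check: fail the mount instead of crashing later. */
	if (sbp->sb_featurefield == 0) {
		XFS_ERROR_REPORT("xfs_validate_feature_field",
				 XFS_ERRLEVEL_HIGH, mp);
		return EFSCORRUPTED;
	}
	return 0;
}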
Cheers,
Dave.
--
Dave Chinner
[email protected]
On Jan 22, 2009, at 6:06 PM, Christoph Hellwig wrote:
>> [XFS] Check buffer lengths in log recovery
>>
>> Before trying to obtain, read or write a buffer,
>> check that the buffer length is actually valid. If
>> it is not valid, then something read in the recovery
>> process has been corrupted and we should abort
>> recovery.
>>
>> Reported-by: Eric Sesterhenn <[email protected]>
>> Signed-off-by: Dave Chinner <[email protected]>
>> Tested-by: Eric Sesterhenn <[email protected]>
>
>
> Reviewed-by: Christoph Hellwig <[email protected]>
Reviewed-by: Felix Blyakher <[email protected]>
On Wed 2009-01-21 15:00:42, Dave Chinner wrote:
> On Tue, Jan 20, 2009 at 11:20:19PM +0100, Pavel Machek wrote:
> > On Tue 2009-01-20 08:28:29, Christoph Hellwig wrote:
> > > On Tue, Jan 20, 2009 at 11:59:44PM +1100, Dave Chinner wrote:
> > > > > So far the responses from xfs folks have been disappointing; if you are
> > > > > interested in bug reports I can send you some.
> > > >
> > > > Sure I am. It would be good if you could start testing XFS along
> > > > with all the other filesystems and report anything you find.
> > >
> > > I think that was the issue with the debug builds. If you do this
> > > testing always do it without CONFIG_XFS_DEBUG set as with that option
> > > we intentionally panic on detected disk corruptions.
> >
> > Uhuh, *_DEBUG options are not supposed to make kernel less
> > stable/robust. Should that crashing functionality be guarded with
> > command line option or something? ext2 has errors=panic mount
> > option...
>
> No, it's a debugging option that is described as:
>
> "Say N unless you are an XFS developer, or you play one on TV."
>
> Seriously, if you aren't trying to develop XFS stuff then *don't turn it
> on*.
What about this, then?
Pavel
---
Warn in documentation that XFS_DEBUG panics machines.
Signed-off-by: Pavel Machek <[email protected]>
diff --git a/fs/xfs/Kconfig b/fs/xfs/Kconfig
index 3f53dd1..55c98eb 100644
--- a/fs/xfs/Kconfig
+++ b/fs/xfs/Kconfig
@@ -76,4 +76,7 @@ config XFS_DEBUG
Note that the resulting code will be HUGE and SLOW, and probably
not useful unless you are debugging a particular problem.
+ Turning this option on will result in the kernel panicking any time
+ it detects on-disk corruption.
+
Say N unless you are an XFS developer, or you play one on TV.
--
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html
On Mon, Jan 26, 2009 at 05:27:11PM +0100, Pavel Machek wrote:
> On Wed 2009-01-21 15:00:42, Dave Chinner wrote:
> > On Tue, Jan 20, 2009 at 11:20:19PM +0100, Pavel Machek wrote:
> > > On Tue 2009-01-20 08:28:29, Christoph Hellwig wrote:
> > > > I think that was the issue with the debug builds. If you do this
> > > > testing always do it without CONFIG_XFS_DEBUG set as with that option
> > > > we intentionally panic on detected disk corruptions.
> > >
> > > Uhuh, *_DEBUG options are not supposed to make kernel less
> > > stable/robust. Should that crashing functionality be guarded with
> > > command line option or something? ext2 has errors=panic mount
> > > option...
> >
> > No, it's a debugging option that is described as:
> >
> > "Say N unless you are an XFS developer, or you play one on TV."
> >
> > Seriously, if you aren't trying to develop XFS stuff then *don't turn it
> > on*.
>
> What about this, then?
....
> + Turning this option on will result in the kernel panicking any time
> + it detects on-disk corruption.
Thin end of the wedge. There are a couple of thousand conditions that
CONFIG_XFS_DEBUG introduces kernel panics on:
$ grep -r ASSERT fs/xfs |wc -l
2095
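For context, each of those ASSERT()s only has teeth when the debug
option is enabled. A simplified sketch of the mechanism, not the
literal XFS definition, looks like this:

/*
 * Simplified sketch of a debug-only assertion macro.  With
 * CONFIG_XFS_DEBUG a failed check BUG()s the kernel; without it the
 * check compiles away and the code carries on with whatever bogus
 * value triggered it.
 */
#ifdef CONFIG_XFS_DEBUG
#define ASSERT(expr)						\
	do {							\
		if (unlikely(!(expr))) {			\
			printk(KERN_CRIT			\
			       "Assertion failed: %s, file: %s, line: %d\n", \
			       #expr, __FILE__, __LINE__);	\
			BUG();					\
		}						\
	} while (0)
#else
#define ASSERT(expr)	((void)0)
#endif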
CONFIG_*_DEBUG means including *debug* code to help developers, and
that includes adding additional failure tests into the kernel. Besides,
which bit of "don't turn it on unless you are an XFS developer"
don't you understand?
Cheers,
Dave.
--
Dave Chinner
[email protected]