I have updated the mballoc patches. The same can be found at
http://www.radian.org/~kvaneesh/ext4/patch-series/
The series file is
# This series applies on GIT commit b07d68b5ca4d55a16fab223d63d5fb36f89ff42f
ext4-journal_chksum-2.6.20.patch
ext4-journal-chksum-review-fix.patch
ext4_uninit_blockgroup.patch
64-bit-i_version.patch
i_version_hi.patch
ext4_i_version_hi_2.patch
i_version_update_ext4.patch
jbd-stats-through-procfs
jbd-stats-through-procfs_fix.patch
delalloc-vfs.patch
delalloc-ext4.patch
ext-truncate-mutex.patch
ext3-4-migrate.patch
new-extent-function.patch
generic-find-next-le-bit
mballoc-core.patch
ext234_enlarge_blocksize.patch
ext2_rec_len_overflow_with_64kblk_fix.patch
ext3_rec_len_overflow_with_64kblk_fix.patch
ext4_rec_len_overflow_with_64kblk_fix.patch
RFC-1-2-JBD-slab-management-support-for-large-b.patch
RFC-2-2-JBD-blocks-reservation-fix-for-large-bl.patch
Changes:
a) deleted mballoc-fixup.patch; merged with mballoc-core.patch
b) deleted mballoc-mkdir-oops-fix.patch; merged with mballoc-core.patch
c) merged the uninitialized-group-related changes for mballoc
d) merged the checkpatch.pl error-fix patch for mballoc
e) merged the ext4_ext_search_right fix for the error:
EXT4-fs error (device sdc): ext4_ext_search_right: bad header in inode #3745797: unexpected eh_depth - magic f30a, entries 81, max 84(0), depth 0(1)
f) PPC build fix by introducing generic_find_next_le_bit()
g) Documented the code. The remaining places that need more comments are marked with FIXME!!
h) 48-bit block usage by converting bg_*_bitmap and bg_inode_table accesses to the respective ext4_*_bitmap() and ext4_inode_table() functions
i) Added the JBD fix patches from Mingming for large block size, with the comment from hch. (These are added towards the end of the series.)
k) Updated ext234_enlarge_blocksize.patch, ext2_rec_len_overflow_with_64kblk_fix.patch, ext3_rec_len_overflow_with_64kblk_fix.patch, and
ext4_rec_len_overflow_with_64kblk_fix.patch to make sure they can be applied with stgit. They didn't have a valid email ID in the
From: field. (This is only a commit-message update.)
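Regarding item f): a hedged userspace sketch of what a generic find-next-le-bit helper does. This is illustrative only; the in-kernel generic_find_next_le_bit() operates on unsigned long words, not bytes, and the function name here is made up for the example.

```c
#include <stddef.h>

/* Userspace sketch of the idea behind generic_find_next_le_bit():
 * on-disk ext4 bitmaps are laid out little-endian, so scanning
 * byte-by-byte (bit 0 = LSB of byte 0) yields the same bit
 * numbering on any host endianness, which is what broke the
 * big-endian PPC build with the native bitop helpers. */
static size_t find_next_le_bit(const unsigned char *bitmap,
                               size_t nbits, size_t start)
{
    for (size_t i = start; i < nbits; i++)
        if (bitmap[i / 8] & (1u << (i % 8)))
            return i;
    return nbits;   /* not found */
}
```

With bitmap bytes {0x00, 0x05}, the set bits are 8 and 10 under this little-endian numbering, regardless of host byte order.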
Test status:
Minor testing with KVM. I also didn't do a PPC build.
Mingming/Avantika,
Can we update the patch queue with the series?
-aneesh
Aneesh Kumar K.V wrote:
> I have updated the mballoc patches. The same can be found at
>
> http://www.radian.org/~kvaneesh/ext4/patch-series/
>
>
>
>
>
> Test status:
> Minor testing with KVM. I also didn't do a PPC build.
>
Running fsstress on ppc64 gives me:
EXT4-fs: group 9: 16384 blocks in bitmap, 32254 in gd
EXT4-fs error (device sda7): ext4_mb_mark_diskspace_used: Allocating block in system zone - block = 294915
EXT4-fs error (device sda7): ext4_ext_find_extent: bad header in inode #213792: invalid magic - magic 5e5e, entries 24158, max 24158(0), depth 24158(0)
RTAS: event: 137875, Type: Platform Error, Severity: 2
EXT4-fs error (device sda7): ext4_ext_find_extent: bad header in inode #232149: invalid magic - magic e5e5, entries 58853, max 58853(0), depth 58853(0)
RTAS: event: 137876, Type: Platform Error, Severity: 2
EXT4-fs error (device sda7): ext4_ext_find_extent: bad header in inode #214332: invalid magic - magic 0, entries 0, max 0(0), depth 0(0)
EXT4-fs error (device sda7): ext4_ext_remove_space: bad header in inode #232149: invalid magic - magic e5e5, entries 58853, max 58853(0), depth 58853(0)
-aneesh
On Tue, 2007-09-11 at 13:59 +0530, Aneesh Kumar K.V wrote:
> I have updated the mballoc patches. The same can be found at
>
> http://www.radian.org/~kvaneesh/ext4/patch-series/
>
>
>
> The series file is
>
> # This series applies on GIT commit b07d68b5ca4d55a16fab223d63d5fb36f89ff42f
> ext4-journal_chksum-2.6.20.patch
> ext4-journal-chksum-review-fix.patch
> ext4_uninit_blockgroup.patch
> 64-bit-i_version.patch
> i_version_hi.patch
> ext4_i_version_hi_2.patch
> i_version_update_ext4.patch
> jbd-stats-through-procfs
> jbd-stats-through-procfs_fix.patch
> delalloc-vfs.patch
> delalloc-ext4.patch
> ext-truncate-mutex.patch
> ext3-4-migrate.patch
> new-extent-function.patch
> generic-find-next-le-bit
> mballoc-core.patch
> ext234_enlarge_blocksize.patch
> ext2_rec_len_overflow_with_64kblk_fix.patch
> ext3_rec_len_overflow_with_64kblk_fix.patch
> ext4_rec_len_overflow_with_64kblk_fix.patch
> RFC-1-2-JBD-slab-management-support-for-large-b.patch
> RFC-2-2-JBD-blocks-reservation-fix-for-large-bl.patch
>
>
>
> Changes:
> a) deleted mballoc-fixup.patch; merged with mballoc-core.path
> b) deleted mballoc-mkdir-oops-fix.patch; merged with mballoc-core.patch
> c) merged Uninitialized group related changes for mballoc
> d) merged the checkpatch.pl error fix patch for mballoc
> e) merged the fix for ext4_ext_search_right for error
> EXT4-fs error (device sdc): ext4_ext_search_right: bad header in inode #3745797: unexpected eh_depth - magic f30a, entries 81, max 84(0), depth 0(1)
> f) PPC build fix by introducing generic_find_next_le_bit()
> g) Document the code. Rest of the places that needs more comments are marked with FIXME!!
> h) 48 bit block usage by converting bg_*_bitmap and bg_inode_table to respective ext4_*_bitmap and ext4_inode_table() function
> i) Added jbd fixes patches from Mingming for large block size with the comment from hch. (this is added towards the end of the series)
> k) Updated the ext234_enlarge_blocksize.patch ext2_rec_len_overflow_with_64kblk_fix.patch ext3_rec_len_overflow_with_64kblk_fix.patch
> ext4_rec_len_overflow_with_64kblk_fix.patch to make sure they can be applied with stgit. They didn't had a valid email id in the
> From: field. ( This is only commit message update )
>
> Test status:
> Minor testing with KVM. I also didn't do a PPC build.
>
> Mingming/Avantika,
>
> Can we update the patch queue with the series ?
>
Thanks for the update. I just got back home from London late last night and am
still catching up on email, so I haven't had a chance to update the ext4
patch queue yet. Will do so shortly.
Cheers,
Mingming
> -aneesh
>
> -
> To unsubscribe from this list: send the line "unsubscribe linux-ext4" in
> the body of a message to [email protected]
> More majordomo info at http://vger.kernel.org/majordomo-info.html
Valerie,
Valerie Clement wrote:
> Aneesh Kumar K.V wrote:
>>
>> running fsstress on ppc64 give me
>> EXT4-fs: group 9: 16384 blocks in bitmap, 32254 in gd
>> EXT4-fs error (device sda7): ext4_mb_mark_diskspace_used: Allocating
>> block in system zone - block = 294915
>>
>> EXT4-fs error (device sda7): ext4_ext_find_extent: bad header in inode
>> #213792: invalid magic - magic 5e5e, entries 24158, max 24158(0),
>> depth 24158(0)
>> RTAS: event: 137875, Type: Platform Error, Severity: 2
>> EXT4-fs error (device sda7): ext4_ext_find_extent: bad header in inode
>> #232149: invalid magic - magic e5e5, entries 58853, max 58853(0),
>> depth 58853(0)
>> RTAS: event: 137876, Type: Platform Error, Severity: 2
>> EXT4-fs error (device sda7): ext4_ext_find_extent: bad header in inode
>> #214332: invalid magic - magic 0, entries 0, max 0(0), depth 0(0)
>> EXT4-fs error (device sda7): ext4_ext_remove_space: bad header in
>> inode #232149: invalid magic - magic e5e5, entries 58853, max
>> 58853(0), depth 58853(0)
>>
>> -aneesh
>
> endian problem?
> When running sparse on fs/ext4/ (make C=2 CF="-D__CHECK_ENDIAN__") :
>
> fs/ext4/extents.c:2570:12: warning: incorrect type in assignment
> (different base types)
> fs/ext4/extents.c:2570:12: expected unsigned long [unsigned]
> [assigned] allocated
> fs/ext4/extents.c:2570:12: got restricted unsigned short
> [addressable] [assigned] [usertype] ee_len
>
> I think the following line in extents.c
> allocated = newex.ee_len;
> should be replaced by
> allocated = le16_to_cpu(newex.ee_len);
>
>
Yes, I guess that is the issue. fsstress has now been running for the last 30 minutes without any error.
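For reference, the conversion Valerie suggests matters on ppc64 because the extent fields are stored little-endian on disk. A minimal userspace sketch of the idea, where load_le16() is a hypothetical stand-in for the kernel's le16_to_cpu():

```c
#include <stdint.h>

/* load_le16() is a hypothetical stand-in for the kernel's
 * le16_to_cpu(): read a 16-bit little-endian on-disk field in a
 * way that yields the same value on any host endianness. */
static uint16_t load_le16(const uint8_t *p)
{
    return (uint16_t)(p[0] | ((uint16_t)p[1] << 8));
}
```

Reading the extent-header magic 0xf30a stored on disk as bytes {0x0a, 0xf3} gives 0xf30a on any host; assigning the raw __le16 field directly, as the old `allocated = newex.ee_len;` line did, only happens to work on little-endian machines.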
I have updated the patch series at
http://www.radian.org/~kvaneesh/ext4/patch-series/
Changes:
a) mballoc patch fixes for sparse warnings
b) ext4_i_version_hi_2.patch fixes for sparse warnings
c) Added a commit message for ext4_uninit_blockgroup.patch
d) Added a new patch, sparse-fix.patch (this can be pushed upstream separately)
The complete patch set is at
http://www.radian.org/~kvaneesh/ext4/patch-series/ext4-patch-queue.tar
-aneesh
On Wed, 2007-09-12 at 23:29 +0530, Aneesh Kumar K.V wrote:
> Valerie,
>
>
> Valerie Clement wrote:
> > Aneesh Kumar K.V wrote:
> >>
> >> running fsstress on ppc64 give me
> >> EXT4-fs: group 9: 16384 blocks in bitmap, 32254 in gd
> >> EXT4-fs error (device sda7): ext4_mb_mark_diskspace_used: Allocating
> >> block in system zone - block = 294915
> >>
> >> EXT4-fs error (device sda7): ext4_ext_find_extent: bad header in inode
> >> #213792: invalid magic - magic 5e5e, entries 24158, max 24158(0),
> >> depth 24158(0)
> >> RTAS: event: 137875, Type: Platform Error, Severity: 2
> >> EXT4-fs error (device sda7): ext4_ext_find_extent: bad header in inode
> >> #232149: invalid magic - magic e5e5, entries 58853, max 58853(0),
> >> depth 58853(0)
> >> RTAS: event: 137876, Type: Platform Error, Severity: 2
> >> EXT4-fs error (device sda7): ext4_ext_find_extent: bad header in inode
> >> #214332: invalid magic - magic 0, entries 0, max 0(0), depth 0(0)
> >> EXT4-fs error (device sda7): ext4_ext_remove_space: bad header in
> >> inode #232149: invalid magic - magic e5e5, entries 58853, max
> >> 58853(0), depth 58853(0)
> >>
> >> -aneesh
> >
> > endian problem?
> > When running sparse on fs/ext4/ (make C=2 CF="-D__CHECK_ENDIAN__") :
> >
> > fs/ext4/extents.c:2570:12: warning: incorrect type in assignment
> > (different base types)
> > fs/ext4/extents.c:2570:12: expected unsigned long [unsigned]
> > [assigned] allocated
> > fs/ext4/extents.c:2570:12: got restricted unsigned short
> > [addressable] [assigned] [usertype] ee_len
> >
> > I think the following line in extents.c
> > allocated = newex.ee_len;
> > should be replaced by
> > allocated = le16_to_cpu(newex.ee_len);
> >
> >
>
>
> Yes i guess that is the issue. fsstress is now running for the last 30 minutes without any error.
>
> I have updated the patch series at
>
> http://www.radian.org/~kvaneesh/ext4/patch-series/
>
Thanks Aneesh. These changes will show up in the 2.6.23-rc5 ext4 patch queue.
> Changes:
> a) mballoc patch fixes for sparse warning
> b) ext4_i_version_hi_2.patch patch fixes for sparse warning
> c) Added commit message for ext4_uninit_blockgroup.patch
That's very helpful. I think this might be ready to merge for 2.6.24.
> d) Added new patch sparse-fix.patch ( this can be pushed to upstream separately)
This patch has a sparse fix for the uninit block group, so I folded that part
into ext4_uninit_blockgroup.patch.
I moved this patch to the beginning of the series, as this fix should be
merged upstream separately.
Mingming
>
>
> The complete patch set in
> http://www.radian.org/~kvaneesh/ext4/patch-series/ext4-patch-queue.tar
>
>
> -aneesh
>
>
I checked the logs today for the fsstress run and found this in my dmesg log. The
same stack trace is repeated many times:
uh! busy PA
Call Trace:
[c0000000efa72fc0] [c00000000000fe30] .show_stack+0x6c/0x1a0 (unreliable)
[c0000000efa73060] [c0000000001954a0] .ext4_mb_discard_group_preallocations+0x208/0x4fc
[c0000000efa73180] [c0000000001957c8] .ext4_mb_discard_preallocations+0x34/0x94
[c0000000efa73220] [c000000000197fc8] .ext4_mb_new_blocks+0x1fc/0x2c0
[c0000000efa733a0] [c00000000018dfb4] .ext4_ext_get_blocks+0x540/0x6f8
[c0000000efa734d0] [c00000000017bfe8] .ext4_get_block+0x12c/0x1b4
[c0000000efa73580] [c000000000109104] .__blockdev_direct_IO+0x554/0xb94
[c0000000efa736b0] [c000000000179b28] .ext4_direct_IO+0x138/0x208
[c0000000efa73790] [c00000000009a314] .generic_file_direct_IO+0x134/0x1a0
[c0000000efa73840] [c00000000009a404] .generic_file_direct_write+0x84/0x150
[c0000000efa73900] [c00000000009bf54] .__generic_file_aio_write_nolock+0x2c4/0x3d4
[c0000000efa73a00] [c00000000009c0e4] .generic_file_aio_write+0x80/0x114
[c0000000efa73ac0] [c000000000175c90] .ext4_file_write+0x2c/0xd4
[c0000000efa73b50] [c0000000000d0cf4] .do_sync_write+0xc4/0x124
[c0000000efa73cf0] [c0000000000d16bc] .vfs_write+0x120/0x1f4
[c0000000efa73d90] [c0000000000d21a8] .sys_write+0x4c/0x8c
[c0000000efa73e30] [c00000000000852c] syscall_exit+0x0/0x40
uh! busy PA
Call Trace:
[c0000000eed76f40] [c00000000000fe30] .show_stack+0x6c/0x1a0 (unreliable)
[c0000000eed76fe0] [c0000000001954a0] .ext4_mb_discard_group_preallocations+0x208/0x4fc
[c0000000eed77100] [c0000000001957c8] .ext4_mb_discard_preallocations+0x34/0x94
[c0000000eed771a0] [c000000000197fc8] .ext4_mb_new_blocks+0x1fc/0x2c0
[c0000000eed77320] [c00000000018dfb4] .ext4_ext_get_blocks+0x540/0x6f8
[c0000000eed77450] [c00000000017bfe8] .ext4_get_block+0x12c/0x1b4
[c0000000eed77500] [c000000000102b0c] .__block_prepare_write+0x194/0x584
[c0000000eed77610] [c000000000102f30] .block_prepare_write+0x34/0x64
[c0000000eed77690] [c00000000017a638] .ext4_prepare_write+0xd8/0x1e0
[c0000000eed77750] [c00000000009ac18] .generic_file_buffered_write+0x310/0x834
[c0000000eed77900] [c00000000009bf88] .__generic_file_aio_write_nolock+0x2f8/0x3d4
[c0000000eed77a00] [c00000000009c0e4] .generic_file_aio_write+0x80/0x114
[c0000000eed77ac0] [c000000000175c90] .ext4_file_write+0x2c/0xd4
[c0000000eed77b50] [c0000000000d0cf4] .do_sync_write+0xc4/0x124
[c0000000eed77cf0] [c0000000000d16bc] .vfs_write+0x120/0x1f4
[c0000000eed77d90] [c0000000000d21a8] .sys_write+0x4c/0x8c
[c0000000eed77e30] [c00000000000852c] syscall_exit+0x0/0x40
uh! busy PA
Call Trace:
[c0000000ec97abe0] [c00000000000fe30] .show_stack+0x6c/0x1a0 (unreliable)
[c0000000ec97ac80] [c0000000001954a0] .ext4_mb_discard_group_preallocations+0x208/0x4fc
[c0000000ec97ada0] [c0000000001957c8] .ext4_mb_discard_preallocations+0x34/0x94
[c0000000ec97ae40] [c000000000197fc8] .ext4_mb_new_blocks+0x1fc/0x2c0
[c0000000ec97afc0] [c0000000001748c0] .ext4_new_block+0x84/0x98
[c0000000ec97b090] [c00000000018a55c] .ext4_ext_new_block+0x5c/0x7c
[c0000000ec97b130] [c00000000018b994] .ext4_ext_create_new_leaf+0x9e4/0xcf4
[c0000000ec97b250] [c00000000018c524] .ext4_ext_insert_extent+0x36c/0x5c4
[c0000000ec97b320] [c00000000018e020] .ext4_ext_get_blocks+0x5ac/0x6f8
[c0000000ec97b450] [c00000000017bfe8] .ext4_get_block+0x12c/0x1b4
[c0000000ec97b500] [c000000000102b0c] .__block_prepare_write+0x194/0x584
[c0000000ec97b610] [c000000000102f30] .block_prepare_write+0x34/0x64
[c0000000ec97b690] [c00000000017a638] .ext4_prepare_write+0xd8/0x1e0
[c0000000ec97b750] [c00000000009ac18] .generic_file_buffered_write+0x310/0x834
[c0000000ec97b900] [c00000000009bf88] .__generic_file_aio_write_nolock+0x2f8/0x3d4
[c0000000ec97ba00] [c00000000009c0e4] .generic_file_aio_write+0x80/0x114
[c0000000ec97bac0] [c000000000175c90] .ext4_file_write+0x2c/0xd4
[c0000000ec97bb50] [c0000000000d0cf4] .do_sync_write+0xc4/0x124
[c0000000ec97bcf0] [c0000000000d16bc] .vfs_write+0x120/0x1f4
[c0000000ec97bd90] [c0000000000d21a8] .sys_write+0x4c/0x8c
[c0000000ec97be30] [c00000000000852c] syscall_exit+0x0/0x4
Hi Alex,
Aneesh Kumar K.V wrote:
>
> I checked the logs today for the fsstress run i found this in my dmesg
> log. The
> same stack trace is repeated for many times
>
>
> uh! busy PA
> Call Trace:
> [c0000000efa72fc0] [c00000000000fe30] .show_stack+0x6c/0x1a0 (unreliable)
> [c0000000efa73060] [c0000000001954a0]
> .ext4_mb_discard_group_preallocations+0x208/0x4fc
> [c0000000efa73180] [c0000000001957c8]
> .ext4_mb_discard_preallocations+0x34/0x94
> [c0000000efa73220] [c000000000197fc8] .ext4_mb_new_blocks+0x1fc/0x2c0
> [c0000000efa733a0] [c00000000018dfb4] .ext4_ext_get_blocks+0x540/0x6f8
> [c0000000efa734d0] [c00000000017bfe8] .ext4_get_block+0x12c/0x1b4
> [c0000000efa73580] [c000000000109104] .__blockdev_direct_IO+0x554/0xb94
> [c0000000efa736b0] [c000000000179b28] .ext4_direct_IO+0x138/0x208
> [c0000000efa73790] [c00000000009a314] .generic_file_direct_IO+0x134/0x1a0
> [c0000000efa73840] [c00000000009a404] .generic_file_direct_write+0x84/0x150
> [c0000000efa73900] [c00000000009bf54]
> .__generic_file_aio_write_nolock+0x2c4/0x3d4
> [c0000000efa73a00] [c00000000009c0e4] .generic_file_aio_write+0x80/0x114
> [c0000000efa73ac0] [c000000000175c90] .ext4_file_write+0x2c/0xd4
> [c0000000efa73b50] [c0000000000d0cf4] .do_sync_write+0xc4/0x124
> [c0000000efa73cf0] [c0000000000d16bc] .vfs_write+0x120/0x1f4
> [c0000000efa73d90] [c0000000000d21a8] .sys_write+0x4c/0x8c
> [c0000000efa73e30] [c00000000000852c] syscall_exit+0x0/0x40
> uh! busy PA
I think we can remove this dump_stack from the code.
list_for_each_entry_safe(pa, tmp,
&grp->bb_prealloc_list, pa_group_list) {
spin_lock(&pa->pa_lock);
if (atomic_read(&pa->pa_count)) {
spin_unlock(&pa->pa_lock);
printk(KERN_ERR "uh! busy PA\n");
dump_stack();
busy = 1;
continue;
}
This happens during ext4_mb_discard_group_preallocations. It is quite possible that
during the discard operation some other CPU can use the preallocated space, right?
In fact, further down in the code, if we have skipped some of the PAs (busy == 1) and the
free space retrieved is not enough, we loop again.
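The skip-busy-and-retry behavior described above can be sketched in userspace as follows. Field and function names here are made up for the example; the kernel's ext4_mb_discard_group_preallocations() walks a locked list, not an array.

```c
#include <stddef.h>

/* Illustrative sketch: walk a group's preallocations, skip entries
 * that are still in use (pa_count > 0), and loop again when busy
 * PAs were skipped but not enough space was reclaimed. */
struct pa {
    int pa_count;   /* users of the preallocation; > 0 means busy */
    int pa_free;    /* blocks this preallocation would give back */
};

static int discard_group_preallocations(struct pa *list, size_t n, int needed)
{
    int freed = 0, busy, tries = 2;

    do {
        busy = 0;
        for (size_t i = 0; i < n; i++) {
            if (list[i].pa_count > 0) {
                busy = 1;            /* in use elsewhere: skip it */
                continue;
            }
            freed += list[i].pa_free;
            list[i].pa_free = 0;     /* discarded */
        }
        /* retry if we skipped busy PAs and still need more space */
    } while (busy && freed < needed && --tries);

    return freed;
}
```

Hitting the busy branch is expected under concurrent allocation, which is why the dump_stack there looks like noise rather than an error.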
Can you let me know why we added the dump_stack there?
-aneesh
On Tue, 11 Sep 2007 13:59:26 +0530 "Aneesh Kumar K.V" <[email protected]> wrote:
> I have updated the mballoc patches.
Has anyone reviewed this stuff? I don't see much evidence of it here?
Just a quick scan shows up heavy over-inlining, many
macros-which-should-be-functions and numerous needlessly global symbols.
Whether there are more substantial problems in there I have not the time to
tell.
Andrew Morton wrote:
> On Tue, 11 Sep 2007 13:59:26 +0530 "Aneesh Kumar K.V" <[email protected]> wrote:
>
>> I have updated the mballoc patches.
>
> Has anyone reviewed this stuff? I don't see much evidence of it here?
>
> Just a quick scan shows up heavy over-inlining, many
> macros-which-should-be-functions and numerous needlessly global symbols.
> Whether there are more substantial problems in there I have not the time to
> tell.
The idea of getting it added to the patch queue was to get more testing and also review.
I am right now looking at the code, trying to add more documentation. Things which are not
clear at the moment are marked with FIXME!!
-aneesh
Hi,
Yes, it's absolutely safe to remove. I just wanted to see how many
collisions happen in "real life".
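If the goal is only to observe collision frequency, a plain counter would do that without the stack-trace noise. This is a hypothetical sketch, not kernel code; the names are illustrative:

```c
/* Hypothetical replacement for the printk+dump_stack pair: count
 * "busy PA" collisions so their real-life frequency can still be
 * observed, e.g. exported later through a stats file. */
static long busy_pa_collisions;

static int pa_is_discardable(int pa_count)
{
    if (pa_count > 0) {            /* preallocation still in use */
        busy_pa_collisions++;      /* record the collision */
        return 0;                  /* caller skips it, may retry */
    }
    return 1;                      /* safe to discard */
}
```

The counter preserves the measurement Alex wanted while keeping the common contended path quiet.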
On 9/14/07, Aneesh Kumar K.V <[email protected]> wrote:
> Hi Alex,
>
> Aneesh Kumar K.V wrote:
> >
> > I checked the logs today for the fsstress run i found this in my dmesg
> > log. The
> > same stack trace is repeated for many times
> >
> >
> > uh! busy PA
> > Call Trace:
> > [c0000000efa72fc0] [c00000000000fe30] .show_stack+0x6c/0x1a0 (unreliable)
> > [c0000000efa73060] [c0000000001954a0]
> > .ext4_mb_discard_group_preallocations+0x208/0x4fc
> > [c0000000efa73180] [c0000000001957c8]
> > .ext4_mb_discard_preallocations+0x34/0x94
> > [c0000000efa73220] [c000000000197fc8] .ext4_mb_new_blocks+0x1fc/0x2c0
> > [c0000000efa733a0] [c00000000018dfb4] .ext4_ext_get_blocks+0x540/0x6f8
> > [c0000000efa734d0] [c00000000017bfe8] .ext4_get_block+0x12c/0x1b4
> > [c0000000efa73580] [c000000000109104] .__blockdev_direct_IO+0x554/0xb94
> > [c0000000efa736b0] [c000000000179b28] .ext4_direct_IO+0x138/0x208
> > [c0000000efa73790] [c00000000009a314] .generic_file_direct_IO+0x134/0x1a0
> > [c0000000efa73840] [c00000000009a404] .generic_file_direct_write+0x84/0x150
> > [c0000000efa73900] [c00000000009bf54]
> > .__generic_file_aio_write_nolock+0x2c4/0x3d4
> > [c0000000efa73a00] [c00000000009c0e4] .generic_file_aio_write+0x80/0x114
> > [c0000000efa73ac0] [c000000000175c90] .ext4_file_write+0x2c/0xd4
> > [c0000000efa73b50] [c0000000000d0cf4] .do_sync_write+0xc4/0x124
> > [c0000000efa73cf0] [c0000000000d16bc] .vfs_write+0x120/0x1f4
> > [c0000000efa73d90] [c0000000000d21a8] .sys_write+0x4c/0x8c
> > [c0000000efa73e30] [c00000000000852c] syscall_exit+0x0/0x40
> > uh! busy PA
>
>
> I think we can remove from the code this dump_stack.
>
>
> list_for_each_entry_safe(pa, tmp,
> &grp->bb_prealloc_list, pa_group_list) {
> spin_lock(&pa->pa_lock);
> if (atomic_read(&pa->pa_count)) {
> spin_unlock(&pa->pa_lock);
> printk(KERN_ERR "uh! busy PA\n");
> dump_stack();
> busy = 1;
> continue;
> }
>
> This happens during ext4_mb_discard_group_prealloction. It is quiet possible that
> during the discard operation some other CPUs can use the preallocated space right ?
> Infact down the code we see if we have skipped some of the PA (busy == 1 )and the
> free space retrieved is not enough they we loop again
>
> Can you let me know why we marked the dump_stack there ?
>
>
> -aneesh
>
>
>
--
thanks, Alex