Hi
The kernel crashes when I/O is being submitted to a block device while the
block size of that device is simultaneously changed.
To reproduce the crash, apply this patch:
--- linux-3.4.3-fast.orig/fs/block_dev.c	2012-06-27 20:24:07.000000000 +0200
+++ linux-3.4.3-fast/fs/block_dev.c	2012-06-27 20:28:34.000000000 +0200
@@ -28,6 +28,7 @@
 #include <linux/log2.h>
 #include <linux/cleancache.h>
 #include <asm/uaccess.h>
+#include <linux/delay.h>
 #include "internal.h"
 
 struct bdev_inode {
@@ -203,6 +204,7 @@ blkdev_get_blocks(struct inode *inode, s
 
 	bh->b_bdev = I_BDEV(inode);
 	bh->b_blocknr = iblock;
+	msleep(1000);
 	bh->b_size = max_blocks << inode->i_blkbits;
 	if (max_blocks)
 		set_buffer_mapped(bh);
Use a device with a 4k block size, for example a ramdisk.

Run "dd if=/dev/ram0 of=/dev/null bs=4k count=1 iflag=direct"

While dd is sleeping in msleep, run "blockdev --setbsz 2048 /dev/ram0" on
another console.

You get a BUG at fs/direct-io.c:1013 - BUG_ON(this_chunk_bytes == 0);
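Roughly what happens (a simplified sketch of the arithmetic only, not the
actual fs/direct-io.c code; the values match the reproducer above):

	/*
	 * Illustration only: the direct-I/O setup samples i_blkbits before
	 * the sleep, blkdev_get_blocks() samples it again after the block
	 * size has changed, and the two values disagree.
	 */
	unsigned old_blkbits = 12;	/* sampled at dio setup: 4096-byte blocks */
	unsigned new_blkbits = 11;	/* sampled after "blockdev --setbsz 2048" */

	size_t b_size = 1U << new_blkbits;		/* get_blocks maps 2048 bytes */
	unsigned chunk_blocks = b_size >> old_blkbits;	/* 2048 >> 12 == 0 */
	size_t this_chunk_bytes = chunk_blocks << old_blkbits;	/* == 0 */
	/* ...and do_direct_IO() trips BUG_ON(this_chunk_bytes == 0) */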
One may ask "why would anyone do this - submit I/O and change the block size
simultaneously?" - the problem is that udev and lvm may scan and read any
block device at any time, so whenever you change the block size, there may
be some I/O to that device in flight and the crash may happen. That BUG
actually happened in a production environment because lvm was scanning block
devices while some other software changed the block size at the same time.
I would like to know your opinion on how to fix this crash. There are
several possibilities:
* We could read i_blkbits once, store it in the direct-I/O structure and
never read it again - direct I/O could perhaps be modified this way, since
it reads i_blkbits in only a few places (see the first sketch after this
list). But what about non-direct I/O? Non-direct I/O reads i_blkbits much
more often, and that code was obviously written without considering that it
may change - for block devices, i_blkbits is essentially a random value that
can change any time you read it, and block_read_full_page,
__block_write_begin, __block_write_full_page and others don't seem to take
that into account.
* Put some rw-lock around all I/O to the block device. The rw-lock would be
taken for read on all I/O paths and taken for write when changing the block
size. The downside would be a possible performance hit from the rw-lock. The
rw-lock could be made per-CPU to avoid cache line bouncing: take the lock
belonging to the current CPU for read, and take all CPUs' locks for write
(see the second sketch after this list).
* Allow changing the block size only if the device is open exactly once and
the opening process is single-threaded (so there couldn't be any outstanding
I/O)? I don't know if this could be tested reliably... And another question:
what to do if the device is open multiple times?
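To illustrate the first option (just a sketch - the real struct dio layout
differs and the field placement here is only for illustration):

	/* Sketch: sample i_blkbits exactly once per direct I/O and use only
	 * the cached copy afterwards. */
	struct dio {
		/* ... */
		unsigned blkbits;	/* sampled once at I/O setup */
		/* ... */
	};

	dio->blkbits = inode->i_blkbits;	/* the only read of i_blkbits */
	/* every later calculation, including get_blocks, would have to use
	 * dio->blkbits and never touch inode->i_blkbits again */

And the per-CPU rw-lock from the second option could look roughly like this
(a sketch only - the names are made up, and lockdep annotations, lock
ordering and per-CPU init_rwsem() initialization are glossed over):

	#include <linux/percpu.h>
	#include <linux/rwsem.h>
	#include <linux/smp.h>

	static DEFINE_PER_CPU(struct rw_semaphore, bdev_bsize_sem);

	/* I/O paths: take one CPU's semaphore for read.  We only remember
	 * which one we took so we can release the same one; it doesn't
	 * matter if the task migrates afterwards. */
	static int bdev_bsize_lock_read(void)
	{
		int cpu = raw_smp_processor_id();

		down_read(&per_cpu(bdev_bsize_sem, cpu));
		return cpu;
	}

	static void bdev_bsize_unlock_read(int cpu)
	{
		up_read(&per_cpu(bdev_bsize_sem, cpu));
	}

	/* Changing the block size: take every CPU's semaphore for write -
	 * this waits for all in-flight I/O and blocks new I/O until the
	 * size change is finished. */
	static void bdev_bsize_lock_write(void)
	{
		int cpu;

		for_each_possible_cpu(cpu)
			down_write(&per_cpu(bdev_bsize_sem, cpu));
	}

	static void bdev_bsize_unlock_write(void)
	{
		int cpu;

		for_each_possible_cpu(cpu)
			up_write(&per_cpu(bdev_bsize_sem, cpu));
	}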
Do you have any other ideas what to do with it?
Mikulas
On Wed 27-06-12 23:04:09, Mikulas Patocka wrote:
> The kernel crashes when I/O is being submitted to a block device while the
> block size of that device is simultaneously changed.
Nasty ;-)
> [...]
>
> Do you have any other ideas what to do with it?
Yeah, it's nasty, and neither solution looks particularly appealing. One
idea that came to my mind: I'm trying to solve some races between direct
I/O, buffered I/O, hole punching etc. with a new mapping interval lock. I'm
not sure it will go anywhere yet, but if it does, we could fix the above
race by taking the mapping lock for the whole block device around setting
the block size, thus effectively disallowing any I/O to it.
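Roughly (purely hypothetical - the interval lock isn't merged, and the
mapping_lock_write()/mapping_unlock_write() names below are made up for the
sketch; error checking and syncing are omitted):

	int set_blocksize(struct block_device *bdev, int size)
	{
		struct address_space *mapping = bdev->bd_inode->i_mapping;

		/* lock the device's whole range: no I/O can be in flight
		 * and no new I/O can start until we unlock */
		mapping_lock_write(mapping, 0, LLONG_MAX);

		bdev->bd_block_size = size;
		bdev->bd_inode->i_blkbits = blksize_bits(size);
		kill_bdev(bdev);	/* drop now-stale buffers */

		mapping_unlock_write(mapping, 0, LLONG_MAX);
		return 0;
	}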
Honza
--
Jan Kara <[email protected]>
SUSE Labs, CR
On Thu, 28 Jun 2012, Jan Kara wrote:
> > Do you have any other ideas what to do with it?
> Yeah, it's nasty, and neither solution looks particularly appealing. One
> idea that came to my mind: I'm trying to solve some races between direct
> I/O, buffered I/O, hole punching etc. with a new mapping interval lock.
> I'm not sure it will go anywhere yet, but if it does, we could fix the
> above race by taking the mapping lock for the whole block device around
> setting the block size, thus effectively disallowing any I/O to it.
What races are you trying to solve? There used to be i_alloc_sem that
prevented direct I/O while the file was being truncated, but it disappeared
in recent kernels...
Mikulas
On Thu 28-06-12 11:44:03, Mikulas Patocka wrote:
> On Thu, 28 Jun 2012, Jan Kara wrote:
>
> > [...]
>
> What races are you trying to solve? There used to be i_alloc_sem that
> prevented direct I/O while the file was being truncated, but it
> disappeared in recent kernels...
i_alloc_sem has been replaced by inode_dio_wait() and friends. The
problem is mainly with hole punching - see the thread starting here:
http://www.spinics.net/lists/linux-ext4/msg32059.html
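For reference, the replacement works roughly like this (simplified):

	atomic_inc(&inode->i_dio_count);	/* before submitting direct I/O */
	/* ... */
	inode_dio_done(inode);			/* on completion: decrement and
						 * wake up waiters */
	/* ... */
	inode_dio_wait(inode);			/* truncate etc.: sleep until
						 * i_dio_count drops to zero */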
Honza
Hi,
I have a simple idea; maybe it amounts to nothing. What about the physical
sector size? Block size can be measured both in bytes and in sectors, and
the block size can never become smaller than the physical sector size, no
matter how it is changed. Thereby every block always consists of a whole
number of sectors, and if submitted I/O were processed on a sector basis,
it wouldn't matter how the block size changed.
With the best regards,
Vyacheslav Dubeyko.
On Wed, 2012-06-27 at 23:04 -0400, Mikulas Patocka wrote:
> Hi
>
> The kernel crashes when I/O is being submitted to a block device while the
> block size of that device is simultaneously changed.
>
> [...]