2010-03-23 11:10:23

by bugzilla-daemon

Subject: [Bug 15579] ext4 -o discard produces incorrect blocks of zeroes in newly created files under heavy read+truncate+append-new-file load


--- Comment #5 from Andreas Beckmann <[email protected]> 2010-03-23 11:10:15 ---
(In reply to comment #4)
> Just for what it's worth, I've had trouble reproducing this on another brand of
> SSD... something like this (don't let the xfs_io throw you; it's just a
> convenient way to generate the IO). I did this on a 512M filesystem.

It might be a probability issue. For the 250 GB case I did about 200000
truncations in total on about 250 files and found 8 and 13 corrupt blocks in
the output files (I only kept detailed numbers for two runs). Reducing the
block size might "help" by increasing the number of I/Os.
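A rough back-of-envelope calculation (illustrative only, based on my approximate numbers above) shows why a short test run may easily see no corruption at all:

```python
# Illustrative only: per-truncation corruption rates from the two runs
# I kept detailed numbers for (8 and 13 corrupt blocks over ~200000
# truncations).
corrupt_blocks = [8, 13]
truncations = 200_000
rates = [c / truncations for c in corrupt_blocks]  # ~4e-05 and ~6.5e-05

# Expected corrupt blocks in a much shorter run of 5000 truncations:
expected = [r * 5000 for r in rates]  # well below 1 -- likely zero hits
```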

I can't test your script right now; the disks are all busy with some
long-running experiments. There should be another one just back from RMA on my
desk, so I can try it tomorrow when I'm back there (I was travelling for a
week).

What do you do with the remaining space of the SSD? Try putting a file system
there and filling it so that the SSD is ~99% full and can't remap the blocks
you are writing to as easily.
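For what it's worth, the fill step could look something like this (a sketch
only; the `/mnt/spare` mount point and the 1% headroom are my assumptions, not
something I have tested):

```shell
# Hypothetical sketch: fill the spare filesystem with incompressible data,
# leaving only ~1% free, so the SSD has little spare room to remap into.

# fill_size_kb TOTAL_KB AVAIL_KB -> KiB to write, keeping ~1% of the fs free
fill_size_kb() {
    echo $(( $2 - $1 / 100 ))
}

# Usage (assumed mount point /mnt/spare on the SSD's remaining space):
#   total=$(df -k /mnt/spare | awk 'NR==2 {print $2}')
#   avail=$(df -k /mnt/spare | awk 'NR==2 {print $4}')
#   dd if=/dev/urandom of=/mnt/spare/filler bs=1024 \
#      count=$(fill_size_kb "$total" "$avail") conv=fsync
```

Using /dev/urandom rather than /dev/zero matters here: incompressible data
keeps a drive that compresses or deduplicates internally from quietly freeing
the space back up.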
