Subject: Race to power off harming SATA SSDs

Summary:

Linux properly issues the prepare-to-poweroff command to SATA SSDs,
but it does not wait long enough to ensure the SSD has carried it
out.

This causes a race between the platform power-off path and the SSD
device. When the SSD loses the race, its power is cut while it is still
doing its final book-keeping for poweroff. This is known to be harmful
to most SSDs, and there is a non-zero chance of it even bricking the
device.

Apparently, it is enough to wait a few seconds before powering off the
platform to give the SSDs enough time to fully process the STANDBY
IMMEDIATE command.

This issue was verified to exist on SATA SSDs made by at least Crucial
(and thus likely also Micron), Intel, and Samsung. It was verified to
exist on several 3.x to 4.9 kernels, both distro (Debian) and
upstream stable/longterm kernels from kernel.org. Only x86-64 was
tested.

A proof-of-concept patch is attached; it was sufficient to
*completely* avoid the issue on the test set over a testing period of
six to eight weeks.


Details and hypothesis:

For a long while, I have noticed that the S.M.A.R.T.-provided "unit was
powered off unexpectedly" attributes of the SSDs under my care were
rising on several boxes, without any unexpected power cuts to account
for them.

This has been going on for a *long* time (several years, since the first
SSD I got), but it was too rare an event for me to try to track down
the root cause... until a friend reported that the SSD in his laptop was
already reporting several hundred unclean power-offs. That made it much
easier to track down.

Per spec (and device manuals), SCSI, SATA and ATA-attached SSDs must be
informed of an imminent poweroff so that they can checkpoint background
tasks, flush RAM caches and close logs. For SCSI SSDs, you must issue a
START_STOP_UNIT (stop) command. For SATA, you must issue a STANDBY
IMMEDIATE command. I haven't checked ATA, but it should be the same as
SATA.

In order to comply with this requirement, the Linux SCSI "sd" device
driver issues a START_STOP_UNIT command when the device is shut down [1].
For SATA SSDs, the SCSI START_STOP_UNIT command is properly
translated by the kernel SAT layer into STANDBY IMMEDIATE.

After issuing the command, the kernel properly waits for the device to
report that the command has been completed before it proceeds.
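
For reference, the relevant hook in drivers/scsi/sd.c of the kernels I
tested looks roughly like this (paraphrased and trimmed, not a verbatim
copy):

static void sd_shutdown(struct device *dev)
{
	struct scsi_disk *sdkp = dev_get_drvdata(dev);

	if (!sdkp || pm_runtime_suspended(dev))
		return;

	/* Flush the write cache first (SYNCHRONIZE CACHE). */
	if (sdkp->WCE && sdkp->media_present)
		sd_sync_cache(sdkp);

	/*
	 * Issue START_STOP_UNIT (stop); the SAT layer translates it
	 * into STANDBY IMMEDIATE for ATA/SATA devices.  This returns
	 * as soon as the device reports command completion -- nothing
	 * here waits for the device to actually settle.
	 */
	if (system_state != SYSTEM_RESTART && sdkp->device->manage_start_stop)
		sd_start_stop_device(sdkp, 0);
}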

However, *IN PRACTICE*, SATA STANDBY IMMEDIATE command completion
[often?] only indicates that the device is now switching to the target
power management state, not that it has reached the target state. Any
further device status inquiries would return that it is in STANDBY mode,
even if it is still entering that state.

The kernel then continues the shutdown path while the SSD is still
preparing itself to be powered off, and it becomes a race. When the
kernel + firmware wins, platform power is cut before the SSD has
finished (i.e. the SSD is subject to an unclean power-off).

Evidently, how often the SSD will lose the race depends on the platform
and SSD combination, and also on how often the system is powered off.
Sluggish platform firmware that takes its time to cut power can save the day...


Observing the effects:

An unclean SSD power-off will be signaled by the SSD device through an
increase in a specific S.M.A.R.T. attribute. These SMART attributes can
be read using the smartmontools package from http://www.smartmontools.org,
which should be available in just about every Linux distro.

smartctl -A /dev/sd#

The SMART attribute related to unclean power-off is vendor-specific, so
one might have to track down the SSD datasheet to know which attribute a
particular SSD uses. The naming of the attribute also varies.

For a Crucial M500 SSD with up-to-date firmware, this would be attribute
174 "Unexpect_Power_Loss_Ct", for example.

NOTE: unclean SSD power-offs are dangerous and may brick the device in
the worst case, or otherwise harm it (reduce longevity, damage flash
blocks). It is also not impossible to get data corruption.


Testing, and working around the issue:

I asked several Debian developers to test a patch (attached) on
any of their boxes that had SSDs complaining of unclean poweroffs. This
gave us a test corpus of Intel, Crucial and Samsung SSDs on laptops,
desktops, and a few workstations.

The proof-of-concept patch adds a delay of one second to the sd device
shutdown path.

Previously, the more sensitive devices/platforms in the test set would
report at least one or two unclean SSD power-offs a month. With the
patch, there was NOT a single increase reported after several weeks of
testing.

This is obviously not a test with 100% confidence, but it indicates very
strongly that the above analysis was correct, and that an added delay
was enough to work around the issue in the entire test set.



Fixing the issue properly:

The proof of concept patch works fine, but it "punishes" the system with
too much delay. Also, if sd device shutdown is serialized, it will
punish systems with many /dev/sd devices severely.

1. The delay needs to happen only once right before powering down for
hibernation/suspend/power-off. There is no need to delay per-device
for platform power off/suspend/hibernate.

2. A per-device delay needs to happen before signaling that a device
can be safely removed when doing controlled hotswap (e.g. when
deleting the SD device due to a sysfs command).

I am unsure how much *total* delay would be enough. Two seconds seems
like a safe bet.

Any comments? Any clues on how to make the delay "smarter", so that it
triggers only once during platform shutdown, but still triggers
per-device when doing per-device hotswapping?
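
To make the intent of (1) and (2) more concrete, here is the kind of
thing I have in mind. This is an untested sketch only: every identifier
below is made up, and where such a late hook could actually live is
exactly what I am asking about.

#include <linux/delay.h>
#include <linux/jiffies.h>
#include <linux/types.h>

/* Updated right after a STOP (STANDBY IMMEDIATE) completes successfully. */
static unsigned long sd_last_stop_jiffies;

static void sd_note_device_stopped(bool hotplug_removal)
{
	sd_last_stop_jiffies = jiffies;

	/*
	 * Controlled hot-removal: the bay may lose power as soon as we
	 * report success, so the settle delay must happen here, per
	 * device.
	 */
	if (hotplug_removal)
		msleep(1000);
}

/*
 * Called once, as late as possible in the platform
 * power-off/suspend/hibernate path, after all sd devices were stopped:
 * sleep only for whatever is left of the settle window, measured from
 * the last STOP command.
 */
static void sd_settle_before_platform_off(void)
{
	unsigned long deadline = sd_last_stop_jiffies + msecs_to_jiffies(2000);

	while (time_before(jiffies, deadline))
		msleep(100);
}

Measuring the window from the *last* STOP means a box with many sd
devices pays the (single) delay at most once, and often not at all,
because stopping the other devices already used up most of it.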


[1] In ancient times, it didn't, or at least the ATA/SATA side didn't.
It has been fixed for at least a decade; refer to "manage_start_stop", a
deprecated sysfs node that should have been removed back in 2008 :-)

--
Henrique Holschuh


2017-04-10 23:34:39

by Bart Van Assche

Subject: Re: Race to power off harming SATA SSDs

On Mon, 2017-04-10 at 20:21 -0300, Henrique de Moraes Holschuh wrote:
> A proof of concept patch is attached

Thank you for the very detailed write-up. Sorry but no patch was attached
to the e-mail I received from you ...

Bart.

Subject: sd: wait for slow devices on shutdown path

Author: Henrique de Moraes Holschuh <[email protected]>
Date: Wed Feb 1 20:42:02 2017 -0200

sd: wait for slow devices on shutdown path

Wait 1s during suspend/shutdown for the device to settle after
we issue the STOP command.

Otherwise we race ATA SSDs to powerdown, possibly causing damage to
FLASH/data and even bricking the device.

This is an experimental patch, there are likely better ways of doing
this that don't punish non-SSDs.

Signed-off-by: Henrique de Moraes Holschuh <[email protected]>

diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
index 4e08d1cd..3c6d5d3 100644
--- a/drivers/scsi/sd.c
+++ b/drivers/scsi/sd.c
@@ -3230,6 +3230,38 @@ static int sd_start_stop_device(struct scsi_disk *sdkp, int start)
res = 0;
}

+ /*
+ * Wait for slow devices that signal they have fully entered
+ * the stopped state before they actually have.
+ *
+ * This behavior is apparently allowed per-spec for ATA
+ * devices, and our SAT layer does not account for it.
+ * Thus, on return, the device might still be in the process
+ * of entering STANDBY state.
+ *
+ * Worse, apparently the ATA spec also says the unit should
+ * return that it is already in STANDBY state *while still
+ * entering that state*.
+ *
+ * SSDs absolutely depend on receiving a STANDBY IMMEDIATE
+ * command prior to power off for a clean shutdown (and
+ * likely we don't want to send them *anything else* in-
+ * between either, to be on the safe side).
+ *
+ * As things stand, we are racing the SSD's firmware. If it
+ * finishes first, nothing bad happens. If it doesn't, we
+ * cut power while it is still saving metadata; not only will
+ * this cause extra FLASH wear (and maybe even damage
+ * some cells), it also has a non-zero chance of bricking the
+ * SSD.
+ *
+ * Issue reported on Intel, Crucial and Micron SSDs.
+ * Issue can be detected by S.M.A.R.T. signaling unexpected
+ * power cuts.
+ */
+ if (!res && !start)
+ msleep(1000);
+
/* SCSI error codes must not go to the generic layer */
if (res)
return -EIO;

--
Henrique Holschuh

Subject: Re: Race to power off harming SATA SSDs

On Mon, 10 Apr 2017, Bart Van Assche wrote:
> On Mon, 2017-04-10 at 20:21 -0300, Henrique de Moraes Holschuh wrote:
> > A proof of concept patch is attached
>
> Thank you for the very detailed write-up. Sorry but no patch was attached
> to the e-mail I received from you ...

Indeed. It should arrive shortly, hopefully undamaged. Otherwise I
will resend with git-send-email :p

--
Henrique Holschuh

2017-04-10 23:52:23

by Tejun Heo

Subject: Re: Race to power off harming SATA SSDs

Hello,

On Mon, Apr 10, 2017 at 08:21:19PM -0300, Henrique de Moraes Holschuh wrote:
...
> Per spec (and device manuals), SCSI, SATA and ATA-attached SSDs must be
> informed of an imminent poweroff so that they can checkpoint background
> tasks, flush RAM caches and close logs. For SCSI SSDs, you must issue a
> START_STOP_UNIT (stop) command. For SATA, you must issue a STANDBY
> IMMEDIATE command. I haven't checked ATA, but it should be the same as
> SATA.

Yeah, it's the same. Even hard drives are expected to survive a lot
of unexpected power losses tho. They have to do emergency head
unloads but they're designed to withstand a healthy number of them.

> In order to comply with this requirement, the Linux SCSI "sd" device
> driver issues a START_STOP_UNIT command when the device is shutdown[1].
> For SATA SSD devices, the SCSI START_STOP_UNIT command is properly
> translated by the kernel SAT layer to STANDBY IMMEDIATE for SSDs.
>
> After issuing the command, the kernel properly waits for the device to
> report that the command has been completed before it proceeds.
>
> However, *IN PRACTICE*, SATA STANDBY IMMEDIATE command completion
> [often?] only indicates that the device is now switching to the target
> power management state, not that it has reached the target state. Any
> further device status inquiries would return that it is in STANDBY mode,
> even if it is still entering that state.
>
> The kernel then continues the shutdown path while the SSD is still
> preparing itself to be powered off, and it becomes a race. When the
> kernel + firmware wins, platform power is cut before the SSD has
> finished (i.e. the SSD is subject to an unclean power-off).

At that point, the device is fully flushed and in terms of data
integrity should be fine with losing power at any point anyway.

> Evidently, how often the SSD will lose the race depends on a platform
> and SSD combination, and also on how often the system is powered off.
> A sluggish firmware that takes its time to cut power can save the day...
>
>
> Observing the effects:
>
> An unclean SSD power-off will be signaled by the SSD device through an
> increase on a specific S.M.A.R.T attribute. These SMART attributes can
> be read using the smartmontools package from http://www.smartmontools.org,
> which should be available in just about every Linux distro.
>
> smartctl -A /dev/sd#
>
> The SMART attribute related to unclean power-off is vendor-specific, so
> one might have to track down the SSD datasheet to know which attribute a
> particular SSD uses. The naming of the attribute also varies.
>
> For a Crucial M500 SSD with up-to-date firmware, this would be attribute
> 174 "Unexpect_Power_Loss_Ct", for example.
>
> NOTE: unclean SSD power-offs are dangerous and may brick the device in
> the worst case, or otherwise harm it (reduce longevity, damage flash
> blocks). It is also not impossible to get data corruption.

I get that the incrementing counters might not be pretty but I'm a bit
skeptical about this being an actual issue. Because if that were
true, the device would be bricking itself from any sort of power
losses be that an actual power loss, battery rundown or hard power off
after crash.

> Testing, and working around the issue:
>
> I've asked for several Debian developers to test a patch (attached) in
> any of their boxes that had SSDs complaining of unclean poweroffs. This
> gave us a test corpus of Intel, Crucial and Samsung SSDs, on laptops,
> desktops, and a few workstations.
>
> The proof-of-concept patch adds a delay of one second to the SD-device
> shutdown path.
>
> Previously, the more sensitive devices/platforms in the test set would
> report at least one or two unclean SSD power-offs a month. With the
> patch, there was NOT a single increase reported after several weeks of
> testing.
>
> This is obviously not a test with 100% confidence, but it indicates very
> strongly that the above analysis was correct, and that an added delay
> was enough to work around the issue in the entire test set.
>
>
>
> Fixing the issue properly:
>
> The proof of concept patch works fine, but it "punishes" the system with
> too much delay. Also, if sd device shutdown is serialized, it will
> punish systems with many /dev/sd devices severely.
>
> 1. The delay needs to happen only once right before powering down for
> hibernation/suspend/power-off. There is no need to delay per-device
> for platform power off/suspend/hibernate.
>
> 2. A per-device delay needs to happen before signaling that a device
> can be safely removed when doing controlled hotswap (e.g. when
> deleting the SD device due to a sysfs command).
>
> I am unsure how much *total* delay would be enough. Two seconds seems
> like a safe bet.
>
> Any comments? Any clues on how to make the delay "smarter" to trigger
> only once during platform shutdown, but still trigger per-device when
> doing per-device hotswapping ?

So, if this is actually an issue, sure, we can try to work around;
however, can we first confirm that this has any other consequences
than a SMART counter being bumped up? I'm not sure how meaningful
that is in itself.

Thanks.

--
tejun

2017-04-10 23:58:00

by James Bottomley

Subject: Re: Race to power off harming SATA SSDs

On Tue, 2017-04-11 at 08:52 +0900, Tejun Heo wrote:
[...]
> > Any comments? Any clues on how to make the delay "smarter" to
> > trigger only once during platform shutdown, but still trigger per
> > -device when doing per-device hotswapping ?
>
> So, if this is actually an issue, sure, we can try to work around;
> however, can we first confirm that this has any other consequences
> than a SMART counter being bumped up? I'm not sure how meaningful
> that is in itself.

Seconded; especially as the proposed patch is way too invasive: we run
single threaded on shutdown and making every disk wait 1s is going to
drive enterprises crazy. I'm with Tejun: If the device replies GOOD to
SYNCHRONIZE CACHE, that means we're entitled to assume all written data
is safely on non-volatile media and any "essential housekeeping" can be
redone if the power goes away.

James


Subject: Re: Race to power off harming SATA SSDs

On Tue, 11 Apr 2017, Tejun Heo wrote:
> > The kernel then continues the shutdown path while the SSD is still
> > preparing itself to be powered off, and it becomes a race. When the
> > kernel + firmware wins, platform power is cut before the SSD has
> > finished (i.e. the SSD is subject to an unclean power-off).
>
> At that point, the device is fully flushed and in terms of data
> integrity should be fine with losing power at any point anyway.

All bets are off at this point, really.

We issued a command that explicitly orders the SSD to checkpoint and
stop all background tasks, and flush *everything* including invisible
state (device data, stats, logs, translation tables, flash metadata,
etc)... and then cut its power before it finished.

> > NOTE: unclean SSD power-offs are dangerous and may brick the device in
> > the worst case, or otherwise harm it (reduce longevity, damage flash
> > blocks). It is also not impossible to get data corruption.
>
> I get that the incrementing counters might not be pretty but I'm a bit
> skeptical about this being an actual issue. Because if that were

As an *example* I know of because I tracked it personally, Crucial SSD
models from a few years ago were known to eventually brick on any
platform where they were subject to repeated unclean shutdowns,
*Windows included*. There are some threads on their forums about it.
Firmware revisions made it harder to happen, but still...

> true, the device would be bricking itself from any sort of power
> losses be that an actual power loss, battery rundown or hard power off
> after crash.

Bricking is a worst-case, really. I guess they learned to keep the
device always in a will-not-brick state using append-only logs for
critical state or something, so it really takes very nasty flash damage
to exactly the wrong place to render it unusable.

> > Fixing the issue properly:
> >
> > The proof of concept patch works fine, but it "punishes" the system with
> > too much delay. Also, if sd device shutdown is serialized, it will
> > punish systems with many /dev/sd devices severely.
> >
> > 1. The delay needs to happen only once right before powering down for
> > hibernation/suspend/power-off. There is no need to delay per-device
> > for platform power off/suspend/hibernate.
> >
> > 2. A per-device delay needs to happen before signaling that a device
> > can be safely removed when doing controlled hotswap (e.g. when
> > deleting the SD device due to a sysfs command).
> >
> > I am unsure how much *total* delay would be enough. Two seconds seems
> > like a safe bet.
> >
> > Any comments? Any clues on how to make the delay "smarter" to trigger
> > only once during platform shutdown, but still trigger per-device when
> > doing per-device hotswapping ?
>
> So, if this is actually an issue, sure, we can try to work around;
> however, can we first confirm that this has any other consequences
> than a SMART counter being bumped up? I'm not sure how meaningful
> that is in itself.

I have no idea how to confirm whether an SSD is damaged less, or more,
by "STANDBY IMMEDIATE and cut power too early" when compared with a
sudden power cut. At least not without actually damaging SSDs, using
three groups (normal power cuts, STANDBY IMMEDIATE + early power cut,
and a control group).

A "SSD power cut test" search on duckduckgo shows several papers and
testing reports on the first results page. I don't think there is any
doubt whatsoever that your typical consumer SSD *can* get damaged by a
"sudden power cut" so badly that it is actually noticed by the user.

That FLASH itself gets damaged or can have stored data corrupted by
power cuts at bad times is quite clear:

http://cseweb.ucsd.edu/users/swanson/papers/DAC2011PowerCut.pdf

SSDs do a lot of work to recover from that without data loss. You won't
notice it easily unless that recovery work *fails*.

--
Henrique Holschuh

Subject: Re: Race to power off harming SATA SSDs

On Mon, 10 Apr 2017, James Bottomley wrote:
> On Tue, 2017-04-11 at 08:52 +0900, Tejun Heo wrote:
> [...]
> > > Any comments? Any clues on how to make the delay "smarter" to
> > > trigger only once during platform shutdown, but still trigger per
> > > -device when doing per-device hotswapping ?
> >
> > So, if this is actually an issue, sure, we can try to work around;
> > however, can we first confirm that this has any other consequences
> > than a SMART counter being bumped up? I'm not sure how meaningful
> > that is in itself.
>
> Seconded; especially as the proposed patch is way too invasive: we run

It is a proof of concept thing. It even says so in the patch commit
log, and in the cover text.

I don't want a one-second delay per device. I never proposed that,
either. In fact, I *specifically* asked for something else in the
paragraph you quoted.

I would much prefer a one- or two-second delay per platform *power
off*. And that's for platforms that do ACPI-like heavy-duty S3/S4/S5,
like x86/x86-64. Opportunistic high-frequency suspend on mobile likely
requires no such handling.

The per-device delay would be needed only for hotplug removal (device
delete), and that's just because some hardware powers down bays (like
older thinkpads with ATA-compatible bays, and some industrial systems).

--
Henrique Holschuh

2017-04-11 10:46:57

by Martin Steigerwald

Subject: Re: Race to power off harming SATA SSDs

On Tuesday, 11 April 2017, 08:52:06 CEST, Tejun Heo wrote:
> > Evidently, how often the SSD will lose the race depends on a platform
> > and SSD combination, and also on how often the system is powered off.
> > A sluggish firmware that takes its time to cut power can save the day...
> >
> >
> > Observing the effects:
> >
> > An unclean SSD power-off will be signaled by the SSD device through an
> > increase on a specific S.M.A.R.T attribute. These SMART attributes can
> > be read using the smartmontools package from http://www.smartmontools.org,
> > which should be available in just about every Linux distro.
> >
> > smartctl -A /dev/sd#
> >
> > The SMART attribute related to unclean power-off is vendor-specific, so
> > one might have to track down the SSD datasheet to know which attribute a
> > particular SSD uses. The naming of the attribute also varies.
> >
> > For a Crucial M500 SSD with up-to-date firmware, this would be attribute
> > 174 "Unexpect_Power_Loss_Ct", for example.
> >
> > NOTE: unclean SSD power-offs are dangerous and may brick the device in
> > the worst case, or otherwise harm it (reduce longevity, damage flash
> > blocks). It is also not impossible to get data corruption.
>
> I get that the incrementing counters might not be pretty but I'm a bit
> skeptical about this being an actual issue. Because if that were
> true, the device would be bricking itself from any sort of power
> losses be that an actual power loss, battery rundown or hard power off
> after crash.

The write-up by Henrique has been a very informative and interesting read for
me. I wondered about the same question, though.

I do have a Crucial M500 and I do have an increase of that counter:

martin@merkaba:~[…]/Crucial-M500> grep "^174" smartctl-a-201*
smartctl-a-2014-03-05.txt:174 Unexpect_Power_Loss_Ct 0x0032 100 100 000 Old_age Always - 1
smartctl-a-2014-10-11-nach-prüfsummenfehlern.txt:174 Unexpect_Power_Loss_Ct 0x0032 100 100 000 Old_age Always - 67
smartctl-a-2015-05-01.txt:174 Unexpect_Power_Loss_Ct 0x0032 100 100 000 Old_age Always - 105
smartctl-a-2016-02-06.txt:174 Unexpect_Power_Loss_Ct 0x0032 100 100 000 Old_age Always - 148
smartctl-a-2016-07-08-unreadable-sector.txt:174 Unexpect_Power_Loss_Ct 0x0032 100 100 000 Old_age Always - 201
smartctl-a-2017-04-11.txt:174 Unexpect_Power_Loss_Ct 0x0032 100 100 000 Old_age Always - 272


I mostly didn't notice anything, except for one time when I indeed had a
BTRFS checksum error, luckily within a BTRFS RAID 1 with an Intel SSD (which
also has an attribute for unclean shutdowns, and it rises as well).

I blogged about this in German quite some time ago:

https://blog.teamix.de/2015/01/19/btrfs-raid-1-selbstheilung-in-aktion/

(I think it's easy enough to get the point of the blog post even without
understanding German.)

Result of scrub:

scrub started at Thu Oct 9 15:52:00 2014 and finished after 564 seconds
total bytes scrubbed: 268.36GiB with 60 errors
error details: csum=60
corrected errors: 60, uncorrectable errors: 0, unverified errors: 0

Device errors were on:

merkaba:~> btrfs device stats /home
[/dev/mapper/msata-home].write_io_errs 0
[/dev/mapper/msata-home].read_io_errs 0
[/dev/mapper/msata-home].flush_io_errs 0
[/dev/mapper/msata-home].corruption_errs 60
[/dev/mapper/msata-home].generation_errs 0
[…]

(that's the Crucial M500)


I didn't have any explanation for this, but I suspected some unclean shutdown,
even though I remembered no unclean shutdown. I take good care to always have a
battery in this ThinkPad T520, due to unclean shutdown issues with the Intel SSD
320 (bricked device which reports 8 MiB as capacity, probably fixed by the
firmware update I applied back then).

The write-up by Henrique gave me the idea that maybe it wasn't a user-triggered
unclean shutdown that caused the issue, but an unclean shutdown triggered by
the Linux kernel's SSD shutdown procedure.

Of course, I don't know whether this is the case, and I think there is no way
to prove or falsify it years after this happened. I never had this happen
again.

Thanks,
--
Martin

Subject: Re: Race to power off harming SATA SSDs

On Tue, 11 Apr 2017, Martin Steigerwald wrote:
> I do have a Crucial M500 and I do have an increase of that counter:
>
> martin@merkaba:~[…]/Crucial-M500> grep "^174" smartctl-a-201*
> smartctl-a-2014-03-05.txt:174 Unexpect_Power_Loss_Ct 0x0032 100 100 000 Old_age Always - 1
> smartctl-a-2014-10-11-nach-prüfsummenfehlern.txt:174 Unexpect_Power_Loss_Ct 0x0032 100 100 000 Old_age Always - 67
> smartctl-a-2015-05-01.txt:174 Unexpect_Power_Loss_Ct 0x0032 100 100 000 Old_age Always - 105
> smartctl-a-2016-02-06.txt:174 Unexpect_Power_Loss_Ct 0x0032 100 100 000 Old_age Always - 148
> smartctl-a-2016-07-08-unreadable-sector.txt:174 Unexpect_Power_Loss_Ct 0x0032 100 100 000 Old_age Always - 201
> smartctl-a-2017-04-11.txt:174 Unexpect_Power_Loss_Ct 0x0032 100 100 000 Old_age Always - 272
>
>
> I mostly didn´t notice anything, except for one time where I indeed had a
> BTRFS checksum error, luckily within a BTRFS RAID 1 with an Intel SSD (which
> also has an attribute for unclean shutdown which raises).

The Crucial M500 has something called "RAIN" which it got unmodified
from its Micron datacenter siblings of the time, along with a large
amount of flash overprovisioning. Too bad it lost the overprovisioned
supercapacitor bank present on the Microns.

RAIN does block-level N+1 RAID5-like parity across the flash chips on
top of the usual block-based ECC, and the SSD has a background scrubber
task that repairs any blocks that fail ECC correction, using the RAIN
parity information.

On such an SSD, you really need multi-chip flash corruption beyond what
ECC can fix to even get the operating system/filesystem to notice any
damage, unless you are paying attention to its SMART attributes (it
counts the number of blocks that required RAIN recovery -- which implies
ECC failed to correct that block in the first place), etc.

Unfortunately, I do not have correlation data to know whether there is
an increase in RAIN-corrected or ECC-corrected blocks during the 24h
after an unclean poweroff right after STANDBY IMMEDIATE on a Crucial
M500 SSD.

> The write-up Henrique gave me the idea, that maybe it wasn´t an user triggered
> unclean shutdown that caused the issue, but an unclean shutdown triggered by
> the Linux kernel SSD shutdown procedure implementation.

Maybe. But that corruption could easily have been caused by something
else. There is no shortage of possible culprits.

I expect most damage caused by unclean SSD power-offs to be hidden from
the user/operating system/filesystem by the extensive recovery
facilities present on most SSDs.

Note that the fact that data was transparently (and successfully)
recovered doesn't mean damage did not happen, or that the unit was not
harmed by it: it likely got some extra flash wear at the very least.

BTW, for the record, Windows 7 also appears to have had (and maybe still
has) this issue as far as I can tell. Almost every user report of
excessive unclean power-off alerts (and also of SSD bricking) to be
found on SSD vendor forums comes from Windows users.

--
Henrique Holschuh

2017-04-12 07:47:19

by Martin Steigerwald

Subject: Re: Race to power off harming SATA SSDs

On Tuesday, 11 April 2017, 11:31:29 CEST, Henrique de Moraes Holschuh wrote:
> On Tue, 11 Apr 2017, Martin Steigerwald wrote:
> > I do have a Crucial M500 and I do have an increase of that counter:
> >
> > martin@merkaba:~[…]/Crucial-M500> grep "^174" smartctl-a-201*
> > smartctl-a-2014-03-05.txt:174 Unexpect_Power_Loss_Ct 0x0032 100 100 000 Old_age Always - 1
> > smartctl-a-2014-10-11-nach-prüfsummenfehlern.txt:174 Unexpect_Power_Loss_Ct 0x0032 100 100 000 Old_age Always - 67
> > smartctl-a-2015-05-01.txt:174 Unexpect_Power_Loss_Ct 0x0032 100 100 000 Old_age Always - 105
> > smartctl-a-2016-02-06.txt:174 Unexpect_Power_Loss_Ct 0x0032 100 100 000 Old_age Always - 148
> > smartctl-a-2016-07-08-unreadable-sector.txt:174 Unexpect_Power_Loss_Ct 0x0032 100 100 000 Old_age Always - 201
> > smartctl-a-2017-04-11.txt:174 Unexpect_Power_Loss_Ct 0x0032 100 100 000 Old_age Always - 272
> >
> >
> > I mostly didn´t notice anything, except for one time where I indeed had a
> > BTRFS checksum error, luckily within a BTRFS RAID 1 with an Intel SSD
> > (which also has an attribute for unclean shutdown which raises).
>
> The Crucial M500 has something called "RAIN" which it got unmodified
> from its Micron datacenter siblings of the time, along with a large
> amount of flash overprovisioning. Too bad it lost the overprovisioned
> supercapacitor bank present on the Microns.

I think I read about this some time ago. I decided on a Crucial M500 because in
tests it wasn't the fastest, but there were hints that it might be one of the
most reliable mSATA SSDs of that time.

[… RAIN explanation …]

> > The write-up Henrique gave me the idea, that maybe it wasn´t an user
> > triggered unclean shutdown that caused the issue, but an unclean shutdown
> > triggered by the Linux kernel SSD shutdown procedure implementation.
>
> Maybe. But that corruption could easily having been caused by something
> else. There is no shortage of possible culprits.

Yes.

> I expect most damage caused by unclean SSD power-offs to be hidden from
> the user/operating system/filesystem by the extensive recovery
> facilities present on most SSDs.
>
> Note that the fact that data was transparently (and sucessfully)
> recovered doesn't mean damage did not happen, or that the unit was not
> harmed by it: it likely got some extra flash wear at the very least.

Okay, I understand.

Well, my guess back then (I didn't fully elaborate on it in the initial mail,
but did so in the blog post) was exactly that I didn't see any capacitor on
the mSATA SSD board. But I know the Intel SSD 320 has capacitors. So I
thought, okay, maybe there really was a sudden power loss due to me trying
to exchange the battery during suspend to RAM / standby, without me remembering
the event. And I thought, okay, without a capacitor the SSD then didn't get a
chance to write some of the data. But again, this is also just a guess.

I can provide the SMART data files to you in case you want to have a look at them.

> BTW, for the record, Windows 7 also appears to have had (and maybe still
> have) this issue as far as I can tell. Almost every user report of
> excessive unclean power off alerts (and also of SSD bricking) to be
> found on SSD vendor forums come from Windows users.

Interesting.

Thanks,
--
Martin