From: Wenchao Chen <[email protected]>
Summary
=======
These patches[1] allow the host to turn off FUA.
FUA roughly involves the following two parts:
1) FUA(Forced Unit Access):
- The REQ_FUA flag can be OR ed into the r/w flags of a bio submitted from the
filesystem and will make sure that I/O completion for this request is only
signaled after the data has been committed to non-volatile storage.
2) In eMMC, FUA is implemented as Reliable Write, as shown in the code below:
static void mmc_blk_data_prep(struct mmc_queue *mq, struct mmc_queue_req *mqrq,
                              int recovery_mode, bool *do_rel_wr_p, bool *do_data_tag_p)
{
        ...
        /*
         * Reliable writes are used to implement Forced Unit Access and
         * are supported only on MMCs.
         */
        do_rel_wr = (req->cmd_flags & REQ_FUA) &&
                    rq_data_dir(req) == WRITE &&
                    (md->flags & MMC_BLK_REL_WR);
        ...
}
Patch structure
===============
patch#1: for block
patch#2: for sdhci-sprd
Tests
=====
Ran 'AndroBench' to evaluate the performance:
1. fua_disable = 1
/sys/block/mmcblk0/queue # cat fua
0
I tested 5 times for each case and report the average speed.
1) Sequential read:
Speed: 266.8MiB/s, 265.1MiB/s, 262.9MiB/s, 268.7MiB/s, 265.2MiB/s
Average speed: 265.74MiB/s
2) Random read:
Speed: 98.75MiB/s, 98.7MiB/s, 98.5MiB/s, 99.4MiB/s, 98.7MiB/s
Average speed: 98.81MiB/s
3) Sequential write:
Speed: 199.94MiB/s, 199.1MiB/s, 205.5MiB/s, 206.5MiB/s, 191.5MiB/s
Average speed: 200.5MiB/s
4) Random write:
Speed: 68.6MiB/s, 71.8MiB/s, 77.1MiB/s, 64.8MiB/s, 69.3MiB/s
Average speed: 70.32MiB/s
2. fua_disable = 0 (default 0)
/sys/block/mmcblk0/queue # cat fua
1
I tested 5 times for each case and report the average speed.
1) Sequential read:
Speed: 259.3MiB/s, 258.8MiB/s, 258.2MiB/s, 259.5MiB/s, 253.5MiB/s
Average speed: 257.86MiB/s
2) Random read:
Speed: 98.9MiB/s, 101MiB/s, 101MiB/s, 99MiB/s, 101.1MiB/s
Average speed: 100.2MiB/s
3) Sequential write:
Speed: 153.7MiB/s, 146.2MiB/s, 151.2MiB/s, 148.8MiB/s, 147.5MiB/s
Average speed: 149.48MiB/s
4) Random write:
Speed: 12.9MiB/s, 12.3MiB/s, 12.6MiB/s, 12.8MiB/s, 12.8MiB/s
Average speed: 12.68MiB/s
According to the above data, disabling FUA (fua_disable = 1) improves
performance:
1) Sequential read improved by 3%.
2) Random read decreased by 1%.
3) Sequential write improved by 34%.
4) Random write improved by 454%.
Therefore, it is recommended to let the host control FUA.
Reference
=========
[1] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/block/writeback_cache_control.rst
[2] Embedded Multi-Media Card (e•MMC) Electrical Standard (5.1)
Wenchao Chen (2):
mmc: block: Support Host to control FUA
mmc: sdhci-sprd: enable fua_disable for SPRDSDHCI
drivers/mmc/core/block.c | 3 ++-
drivers/mmc/host/sdhci-sprd.c | 2 ++
include/linux/mmc/host.h | 3 +++
3 files changed, 7 insertions(+), 1 deletion(-)
--
2.17.1
On 21/10/22 10:30, Wenchao Chen wrote:
> From: Wenchao Chen <[email protected]>
>
> Summary
> =======
> These patches[1] supports the host to turn off FUA.
>
> About FUA, roughly deal with the following two parts:
> 1) FUA(Forced Unit Access):
> - The REQ_FUA flag can be OR ed into the r/w flags of a bio submitted from the
> filesystem and will make sure that I/O completion for this request is only
> signaled after the data has been committed to non-volatile storage.
>
> 2) In emmc, FUA is represented as Reliable write. code show as below:
> static void mmc_blk_data_prep(struct mmc_queue *mq, struct mmc_queue_req *mqrq,
> int recovery_mode, bool *do_rel_wr_p, bool *do_data_tag_p)
> {
> ...
> /*
> * Reliable writes are used to implement Forced Unit Access and
> * are supported only on MMCs.
> */
> do_rel_wr = (req->cmd_flags & REQ_FUA) &&
> rq_data_dir(req) == WRITE &&
> (md->flags & MMC_BLK_REL_WR);
> ...
> }
>
> Patch structure
> ===============
> patch#1: for block
> patch#2: for sdhci-sprd
>
> Tests
> =====
> Ran 'AndroBench' to evaluate the performance:
It would be good to have more details e.g.
What file system? What block size? What journal size?
What file size? What record size?
> 1. fua_disable = 1
> /sys/block/mmcblk0/queue # cat fua 0
> I tested 5 times for each case and output a average speed.
>
> 1) Sequential read:
> Speed: 266.8MiB/s, 265.1MiB/s, 262.9MiB/s, 268.7MiB/s, 265.2MiB/s
> Average speed: 265.74MiB/s
>
> 2) Random read:
> Speed: 98.75MiB/s, 98.7MiB/s, 98.5MiB/s, 99.4MiB/s, 98.7MiB/s
> Average speed: 98.81MiB/s
>
> 3) Sequential write:
> Speed: 199.94MiB/s, 199.1MiB/s, 205.5MiB/s, 206.5MiB/s, 191.5MiB/s
> Average speed: 200.5MiB/s
>
> 4) Random write:
> Speed: 68.6MiB/s, 71.8MiB/s, 77.1MiB/s, 64.8MiB/s, 69.3MiB/s
> Average speed: 70.32MiB/s
>
> 2. fua_disable = 0 (default 0)
> /sys/block/mmcblk0/queue # cat fua 1
> I tested 5 times for each case and output a average speed.
>
> 1) Sequential read:
> Speed: 259.3MiB/s, 258.8MiB/s, 258.2MiB/s, 259.5MiB/s, 253.5MiB/s
> Average speed: 257.86MiB/s
>
> 2) Random read:
> Speed: 98.9MiB/s, 101MiB/s, 101MiB/s, 99MiB/s, 101.1MiB/s
> Average speed: 100.2MiB/s
>
> 3) Sequential write:
> Speed: 153.7MiB/s, 146.2MiB/s, 151.2MiB/s, 148.8MiB/s, 147.5MiB/s
> Average speed: 149.48MiB/s
>
> 4) Random write:
> Speed: 12.9MiB/s, 12.3MiB/s, 12.6MiB/s, 12.8MiB/s, 12.8MiB/s
> Average speed: 12.68MiB/s
Is every write being sync'ed or is there just a sync at the end?
>
> According to the above data, disable FUA (fua_disable = 1) improves the
> performance:
> 1)Sequential read improved by 3%.
> 2)Random read were down 1%.
FUA should not affect reads. If it is, you may want to investigate how.
> 3)Sequential write improved by 34%.
> 4)Random write improved by 454%.
> Therefore, it is recommended to support the host to control FUA.
>
> Reference
> =========
> [1] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/block/writeback_cache_control.rst
> [2] Embedded Multi-Media Card (e•MMC) Electrical Standard (5.1)''
You do not seem to have considered data integrity.
Regular disks are assumed to provide atomic sector writes. That is, a sector has either the old data or the new data, but not some corrupt mixture.
mmc does not have that assumption, which is presumably why Reliable Write has been used instead. Although that idea appears to have been thrown away for devices with no cache by commit 08ebf903af57 ("mmc: core: Fixup support for writeback-cache for eMMC and SD").
File systems can use FUA to mark a successful journal flush. Whether or not getting a torn sector at that point will corrupt the file system recovery is presumably file system specific, and maybe specific to file system options e.g. the use of checksums.
It may well be that a file system can survive a torn sector at that point, or that user space would prefer to take the risk in order to get better performance. In either of those cases, it is not really a decision for the host controller driver.
>
> Wenchao Chen (2):
> mmc: block: Support Host to control FUA
> mmc: sdhci-sprd: enable fua_disable for SPRDSDHCI
>
> drivers/mmc/core/block.c | 3 ++-
> drivers/mmc/host/sdhci-sprd.c | 2 ++
> include/linux/mmc/host.h | 3 +++
> 3 files changed, 7 insertions(+), 1 deletion(-)
>
> These patches[1] supports the host to turn off FUA.
>
> About FUA, roughly deal with the following two parts:
> 1) FUA(Forced Unit Access):
> - The REQ_FUA flag can be OR ed into the r/w flags of a bio submitted from the
> filesystem and will make sure that I/O completion for this request is only
> signaled after the data has been committed to non-volatile storage.
>
> 2) In emmc, FUA is represented as Reliable write. code show as below:
> static void mmc_blk_data_prep(struct mmc_queue *mq, struct mmc_queue_req *mqrq,
> int recovery_mode, bool *do_rel_wr_p, bool *do_data_tag_p) {
> ...
> /*
> * Reliable writes are used to implement Forced Unit Access and
> * are supported only on MMCs.
> */
> do_rel_wr = (req->cmd_flags & REQ_FUA) &&
> rq_data_dir(req) == WRITE &&
> (md->flags & MMC_BLK_REL_WR);
> ...
> }
So reliable writes are only issued when the FUA flag is set, as it should be?
>
> Patch structure
> ===============
> patch#1: for block
> patch#2: for sdhci-sprd
>
> Tests
> =====
> Ran 'AndroBench' to evaluate the performance:
> 1. fua_disable = 1
> /sys/block/mmcblk0/queue # cat fua 0
> I tested 5 times for each case and output a average speed.
>
> 1) Sequential read:
> Speed: 266.8MiB/s, 265.1MiB/s, 262.9MiB/s, 268.7MiB/s, 265.2MiB/s Average speed: 265.74MiB/s
>
> 2) Random read:
> Speed: 98.75MiB/s, 98.7MiB/s, 98.5MiB/s, 99.4MiB/s, 98.7MiB/s Average speed: 98.81MiB/s
>
> 3) Sequential write:
> Speed: 199.94MiB/s, 199.1MiB/s, 205.5MiB/s, 206.5MiB/s, 191.5MiB/s Average speed: 200.5MiB/s
>
> 4) Random write:
> Speed: 68.6MiB/s, 71.8MiB/s, 77.1MiB/s, 64.8MiB/s, 69.3MiB/s Average speed: 70.32MiB/s
>
Unless there is something special / wrong with sdhci-sprd (unlikely? It's just a flag) or your eMMC (maybe more likely?),
I'd expect this or similar performance degradation for any host controller and eMMC.
I can redo some measurements if you provide your workload.
But I'd say if you don't want to pay the price of reliable writes, then make sure not to issue them, by not issuing
FUA?
Maybe I'm misunderstanding, but why would the host controller driver control FUA?
> 2. fua_disable = 0 (default 0)
> /sys/block/mmcblk0/queue # cat fua 1
> I tested 5 times for each case and output a average speed.
>
> 1) Sequential read:
> Speed: 259.3MiB/s, 258.8MiB/s, 258.2MiB/s, 259.5MiB/s, 253.5MiB/s Average speed: 257.86MiB/s
>
> 2) Random read:
> Speed: 98.9MiB/s, 101MiB/s, 101MiB/s, 99MiB/s, 101.1MiB/s Average speed: 100.2MiB/s
>
> 3) Sequential write:
> Speed: 153.7MiB/s, 146.2MiB/s, 151.2MiB/s, 148.8MiB/s, 147.5MiB/s Average speed: 149.48MiB/s
>
> 4) Random write:
> Speed: 12.9MiB/s, 12.3MiB/s, 12.6MiB/s, 12.8MiB/s, 12.8MiB/s Average speed: 12.68MiB/s
>
> According to the above data, disable FUA (fua_disable = 1) improves the
> performance:
> 1)Sequential read improved by 3%.
> 2)Random read were down 1%.
> 3)Sequential write improved by 34%.
> 4)Random write improved by 454%.
> Therefore, it is recommended to support the host to control FUA.
>
>
Hyperstone GmbH | Reichenaustr. 39a | 78467 Konstanz
Managing Director: Dr. Jan Peter Berns.
Commercial register of local courts: Freiburg HRB381782
>
> Unless there is something special / wrong with sdhci-sprd (unlikely? Its just a flag) or your eMMC (maybe more likely?) then Id expect this or similar performance degradation for any host controller and eMMC.
> I can redo some measurement if you provide your workload.
> But I'd say if you don’t want to pay the price of reliable write then make sure to not issue them, by not issuing FUA?
> Maybe I'm misunderstanding, but why would the host controller driver control FUA?
>
Maybe one more point on that: compare performance with FUA disabled but the cache off; it should be comparable unless there is an actual reliable write problem.
If reliable writes are performing horribly on that eMMC, maybe a quirk could be considered.
Anyway, your ~13MB/s random write with FUA, if a reliable write is basically always active for a given cache size,
doesn't sound totally unreasonable (over 3000 IOPS, assuming 4k writes).
Hi Hunter
Thank you for your review!
Sorry for the late reply; I have been very busy lately.
On Fri, Oct 21, 2022 at 11:50 PM Adrian Hunter <[email protected]> wrote:
>
> On 21/10/22 10:30, Wenchao Chen wrote:
> > From: Wenchao Chen <[email protected]>
> >
> > Summary
> > =======
> > These patches[1] supports the host to turn off FUA.
> >
> > About FUA, roughly deal with the following two parts:
> > 1) FUA(Forced Unit Access):
> > - The REQ_FUA flag can be OR ed into the r/w flags of a bio submitted from the
> > filesystem and will make sure that I/O completion for this request is only
> > signaled after the data has been committed to non-volatile storage.
> >
> > 2) In emmc, FUA is represented as Reliable write. code show as below:
> > static void mmc_blk_data_prep(struct mmc_queue *mq, struct mmc_queue_req *mqrq,
> > int recovery_mode, bool *do_rel_wr_p, bool *do_data_tag_p)
> > {
> > ...
> > /*
> > * Reliable writes are used to implement Forced Unit Access and
> > * are supported only on MMCs.
> > */
> > do_rel_wr = (req->cmd_flags & REQ_FUA) &&
> > rq_data_dir(req) == WRITE &&
> > (md->flags & MMC_BLK_REL_WR);
> > ...
> > }
> >
> > Patch structure
> > ===============
> > patch#1: for block
> > patch#2: for sdhci-sprd
> >
> > Tests
> > =====
> > Ran 'AndroBench' to evaluate the performance:
>
> It would be good to have more details e.g.
> What file system? What block size? What journal size?
> What file size? What record size?
>
What file system?
F2FS
What block size?
Sequential: 32768KB, Random: 4KB
What file size?
64MB
> > 1. fua_disable = 1
> > /sys/block/mmcblk0/queue # cat fua 0
> > I tested 5 times for each case and output a average speed.
> >
> > 1) Sequential read:
> > Speed: 266.8MiB/s, 265.1MiB/s, 262.9MiB/s, 268.7MiB/s, 265.2MiB/s
> > Average speed: 265.74MiB/s
> >
> > 2) Random read:
> > Speed: 98.75MiB/s, 98.7MiB/s, 98.5MiB/s, 99.4MiB/s, 98.7MiB/s
> > Average speed: 98.81MiB/s
> >
> > 3) Sequential write:
> > Speed: 199.94MiB/s, 199.1MiB/s, 205.5MiB/s, 206.5MiB/s, 191.5MiB/s
> > Average speed: 200.5MiB/s
> >
> > 4) Random write:
> > Speed: 68.6MiB/s, 71.8MiB/s, 77.1MiB/s, 64.8MiB/s, 69.3MiB/s
> > Average speed: 70.32MiB/s
> >
> > 2. fua_disable = 0 (default 0)
> > /sys/block/mmcblk0/queue # cat fua 1
> > I tested 5 times for each case and output a average speed.
> >
> > 1) Sequential read:
> > Speed: 259.3MiB/s, 258.8MiB/s, 258.2MiB/s, 259.5MiB/s, 253.5MiB/s
> > Average speed: 257.86MiB/s
> >
> > 2) Random read:
> > Speed: 98.9MiB/s, 101MiB/s, 101MiB/s, 99MiB/s, 101.1MiB/s
> > Average speed: 100.2MiB/s
> >
> > 3) Sequential write:
> > Speed: 153.7MiB/s, 146.2MiB/s, 151.2MiB/s, 148.8MiB/s, 147.5MiB/s
> > Average speed: 149.48MiB/s
> >
> > 4) Random write:
> > Speed: 12.9MiB/s, 12.3MiB/s, 12.6MiB/s, 12.8MiB/s, 12.8MiB/s
> > Average speed: 12.68MiB/s
>
> Is every write being sync'ed of just sync at the end?
>
/*
* Reliable writes are used to implement Forced Unit Access and
* are supported only on MMCs.
*/
do_rel_wr = (req->cmd_flags & REQ_FUA) &&
rq_data_dir(req) == WRITE &&
(md->flags & MMC_BLK_REL_WR);
A Reliable Write access shall force the data to be written to
nonvolatile storage. It will consume more time.
> >
> > According to the above data, disable FUA (fua_disable = 1) improves the
> > performance:
> > 1)Sequential read improved by 3%.
> > 2)Random read were down 1%.
>
> FUA should not affect reads. If it is, you may want to investigate how.
>
> > 3)Sequential write improved by 34%.
> > 4)Random write improved by 454%.
> > Therefore, it is recommended to support the host to control FUA.
> >
> > Reference
> > =========
> > [1] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/block/writeback_cache_control.rst
> > [2] Embedded Multi-Media Card (e•MMC) Electrical Standard (5.1)''
>
> You do not seem to have considered data integrity.
>
> Regular disks are assumed to provide atomic sector writes. That is, a sector has either the old data or the new data, but not some corrupt mixture.
>
> mmc does not have that assumption, which is presumably why Reliable Write has been used instead. Although that idea appears to have been thrown away for devices with no cache by commit 08ebf903af57 ("mmc: core: Fixup support for writeback-cache for eMMC and SD").
>
> File systems can use FUA to mark a successful journal flush. Whether or not getting a torn sector at that point will corrupt the file system recovery is presumably file system specific, and maybe specific to file system options e.g. the use of checksums.
>
> It may well be that a file system can survive a torn sector at that point, or that user space would prefer to take the risk in order to get better performance. In either of those cases, it is not really a decision for the host controller driver.
>
Considering data integrity, we did a random power-down test, and
the experimental results were good.
FUA can only reduce data loss under abnormal conditions; it cannot
prevent it entirely.
I think there should be a balance between FUA and no FUA, but
filesystems seem to favor FUA.
FUA causes a drop in random write performance. If enough tests are
done, no FUA is acceptable.
I found a discussion about FUA:
https://lore.kernel.org/linux-f2fs-devel/[email protected]/
UFS reference:
https://lore.kernel.org/linux-scsi/[email protected]/
> >
> > Wenchao Chen (2):
> > mmc: block: Support Host to control FUA
> > mmc: sdhci-sprd: enable fua_disable for SPRDSDHCI
> >
> > drivers/mmc/core/block.c | 3 ++-
> > drivers/mmc/host/sdhci-sprd.c | 2 ++
> > include/linux/mmc/host.h | 3 +++
> > 3 files changed, 7 insertions(+), 1 deletion(-)
> >
>
On Fri, 11 Nov 2022 at 08:59, Wenchao Chen <[email protected]> wrote:
>
> Hi Hunter
> Thank you for your review!
> I'm sorry to reply you so late because I've been too busy lately.
>
> On Fri, Oct 21, 2022 at 11:50 PM Adrian Hunter <[email protected]> wrote:
> >
> > On 21/10/22 10:30, Wenchao Chen wrote:
> > > From: Wenchao Chen <[email protected]>
> > >
> > > Summary
> > > =======
> > > These patches[1] supports the host to turn off FUA.
> > >
> > > About FUA, roughly deal with the following two parts:
> > > 1) FUA(Forced Unit Access):
> > > - The REQ_FUA flag can be OR ed into the r/w flags of a bio submitted from the
> > > filesystem and will make sure that I/O completion for this request is only
> > > signaled after the data has been committed to non-volatile storage.
> > >
> > > 2) In emmc, FUA is represented as Reliable write. code show as below:
> > > static void mmc_blk_data_prep(struct mmc_queue *mq, struct mmc_queue_req *mqrq,
> > > int recovery_mode, bool *do_rel_wr_p, bool *do_data_tag_p)
> > > {
> > > ...
> > > /*
> > > * Reliable writes are used to implement Forced Unit Access and
> > > * are supported only on MMCs.
> > > */
> > > do_rel_wr = (req->cmd_flags & REQ_FUA) &&
> > > rq_data_dir(req) == WRITE &&
> > > (md->flags & MMC_BLK_REL_WR);
> > > ...
> > > }
> > >
> > > Patch structure
> > > ===============
> > > patch#1: for block
> > > patch#2: for sdhci-sprd
> > >
> > > Tests
> > > =====
> > > Ran 'AndroBench' to evaluate the performance:
> >
> > It would be good to have more details e.g.
> > What file system? What block size? What journal size?
> > What file size? What record size?
> >
>
> What file system?
> F2FS
> What block size?
> Sequential: 32768KB, Random: 4KB
> What file size?
> 64MB
>
> > > 1. fua_disable = 1
> > > /sys/block/mmcblk0/queue # cat fua 0
> > > I tested 5 times for each case and output a average speed.
> > >
> > > 1) Sequential read:
> > > Speed: 266.8MiB/s, 265.1MiB/s, 262.9MiB/s, 268.7MiB/s, 265.2MiB/s
> > > Average speed: 265.74MiB/s
> > >
> > > 2) Random read:
> > > Speed: 98.75MiB/s, 98.7MiB/s, 98.5MiB/s, 99.4MiB/s, 98.7MiB/s
> > > Average speed: 98.81MiB/s
> > >
> > > 3) Sequential write:
> > > Speed: 199.94MiB/s, 199.1MiB/s, 205.5MiB/s, 206.5MiB/s, 191.5MiB/s
> > > Average speed: 200.5MiB/s
> > >
> > > 4) Random write:
> > > Speed: 68.6MiB/s, 71.8MiB/s, 77.1MiB/s, 64.8MiB/s, 69.3MiB/s
> > > Average speed: 70.32MiB/s
> > >
> > > 2. fua_disable = 0 (default 0)
> > > /sys/block/mmcblk0/queue # cat fua 1
> > > I tested 5 times for each case and output a average speed.
> > >
> > > 1) Sequential read:
> > > Speed: 259.3MiB/s, 258.8MiB/s, 258.2MiB/s, 259.5MiB/s, 253.5MiB/s
> > > Average speed: 257.86MiB/s
> > >
> > > 2) Random read:
> > > Speed: 98.9MiB/s, 101MiB/s, 101MiB/s, 99MiB/s, 101.1MiB/s
> > > Average speed: 100.2MiB/s
> > >
> > > 3) Sequential write:
> > > Speed: 153.7MiB/s, 146.2MiB/s, 151.2MiB/s, 148.8MiB/s, 147.5MiB/s
> > > Average speed: 149.48MiB/s
> > >
> > > 4) Random write:
> > > Speed: 12.9MiB/s, 12.3MiB/s, 12.6MiB/s, 12.8MiB/s, 12.8MiB/s
> > > Average speed: 12.68MiB/s
> >
> > Is every write being sync'ed of just sync at the end?
> >
>
> /*
> * Reliable writes are used to implement Forced Unit Access and
> * are supported only on MMCs.
> */
> do_rel_wr = (req->cmd_flags & REQ_FUA) &&
> rq_data_dir(req) == WRITE &&
> (md->flags & MMC_BLK_REL_WR);
>
> A Reliable Write access shall force the data to be written to the
> nonvolatile storage。
> It will consume more time.
My apologies for side-stepping the discussion.
Yes, REQ_FUA is per definition a write and a flush of the
write-back-cache to non-volatile storage. So, this is indeed the
correct behaviour, even if it consumes more time to complete the
operation.
>
> > >
> > > According to the above data, disable FUA (fua_disable = 1) improves the
> > > performance:
> > > 1)Sequential read improved by 3%.
> > > 2)Random read were down 1%.
> >
> > FUA should not affect reads. If it is, you may want to investigate how.
> >
> > > 3)Sequential write improved by 34%.
> > > 4)Random write improved by 454%.
> > > Therefore, it is recommended to support the host to control FUA.
> > >
> > > Reference
> > > =========
> > > [1] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/block/writeback_cache_control.rst
> > > [2] Embedded Multi-Media Card (e•MMC) Electrical Standard (5.1)''
> >
> > You do not seem to have considered data integrity.
> >
> > Regular disks are assumed to provide atomic sector writes. That is, a sector has either the old data or the new data, but not some corrupt mixture.
> >
> > mmc does not have that assumption, which is presumably why Reliable Write has been used instead. Although that idea appears to have been thrown away for devices with no cache by commit 08ebf903af57 ("mmc: core: Fixup support for writeback-cache for eMMC and SD").
> >
> > File systems can use FUA to mark a successful journal flush. Whether or not getting a torn sector at that point will corrupt the file system recovery is presumably file system specific, and maybe specific to file system options e.g. the use of checksums.
> >
> > It may well be that a file system can survive a torn sector at that point, or that user space would prefer to take the risk in order to get better performance. In either of those cases, it is not really a decision for the host controller driver.
> >
>
> Considering the data integrity, we did a random power-down test, and
> the experimental results were good.
>
> FUA can only reduce data loss under abnormal conditions, but cannot
> prevent data loss under abnormal conditions.
>
> I think there should be a balance between FUA and NO FUA, but
> filesystems seem to favor FUA.
>
> FUA brings a drop in random write performance. If enough tests are
> done, NO FUA is acceptable.
Testing this isn't entirely easy. It requires you to hook up
electrical switches to allow you to automate the powering on/off of
the platform(s). Then at each cycle, really make sure to stress test
the data integrity of the flash memory. Is that what the tests did -
or can you elaborate a bit on what was really tested?
In any case, the performance impact boils down to how each eMMC/SD
card internally manages reliable writes vs regular writes. Some
vendors may treat them very similarly, while others do not.
That said, trying to disable REQ_FUA from an mmc host driver is the
wrong approach, as also pointed out by Adrian above. These types of
decisions belong solely in the mmc core layer.
Instead of what the $subject series proposes, I would rather suggest
we discuss (and test) whether it could make sense to disable REQ_FUA -
*if* the eMMC/SD card supports a write-back-cache (REQ_OP_FLUSH) too.
Hence, the mmc core could then announce only REQ_OP_FLUSH.
Kind regards
Uffe
[...]
> >
> > Considering the data integrity, we did a random power-down test, and
> > the experimental results were good.
> >
> > FUA can only reduce data loss under abnormal conditions, but cannot
> > prevent data loss under abnormal conditions.
> >
> > I think there should be a balance between FUA and NO FUA, but
> > filesystems seem to favor FUA.
> >
> > FUA brings a drop in random write performance. If enough tests are
> > done, NO FUA is acceptable.
>
> Testing this isn't entirely easy. It requires you to hook up
> electrical switches to allow you to automate the powering on/off of
> the platform(s). Then at each cycle, really make sure to stress test
> the data integrity of the flash memory. Is that what the tests did -
> or can you elaborate a bit on what was really tested?
>
> In any case, the performance impact boils down to how each eMMC/SD
> card internally manages reliable writes vs regular writes. Some
> vendors may treat them very similarly, while others do not.
>
> That said, trying to disable REQ_FUA from an mmc host driver is the
> wrong approach, as also pointed out by Adrian above. These types of
> decisions belong solely in the mmc core layer.
>
> Instead of what the $subject series proposes, I would rather suggest
> we discuss (and test) whether it could make sense to disable REQ_FUA -
> *if* the eMMC/SD card supports a write-back-cache (REQ_OP_FLUSH) too.
> Hence, the mmc core could then announce only REQ_OP_FLUSH.
>
Below is a simple patch that does the above. We may not want to enable
this for *all* eMMC/SD cards, but it works fine for testing and to
continue the discussions here.
From: Ulf Hansson <[email protected]>
Date: Fri, 11 Nov 2022 12:48:02 +0100
Subject: [PATCH] mmc: core: Disable REQ_FUA if the card supports an internal
cache
!!!! This is not for merge, but only for test and discussions!!!
It has been reported that REQ_FUA can be costly on some eMMC devices. A
potential option that could mitigate this problem, is to rely solely on
REQ_OP_FLUSH instead, but that requires the eMMC/SD to support an internal
cache. This is an attempt to try this out to see how it behaves.
Signed-off-by: Ulf Hansson <[email protected]>
---
drivers/mmc/core/block.c | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
index db6d8a099910..197e9f6cdaad 100644
--- a/drivers/mmc/core/block.c
+++ b/drivers/mmc/core/block.c
@@ -2494,15 +2494,15 @@ static struct mmc_blk_data *mmc_blk_alloc_req(struct mmc_card *card,
md->flags |= MMC_BLK_CMD23;
}
- if (md->flags & MMC_BLK_CMD23 &&
- ((card->ext_csd.rel_param & EXT_CSD_WR_REL_PARAM_EN) ||
- card->ext_csd.rel_sectors)) {
+ if (mmc_cache_enabled(card->host)) {
+ cache_enabled = true;
+ } else if (md->flags & MMC_BLK_CMD23 &&
+ (card->ext_csd.rel_param & EXT_CSD_WR_REL_PARAM_EN ||
+ card->ext_csd.rel_sectors)) {
md->flags |= MMC_BLK_REL_WR;
fua_enabled = true;
cache_enabled = true;
}
- if (mmc_cache_enabled(card->host))
- cache_enabled = true;
blk_queue_write_cache(md->queue.queue, cache_enabled, fua_enabled);
--
2.34.1
Kind regards
Uffe
On Fri, Nov 18, 2022 at 6:12 PM Adrian Hunter <[email protected]> wrote:
>
> On 11/11/22 09:58, Wenchao Chen wrote:
> > Hi Hunter
> > Thank you for your review!
> > I'm sorry to reply you so late because I've been too busy lately.
> >
> > On Fri, Oct 21, 2022 at 11:50 PM Adrian Hunter <[email protected]> wrote:
> >>
> >> On 21/10/22 10:30, Wenchao Chen wrote:
> >>> From: Wenchao Chen <[email protected]>
> >>>
> >>> Summary
> >>> =======
> >>> These patches[1] supports the host to turn off FUA.
> >>>
> >>> About FUA, roughly deal with the following two parts:
> >>> 1) FUA(Forced Unit Access):
> >>> - The REQ_FUA flag can be OR ed into the r/w flags of a bio submitted from the
> >>> filesystem and will make sure that I/O completion for this request is only
> >>> signaled after the data has been committed to non-volatile storage.
> >>>
> >>> 2) In emmc, FUA is represented as Reliable write. code show as below:
> >>> static void mmc_blk_data_prep(struct mmc_queue *mq, struct mmc_queue_req *mqrq,
> >>> int recovery_mode, bool *do_rel_wr_p, bool *do_data_tag_p)
> >>> {
> >>> ...
> >>> /*
> >>> * Reliable writes are used to implement Forced Unit Access and
> >>> * are supported only on MMCs.
> >>> */
> >>> do_rel_wr = (req->cmd_flags & REQ_FUA) &&
> >>> rq_data_dir(req) == WRITE &&
> >>> (md->flags & MMC_BLK_REL_WR);
> >>> ...
> >>> }
> >>>
> >>> Patch structure
> >>> ===============
> >>> patch#1: for block
> >>> patch#2: for sdhci-sprd
> >>>
> >>> Tests
> >>> =====
> >>> Ran 'AndroBench' to evaluate the performance:
> >>
> >> It would be good to have more details e.g.
> >> What file system? What block size? What journal size?
> >> What file size? What record size?
> >>
> >
> > What file system?
> > F2FS
> > What block size?
> > Sequential: 32768KB, Random: 4KB
> > What file size?
> > 64MB
> >
> >>> 1. fua_disable = 1
> >>> /sys/block/mmcblk0/queue # cat fua 0
> >>> I tested 5 times for each case and output a average speed.
> >>>
> >>> 1) Sequential read:
> >>> Speed: 266.8MiB/s, 265.1MiB/s, 262.9MiB/s, 268.7MiB/s, 265.2MiB/s
> >>> Average speed: 265.74MiB/s
> >>>
> >>> 2) Random read:
> >>> Speed: 98.75MiB/s, 98.7MiB/s, 98.5MiB/s, 99.4MiB/s, 98.7MiB/s
> >>> Average speed: 98.81MiB/s
> >>>
> >>> 3) Sequential write:
> >>> Speed: 199.94MiB/s, 199.1MiB/s, 205.5MiB/s, 206.5MiB/s, 191.5MiB/s
> >>> Average speed: 200.5MiB/s
> >>>
> >>> 4) Random write:
> >>> Speed: 68.6MiB/s, 71.8MiB/s, 77.1MiB/s, 64.8MiB/s, 69.3MiB/s
> >>> Average speed: 70.32MiB/s
> >>>
> >>> 2. fua_disable = 0 (default 0)
> >>> /sys/block/mmcblk0/queue # cat fua 1
> >>> I tested 5 times for each case and output a average speed.
> >>>
> >>> 1) Sequential read:
> >>> Speed: 259.3MiB/s, 258.8MiB/s, 258.2MiB/s, 259.5MiB/s, 253.5MiB/s
> >>> Average speed: 257.86MiB/s
> >>>
> >>> 2) Random read:
> >>> Speed: 98.9MiB/s, 101MiB/s, 101MiB/s, 99MiB/s, 101.1MiB/s
> >>> Average speed: 100.2MiB/s
> >>>
> >>> 3) Sequential write:
> >>> Speed: 153.7MiB/s, 146.2MiB/s, 151.2MiB/s, 148.8MiB/s, 147.5MiB/s
> >>> Average speed: 149.48MiB/s
> >>>
> >>> 4) Random write:
> >>> Speed: 12.9MiB/s, 12.3MiB/s, 12.6MiB/s, 12.8MiB/s, 12.8MiB/s
> >>> Average speed: 12.68MiB/s
> >>
> >> Is every write being sync'ed of just sync at the end?
> >>
> >
> > /*
> > * Reliable writes are used to implement Forced Unit Access and
> > * are supported only on MMCs.
> > */
> > do_rel_wr = (req->cmd_flags & REQ_FUA) &&
> > rq_data_dir(req) == WRITE &&
> > (md->flags & MMC_BLK_REL_WR);
> >
> > A Reliable Write access shall force the data to be written to the
> > nonvolatile storage。
> > It will consume more time.
>
> Reliable write is slow because it guarantees not to tear the write.
> The issue is torn writes, not just FUA.
>
If you'd like, could you explain more about how Reliable Write guarantees that writes are not torn?
> >
> >>>
> >>> According to the above data, disable FUA (fua_disable = 1) improves the
> >>> performance:
> >>> 1)Sequential read improved by 3%.
> >>> 2)Random read were down 1%.
> >>
> >> FUA should not affect reads. If it is, you may want to investigate how.
> >>
> >>> 3)Sequential write improved by 34%.
> >>> 4)Random write improved by 454%.
> >>> Therefore, it is recommended to support the host to control FUA.
> >>>
> >>> Reference
> >>> =========
> >>> [1] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/block/writeback_cache_control.rst
> >>> [2] Embedded Multi-Media Card (e•MMC) Electrical Standard (5.1)''
> >>
> >> You do not seem to have considered data integrity.
> >>
> >> Regular disks are assumed to provide atomic sector writes. That is, a sector has either the old data or the new data, but not some corrupt mixture.
> >>
> >> mmc does not have that assumption, which is presumably why Reliable Write has been used instead. Although that idea appears to have been thrown away for devices with no cache by commit 08ebf903af57 ("mmc: core: Fixup support for writeback-cache for eMMC and SD").
> >>
> >> File systems can use FUA to mark a successful journal flush. Whether or not getting a torn sector at that point will corrupt the file system recovery is presumably file system specific, and maybe specific to file system options e.g. the use of checksums.
> >>
> >> It may well be that a file system can survive a torn sector at that point, or that user space would prefer to take the risk in order to get better performance. In either of those cases, it is not really a decision for the host controller driver.
> >>
> >
> > Considering the data integrity, we did a random power-down test, and
> > the experimental results were good.
> >
> > FUA can only reduce data loss under abnormal conditions; it cannot
> > prevent such loss entirely.
> >
> > I think there should be a balance between FUA and NO FUA, but
> > filesystems seem to favor FUA.
> >
> > FUA causes a drop in random write performance. If enough testing is
> > done, running without FUA may be acceptable.
> >
> > I found a discussion about FUA:
> > https://lore.kernel.org/linux-f2fs-devel/[email protected]/
> >
> > UFS reference:
> > https://lore.kernel.org/linux-scsi/[email protected]/
> >
>
> You really need buy-in from more people, especially file system
> developers. I suggest you try F2FS people to start with.
> Please be clear though: Reliable Write protects against torn
> writes. If enough stakeholders agree that file systems can
> handle the torn writes anyway, then we could presumably drop
> using Reliable Write for FUA.
>
> >>>
> >>> Wenchao Chen (2):
> >>> mmc: block: Support Host to control FUA
> >>> mmc: sdhci-sprd: enable fua_disable for SPRDSDHCI
> >>>
> >>> drivers/mmc/core/block.c | 3 ++-
> >>> drivers/mmc/host/sdhci-sprd.c | 2 ++
> >>> include/linux/mmc/host.h | 3 +++
> >>> 3 files changed, 7 insertions(+), 1 deletion(-)
> >>>
> >>
>
On 11/11/22 09:58, Wenchao Chen wrote:
> Hi Hunter
> Thank you for your review!
> I'm sorry for replying so late; I've been very busy lately.
>
> On Fri, Oct 21, 2022 at 11:50 PM Adrian Hunter <[email protected]> wrote:
>>
>> On 21/10/22 10:30, Wenchao Chen wrote:
>>> From: Wenchao Chen <[email protected]>
>>>
>>> Summary
>>> =======
>>> These patches[1] allow the host to turn off FUA.
>>>
>>> FUA roughly involves the following two parts:
>>> 1) FUA (Forced Unit Access):
>>> - The REQ_FUA flag can be ORed into the r/w flags of a bio submitted from the
>>> filesystem and will make sure that I/O completion for this request is only
>>> signaled after the data has been committed to non-volatile storage.
>>>
>>> 2) In eMMC, FUA is implemented as a Reliable Write, as shown in the code below:
>>> static void mmc_blk_data_prep(struct mmc_queue *mq, struct mmc_queue_req *mqrq,
>>> int recovery_mode, bool *do_rel_wr_p, bool *do_data_tag_p)
>>> {
>>> ...
>>> /*
>>> * Reliable writes are used to implement Forced Unit Access and
>>> * are supported only on MMCs.
>>> */
>>> do_rel_wr = (req->cmd_flags & REQ_FUA) &&
>>> rq_data_dir(req) == WRITE &&
>>> (md->flags & MMC_BLK_REL_WR);
>>> ...
>>> }
>>>
>>> Patch structure
>>> ===============
>>> patch#1: for block
>>> patch#2: for sdhci-sprd
>>>
>>> Tests
>>> =====
>>> Ran 'AndroBench' to evaluate the performance:
>>
>> It would be good to have more details e.g.
>> What file system? What block size? What journal size?
>> What file size? What record size?
>>
>
> What file system?
> F2FS
> What block size?
> Sequential: 32768KB, Random: 4KB
> What file size?
> 64MB
>
>>> 1. fua_disable = 1
>>> /sys/block/mmcblk0/queue # cat fua
>>> 0
>>> I tested each case 5 times and report the average speed.
>>>
>>> 1) Sequential read:
>>> Speed: 266.8MiB/s, 265.1MiB/s, 262.9MiB/s, 268.7MiB/s, 265.2MiB/s
>>> Average speed: 265.74MiB/s
>>>
>>> 2) Random read:
>>> Speed: 98.75MiB/s, 98.7MiB/s, 98.5MiB/s, 99.4MiB/s, 98.7MiB/s
>>> Average speed: 98.81MiB/s
>>>
>>> 3) Sequential write:
>>> Speed: 199.94MiB/s, 199.1MiB/s, 205.5MiB/s, 206.5MiB/s, 191.5MiB/s
>>> Average speed: 200.5MiB/s
>>>
>>> 4) Random write:
>>> Speed: 68.6MiB/s, 71.8MiB/s, 77.1MiB/s, 64.8MiB/s, 69.3MiB/s
>>> Average speed: 70.32MiB/s
>>>
>>> 2. fua_disable = 0 (default 0)
>>> /sys/block/mmcblk0/queue # cat fua
>>> 1
>>> I tested each case 5 times and report the average speed.
>>>
>>> 1) Sequential read:
>>> Speed: 259.3MiB/s, 258.8MiB/s, 258.2MiB/s, 259.5MiB/s, 253.5MiB/s
>>> Average speed: 257.86MiB/s
>>>
>>> 2) Random read:
>>> Speed: 98.9MiB/s, 101MiB/s, 101MiB/s, 99MiB/s, 101.1MiB/s
>>> Average speed: 100.2MiB/s
>>>
>>> 3) Sequential write:
>>> Speed: 153.7MiB/s, 146.2MiB/s, 151.2MiB/s, 148.8MiB/s, 147.5MiB/s
>>> Average speed: 149.48MiB/s
>>>
>>> 4) Random write:
>>> Speed: 12.9MiB/s, 12.3MiB/s, 12.6MiB/s, 12.8MiB/s, 12.8MiB/s
>>> Average speed: 12.68MiB/s
>>
>> Is every write being synced, or is there just a sync at the end?
>>
>
> /*
> * Reliable writes are used to implement Forced Unit Access and
> * are supported only on MMCs.
> */
> do_rel_wr = (req->cmd_flags & REQ_FUA) &&
> rq_data_dir(req) == WRITE &&
> (md->flags & MMC_BLK_REL_WR);
>
> A Reliable Write access shall force the data to be written to the
> non-volatile storage. It therefore consumes more time.
Reliable write is slow because it guarantees not to tear the write.
The issue is torn writes, not just FUA.
>
>>>
>>> According to the above data, disabling FUA (fua_disable = 1) improves
>>> performance:
>>> 1) Sequential read improved by 3%.
>>> 2) Random read was down 1%.
>>
>> FUA should not affect reads. If it is, you may want to investigate how.
>>
>>> 3) Sequential write improved by 34%.
>>> 4) Random write improved by 454%.
>>> Therefore, it is recommended to allow the host to control FUA.
>>>
>>> Reference
>>> =========
>>> [1] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/block/writeback_cache_control.rst
>>> [2] Embedded Multi-Media Card (e•MMC) Electrical Standard (5.1)
>>
>> You do not seem to have considered data integrity.
>>
>> Regular disks are assumed to provide atomic sector writes. That is, a sector has either the old data or the new data, but not some corrupt mixture.
>>
>> mmc does not have that assumption, which is presumably why Reliable Write has been used instead. Although that idea appears to have been thrown away for devices with no cache by commit 08ebf903af57 ("mmc: core: Fixup support for writeback-cache for eMMC and SD").
>>
>> File systems can use FUA to mark a successful journal flush. Whether or not getting a torn sector at that point will corrupt the file system recovery is presumably file system specific, and maybe specific to file system options e.g. the use of checksums.
>>
>> It may well be that a file system can survive a torn sector at that point, or that user space would prefer to take the risk in order to get better performance. In either of those cases, it is not really a decision for the host controller driver.
>>
>
> Considering the data integrity, we did a random power-down test, and
> the experimental results were good.
>
> FUA can only reduce data loss under abnormal conditions; it cannot
> prevent such loss entirely.
>
> I think there should be a balance between FUA and NO FUA, but
> filesystems seem to favor FUA.
>
> FUA causes a drop in random write performance. If enough testing is
> done, running without FUA may be acceptable.
>
> I found a discussion about FUA:
> https://lore.kernel.org/linux-f2fs-devel/[email protected]/
>
> UFS reference:
> https://lore.kernel.org/linux-scsi/[email protected]/
>
You really need buy-in from more people, especially file system
developers. I suggest you try F2FS people to start with.
Please be clear though: Reliable Write protects against torn
writes. If enough stakeholders agree that file systems can
handle the torn writes anyway, then we could presumably drop
using Reliable Write for FUA.
>>>
>>> Wenchao Chen (2):
>>> mmc: block: Support Host to control FUA
>>> mmc: sdhci-sprd: enable fua_disable for SPRDSDHCI
>>>
>>> drivers/mmc/core/block.c | 3 ++-
>>> drivers/mmc/host/sdhci-sprd.c | 2 ++
>>> include/linux/mmc/host.h | 3 +++
>>> 3 files changed, 7 insertions(+), 1 deletion(-)
>>>
>>
On 18/11/22 12:54, Wenchao Chen wrote:
> On Fri, Nov 18, 2022 at 6:12 PM Adrian Hunter <[email protected]> wrote:
>>
>> On 11/11/22 09:58, Wenchao Chen wrote:
>>> Hi Hunter
>>> Thank you for your review!
>>> I'm sorry for replying so late; I've been very busy lately.
>>>
>>> On Fri, Oct 21, 2022 at 11:50 PM Adrian Hunter <[email protected]> wrote:
>>>>
>>>> On 21/10/22 10:30, Wenchao Chen wrote:
>>>>> From: Wenchao Chen <[email protected]>
>>>>>
>>>>> Summary
>>>>> =======
>>>>> These patches[1] allow the host to turn off FUA.
>>>>>
>>>>> FUA roughly involves the following two parts:
>>>>> 1) FUA (Forced Unit Access):
>>>>> - The REQ_FUA flag can be ORed into the r/w flags of a bio submitted from the
>>>>> filesystem and will make sure that I/O completion for this request is only
>>>>> signaled after the data has been committed to non-volatile storage.
>>>>>
>>>>> 2) In eMMC, FUA is implemented as a Reliable Write, as shown in the code below:
>>>>> static void mmc_blk_data_prep(struct mmc_queue *mq, struct mmc_queue_req *mqrq,
>>>>> int recovery_mode, bool *do_rel_wr_p, bool *do_data_tag_p)
>>>>> {
>>>>> ...
>>>>> /*
>>>>> * Reliable writes are used to implement Forced Unit Access and
>>>>> * are supported only on MMCs.
>>>>> */
>>>>> do_rel_wr = (req->cmd_flags & REQ_FUA) &&
>>>>> rq_data_dir(req) == WRITE &&
>>>>> (md->flags & MMC_BLK_REL_WR);
>>>>> ...
>>>>> }
>>>>>
>>>>> Patch structure
>>>>> ===============
>>>>> patch#1: for block
>>>>> patch#2: for sdhci-sprd
>>>>>
>>>>> Tests
>>>>> =====
>>>>> Ran 'AndroBench' to evaluate the performance:
>>>>
>>>> It would be good to have more details e.g.
>>>> What file system? What block size? What journal size?
>>>> What file size? What record size?
>>>>
>>>
>>> What file system?
>>> F2FS
>>> What block size?
>>> Sequential: 32768KB, Random: 4KB
>>> What file size?
>>> 64MB
>>>
>>>>> 1. fua_disable = 1
>>>>> /sys/block/mmcblk0/queue # cat fua
>>>>> 0
>>>>> I tested each case 5 times and report the average speed.
>>>>>
>>>>> 1) Sequential read:
>>>>> Speed: 266.8MiB/s, 265.1MiB/s, 262.9MiB/s, 268.7MiB/s, 265.2MiB/s
>>>>> Average speed: 265.74MiB/s
>>>>>
>>>>> 2) Random read:
>>>>> Speed: 98.75MiB/s, 98.7MiB/s, 98.5MiB/s, 99.4MiB/s, 98.7MiB/s
>>>>> Average speed: 98.81MiB/s
>>>>>
>>>>> 3) Sequential write:
>>>>> Speed: 199.94MiB/s, 199.1MiB/s, 205.5MiB/s, 206.5MiB/s, 191.5MiB/s
>>>>> Average speed: 200.5MiB/s
>>>>>
>>>>> 4) Random write:
>>>>> Speed: 68.6MiB/s, 71.8MiB/s, 77.1MiB/s, 64.8MiB/s, 69.3MiB/s
>>>>> Average speed: 70.32MiB/s
>>>>>
>>>>> 2. fua_disable = 0 (default 0)
>>>>> /sys/block/mmcblk0/queue # cat fua
>>>>> 1
>>>>> I tested each case 5 times and report the average speed.
>>>>>
>>>>> 1) Sequential read:
>>>>> Speed: 259.3MiB/s, 258.8MiB/s, 258.2MiB/s, 259.5MiB/s, 253.5MiB/s
>>>>> Average speed: 257.86MiB/s
>>>>>
>>>>> 2) Random read:
>>>>> Speed: 98.9MiB/s, 101MiB/s, 101MiB/s, 99MiB/s, 101.1MiB/s
>>>>> Average speed: 100.2MiB/s
>>>>>
>>>>> 3) Sequential write:
>>>>> Speed: 153.7MiB/s, 146.2MiB/s, 151.2MiB/s, 148.8MiB/s, 147.5MiB/s
>>>>> Average speed: 149.48MiB/s
>>>>>
>>>>> 4) Random write:
>>>>> Speed: 12.9MiB/s, 12.3MiB/s, 12.6MiB/s, 12.8MiB/s, 12.8MiB/s
>>>>> Average speed: 12.68MiB/s
>>>>
>>>> Is every write being synced, or is there just a sync at the end?
>>>>
>>>
>>> /*
>>> * Reliable writes are used to implement Forced Unit Access and
>>> * are supported only on MMCs.
>>> */
>>> do_rel_wr = (req->cmd_flags & REQ_FUA) &&
>>> rq_data_dir(req) == WRITE &&
>>> (md->flags & MMC_BLK_REL_WR);
>>>
>>> A Reliable Write access shall force the data to be written to the
>>> non-volatile storage. It therefore consumes more time.
>>
>> Reliable write is slow because it guarantees not to tear the write.
>> The issue is torn writes, not just FUA.
>>
>
> If you'd like, could you introduce Reliable write that doesn't tear writes?
Not following you. Reliable Write doesn't tear writes, that is why it is used.
>
>>>
>>>>>
>>>>> According to the above data, disabling FUA (fua_disable = 1) improves
>>>>> performance:
>>>>> 1) Sequential read improved by 3%.
>>>>> 2) Random read was down 1%.
>>>>
>>>> FUA should not affect reads. If it is, you may want to investigate how.
>>>>
>>>>> 3) Sequential write improved by 34%.
>>>>> 4) Random write improved by 454%.
>>>>> Therefore, it is recommended to allow the host to control FUA.
>>>>>
>>>>> Reference
>>>>> =========
>>>>> [1] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/block/writeback_cache_control.rst
>>>>> [2] Embedded Multi-Media Card (e•MMC) Electrical Standard (5.1)
>>>>
>>>> You do not seem to have considered data integrity.
>>>>
>>>> Regular disks are assumed to provide atomic sector writes. That is, a sector has either the old data or the new data, but not some corrupt mixture.
>>>>
>>>> mmc does not have that assumption, which is presumably why Reliable Write has been used instead. Although that idea appears to have been thrown away for devices with no cache by commit 08ebf903af57 ("mmc: core: Fixup support for writeback-cache for eMMC and SD").
>>>>
>>>> File systems can use FUA to mark a successful journal flush. Whether or not getting a torn sector at that point will corrupt the file system recovery is presumably file system specific, and maybe specific to file system options e.g. the use of checksums.
>>>>
>>>> It may well be that a file system can survive a torn sector at that point, or that user space would prefer to take the risk in order to get better performance. In either of those cases, it is not really a decision for the host controller driver.
>>>>
>>>
>>> Considering the data integrity, we did a random power-down test, and
>>> the experimental results were good.
>>>
>>> FUA can only reduce data loss under abnormal conditions; it cannot
>>> prevent such loss entirely.
>>>
>>> I think there should be a balance between FUA and NO FUA, but
>>> filesystems seem to favor FUA.
>>>
>>> FUA causes a drop in random write performance. If enough testing is
>>> done, running without FUA may be acceptable.
>>>
>>> I found a discussion about FUA:
>>> https://lore.kernel.org/linux-f2fs-devel/[email protected]/
>>>
>>> UFS reference:
>>> https://lore.kernel.org/linux-scsi/[email protected]/
>>>
>>
>> You really need buy-in from more people, especially file system
>> developers. I suggest you try F2FS people to start with.
>> Please be clear though: Reliable Write protects against torn
>> writes. If enough stakeholders agree that file systems can
>> handle the torn writes anyway, then we could presumably drop
>> using Reliable Write for FUA.
>>
>>>>>
>>>>> Wenchao Chen (2):
>>>>> mmc: block: Support Host to control FUA
>>>>> mmc: sdhci-sprd: enable fua_disable for SPRDSDHCI
>>>>>
>>>>> drivers/mmc/core/block.c | 3 ++-
>>>>> drivers/mmc/host/sdhci-sprd.c | 2 ++
>>>>> include/linux/mmc/host.h | 3 +++
>>>>> 3 files changed, 7 insertions(+), 1 deletion(-)
>>>>>
>>>>
>>
On Fri, 11 Nov 2022 at 13:04, Ulf Hansson <[email protected]> wrote:
>
> [...]
>
> > >
> > > Considering the data integrity, we did a random power-down test, and
> > > the experimental results were good.
> > >
> > > FUA can only reduce data loss under abnormal conditions; it cannot
> > > prevent such loss entirely.
> > >
> > > I think there should be a balance between FUA and NO FUA, but
> > > filesystems seem to favor FUA.
> > >
> > > FUA causes a drop in random write performance. If enough testing is
> > > done, running without FUA may be acceptable.
> >
> > Testing this isn't entirely easy. It requires you to hook up
> > electrical switches to allow you to automate the powering on/off of
> > the platform(s). Then at each cycle, really make sure to stress test
> > the data integrity of the flash memory. Is that what the tests did -
> > or can you elaborate a bit on what was really tested?
> >
> > In any case, the performance impact boils down to how each eMMC/SD
> > card internally manages reliable writes vs regular writes. Some
> > vendors may treat them very similarly, while others do not.
> >
> > That said, trying to disable REQ_FUA from an mmc host driver is the
> > wrong approach, as also pointed out by Adrian above. These types of
> > decisions belong solely in the mmc core layer.
> >
> > Instead of what the $subject series proposes, I would rather suggest
> > we discuss (and test) whether it could make sense to disable REQ_FUA -
> > *if* the eMMC/SD card supports a write-back-cache (REQ_OP_FLUSH) too.
> > Hence, the mmc core could then announce only REQ_OP_FLUSH.
> >
>
> Below is a simple patch that does the above. We may not want to enable
> this for *all* eMMC/SD cards, but it works fine for testing and to
> continue the discussions here.
>
>
> From: Ulf Hansson <[email protected]>
> Date: Fri, 11 Nov 2022 12:48:02 +0100
> Subject: [PATCH] mmc: core: Disable REQ_FUA if the card supports an internal
> cache
>
> !!!! This is not for merge, but only for test and discussions!!!
>
> It has been reported that REQ_FUA can be costly on some eMMC devices. A
> potential option that could mitigate this problem, is to rely solely on
> REQ_OP_FLUSH instead, but that requires the eMMC/SD to support an internal
> cache. This is an attempt to try this out to see how it behaves.
>
> Signed-off-by: Ulf Hansson <[email protected]>
> ---
> drivers/mmc/core/block.c | 10 +++++-----
> 1 file changed, 5 insertions(+), 5 deletions(-)
>
> diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
> index db6d8a099910..197e9f6cdaad 100644
> --- a/drivers/mmc/core/block.c
> +++ b/drivers/mmc/core/block.c
> @@ -2494,15 +2494,15 @@ static struct mmc_blk_data *mmc_blk_alloc_req(struct mmc_card *card,
> md->flags |= MMC_BLK_CMD23;
> }
>
> - if (md->flags & MMC_BLK_CMD23 &&
> - ((card->ext_csd.rel_param & EXT_CSD_WR_REL_PARAM_EN) ||
> - card->ext_csd.rel_sectors)) {
> + if (mmc_cache_enabled(card->host)) {
> + cache_enabled = true;
> + } else if (md->flags & MMC_BLK_CMD23 &&
> + (card->ext_csd.rel_param & EXT_CSD_WR_REL_PARAM_EN ||
> + card->ext_csd.rel_sectors)) {
> md->flags |= MMC_BLK_REL_WR;
> fua_enabled = true;
> cache_enabled = true;
> }
> - if (mmc_cache_enabled(card->host))
> - cache_enabled = true;
>
> blk_queue_write_cache(md->queue.queue, cache_enabled, fua_enabled);
>
> --
> 2.34.1
Wenchao,
Did you manage to try the above patch to see if that could improve the
situation?
Kind regards
Uffe
On Thu, Feb 9, 2023 at 10:51 PM Ulf Hansson <[email protected]> wrote:
>
> On Fri, 11 Nov 2022 at 13:04, Ulf Hansson <[email protected]> wrote:
> >
> > [...]
> >
> > > >
> > > > Considering the data integrity, we did a random power-down test, and
> > > > the experimental results were good.
> > > >
> > > > FUA can only reduce data loss under abnormal conditions; it cannot
> > > > prevent such loss entirely.
> > > >
> > > > I think there should be a balance between FUA and NO FUA, but
> > > > filesystems seem to favor FUA.
> > > >
> > > > FUA causes a drop in random write performance. If enough testing is
> > > > done, running without FUA may be acceptable.
> > >
> > > Testing this isn't entirely easy. It requires you to hook up
> > > electrical switches to allow you to automate the powering on/off of
> > > the platform(s). Then at each cycle, really make sure to stress test
> > > the data integrity of the flash memory. Is that what the tests did -
> > > or can you elaborate a bit on what was really tested?
> > >
> > > In any case, the performance impact boils down to how each eMMC/SD
> > > card internally manages reliable writes vs regular writes. Some
> > > vendors may treat them very similarly, while others do not.
> > >
> > > That said, trying to disable REQ_FUA from an mmc host driver is the
> > > wrong approach, as also pointed out by Adrian above. These types of
> > > decisions belong solely in the mmc core layer.
> > >
> > > Instead of what the $subject series proposes, I would rather suggest
> > > we discuss (and test) whether it could make sense to disable REQ_FUA -
> > > *if* the eMMC/SD card supports a write-back-cache (REQ_OP_FLUSH) too.
> > > Hence, the mmc core could then announce only REQ_OP_FLUSH.
> > >
> >
> > Below is a simple patch that does the above. We may not want to enable
> > this for *all* eMMC/SD cards, but it works fine for testing and to
> > continue the discussions here.
> >
> >
> > From: Ulf Hansson <[email protected]>
> > Date: Fri, 11 Nov 2022 12:48:02 +0100
> > Subject: [PATCH] mmc: core: Disable REQ_FUA if the card supports an internal
> > cache
> >
> > !!!! This is not for merge, but only for test and discussions!!!
> >
> > It has been reported that REQ_FUA can be costly on some eMMC devices. A
> > potential option that could mitigate this problem, is to rely solely on
> > REQ_OP_FLUSH instead, but that requires the eMMC/SD to support an internal
> > cache. This is an attempt to try this out to see how it behaves.
> >
> > Signed-off-by: Ulf Hansson <[email protected]>
> > ---
> > drivers/mmc/core/block.c | 10 +++++-----
> > 1 file changed, 5 insertions(+), 5 deletions(-)
> >
> > diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
> > index db6d8a099910..197e9f6cdaad 100644
> > --- a/drivers/mmc/core/block.c
> > +++ b/drivers/mmc/core/block.c
> > @@ -2494,15 +2494,15 @@ static struct mmc_blk_data *mmc_blk_alloc_req(struct mmc_card *card,
> > md->flags |= MMC_BLK_CMD23;
> > }
> >
> > - if (md->flags & MMC_BLK_CMD23 &&
> > - ((card->ext_csd.rel_param & EXT_CSD_WR_REL_PARAM_EN) ||
> > - card->ext_csd.rel_sectors)) {
> > + if (mmc_cache_enabled(card->host)) {
> > + cache_enabled = true;
> > + } else if (md->flags & MMC_BLK_CMD23 &&
> > + (card->ext_csd.rel_param & EXT_CSD_WR_REL_PARAM_EN ||
> > + card->ext_csd.rel_sectors)) {
> > md->flags |= MMC_BLK_REL_WR;
> > fua_enabled = true;
> > cache_enabled = true;
> > }
> > - if (mmc_cache_enabled(card->host))
> > - cache_enabled = true;
> >
> > blk_queue_write_cache(md->queue.queue, cache_enabled, fua_enabled);
> >
> > --
> > 2.34.1
>
> Wenchao,
>
> Did you manage to try the above patch to see if that could improve the
> situation?
>
Hi Uffe,
Yes, it can solve my problem. Thank you very much.
> Kind regards
> Uffe
On Mon, 13 Feb 2023 at 09:21, Wenchao Chen <[email protected]> wrote:
>
> On Thu, Feb 9, 2023 at 10:51 PM Ulf Hansson <[email protected]> wrote:
> >
> > On Fri, 11 Nov 2022 at 13:04, Ulf Hansson <[email protected]> wrote:
> > >
> > > [...]
> > >
> > > > >
> > > > > Considering the data integrity, we did a random power-down test, and
> > > > > the experimental results were good.
> > > > >
> > > > > FUA can only reduce data loss under abnormal conditions; it cannot
> > > > > prevent such loss entirely.
> > > > >
> > > > > I think there should be a balance between FUA and NO FUA, but
> > > > > filesystems seem to favor FUA.
> > > > >
> > > > > FUA causes a drop in random write performance. If enough testing is
> > > > > done, running without FUA may be acceptable.
> > > >
> > > > Testing this isn't entirely easy. It requires you to hook up
> > > > electrical switches to allow you to automate the powering on/off of
> > > > the platform(s). Then at each cycle, really make sure to stress test
> > > > the data integrity of the flash memory. Is that what the tests did -
> > > > or can you elaborate a bit on what was really tested?
> > > >
> > > > In any case, the performance impact boils down to how each eMMC/SD
> > > > card internally manages reliable writes vs regular writes. Some
> > > > vendors may treat them very similarly, while others do not.
> > > >
> > > > That said, trying to disable REQ_FUA from an mmc host driver is the
> > > > wrong approach, as also pointed out by Adrian above. These types of
> > > > decisions belong solely in the mmc core layer.
> > > >
> > > > Instead of what the $subject series proposes, I would rather suggest
> > > > we discuss (and test) whether it could make sense to disable REQ_FUA -
> > > > *if* the eMMC/SD card supports a write-back-cache (REQ_OP_FLUSH) too.
> > > > Hence, the mmc core could then announce only REQ_OP_FLUSH.
> > > >
> > >
> > > Below is a simple patch that does the above. We may not want to enable
> > > this for *all* eMMC/SD cards, but it works fine for testing and to
> > > continue the discussions here.
> > >
> > >
> > > From: Ulf Hansson <[email protected]>
> > > Date: Fri, 11 Nov 2022 12:48:02 +0100
> > > Subject: [PATCH] mmc: core: Disable REQ_FUA if the card supports an internal
> > > cache
> > >
> > > !!!! This is not for merge, but only for test and discussions!!!
> > >
> > > It has been reported that REQ_FUA can be costly on some eMMC devices. A
> > > potential option that could mitigate this problem, is to rely solely on
> > > REQ_OP_FLUSH instead, but that requires the eMMC/SD to support an internal
> > > cache. This is an attempt to try this out to see how it behaves.
> > >
> > > Signed-off-by: Ulf Hansson <[email protected]>
> > > ---
> > > drivers/mmc/core/block.c | 10 +++++-----
> > > 1 file changed, 5 insertions(+), 5 deletions(-)
> > >
> > > diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
> > > index db6d8a099910..197e9f6cdaad 100644
> > > --- a/drivers/mmc/core/block.c
> > > +++ b/drivers/mmc/core/block.c
> > > @@ -2494,15 +2494,15 @@ static struct mmc_blk_data *mmc_blk_alloc_req(struct mmc_card *card,
> > > md->flags |= MMC_BLK_CMD23;
> > > }
> > >
> > > - if (md->flags & MMC_BLK_CMD23 &&
> > > - ((card->ext_csd.rel_param & EXT_CSD_WR_REL_PARAM_EN) ||
> > > - card->ext_csd.rel_sectors)) {
> > > + if (mmc_cache_enabled(card->host)) {
> > > + cache_enabled = true;
> > > + } else if (md->flags & MMC_BLK_CMD23 &&
> > > + (card->ext_csd.rel_param & EXT_CSD_WR_REL_PARAM_EN ||
> > > + card->ext_csd.rel_sectors)) {
> > > md->flags |= MMC_BLK_REL_WR;
> > > fua_enabled = true;
> > > cache_enabled = true;
> > > }
> > > - if (mmc_cache_enabled(card->host))
> > > - cache_enabled = true;
> > >
> > > blk_queue_write_cache(md->queue.queue, cache_enabled, fua_enabled);
> > >
> > > --
> > > 2.34.1
> >
> > Wenchao,
> >
> > Did you manage to try the above patch to see if that could improve the
> > situation?
> >
>
> Hi Uffe,
> Yes, it can solve my problem. Thank you very much.
Okay, that's very interesting news. I will prepare a formal patch and
make a new submission soon. Let's continue the discussion then.
Kind regards
Uffe