Requesting a retune before switching to the RPMB partition has been
observed to cause CRC errors on the RPMB reads (-EILSEQ).

Since RPMB reads cannot be retried, the clients are directly
affected by these errors.

This commit disables the retune request prior to the RPMB partition
switch, while still allowing the pause interface to request a retune
before the pause for other use cases.

This was verified with the sdhci-of-arasan driver (ZynqMP) configured
for HS200 using two separate eMMC cards (DG4064 and 064GB2). In both
cases, the error was easy to reproduce, triggering every few tens of
reads.
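
In other words, with this change the RPMB switch path pauses re-tuning
without first requesting a retune, while a (hypothetical) caller that
still wants the previous retune-then-pause behaviour would pass true:

	mmc_retune_pause(card->host, false);	/* RPMB partition switch path */
	mmc_retune_pause(host, true);		/* keep the old retune-then-pause behaviour */
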
Signed-off-by: Jorge Ramirez-Ortiz <[email protected]>
Acked-by: Avri Altman <[email protected]>
---
drivers/mmc/core/block.c | 2 +-
drivers/mmc/core/host.c | 7 ++++---
drivers/mmc/core/host.h | 2 +-
3 files changed, 6 insertions(+), 5 deletions(-)
diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
index f9a5cffa64b1..1d69078ad9b2 100644
--- a/drivers/mmc/core/block.c
+++ b/drivers/mmc/core/block.c
@@ -859,7 +859,7 @@ static int mmc_blk_part_switch_pre(struct mmc_card *card,
if (ret)
return ret;
}
- mmc_retune_pause(card->host);
+ mmc_retune_pause(card->host, false);
}
return ret;
diff --git a/drivers/mmc/core/host.c b/drivers/mmc/core/host.c
index 096093f7be00..a9b95aaa2235 100644
--- a/drivers/mmc/core/host.c
+++ b/drivers/mmc/core/host.c
@@ -119,13 +119,14 @@ void mmc_retune_enable(struct mmc_host *host)
/*
* Pause re-tuning for a small set of operations. The pause begins after the
- * next command and after first doing re-tuning.
+ * next command and, if retune is set, after first doing re-tuning.
*/
-void mmc_retune_pause(struct mmc_host *host)
+void mmc_retune_pause(struct mmc_host *host, bool retune)
{
if (!host->retune_paused) {
host->retune_paused = 1;
- mmc_retune_needed(host);
+ if (retune)
+ mmc_retune_needed(host);
mmc_retune_hold(host);
}
}
diff --git a/drivers/mmc/core/host.h b/drivers/mmc/core/host.h
index 48c4952512a5..321776b52270 100644
--- a/drivers/mmc/core/host.h
+++ b/drivers/mmc/core/host.h
@@ -18,7 +18,7 @@ void mmc_retune_disable(struct mmc_host *host);
void mmc_retune_hold(struct mmc_host *host);
void mmc_retune_release(struct mmc_host *host);
int mmc_retune(struct mmc_host *host);
-void mmc_retune_pause(struct mmc_host *host);
+void mmc_retune_pause(struct mmc_host *host, bool retune);
void mmc_retune_unpause(struct mmc_host *host);
static inline void mmc_retune_clear(struct mmc_host *host)
--
2.34.1
On 04/12/2023 17:22, Jorge Ramirez-Ortiz wrote:
> Requesting a retune before switching to the RPMB partition has been
> observed to cause CRC errors on the RPMB reads (-EILSEQ).
>
> Since RPMB reads can not be retried, the clients would be directly
> affected by the errors.
>
> This commit disables the request prior to RPMB switching while allowing
> the pause interface to still request a retune before the pause for other
> use cases.
>
> This was verified with the sdhci-of-arasan driver (ZynqMP) configured
> for HS200 using two separate eMMC cards (DG4064 and 064GB2). In both
> cases, the error was easy to reproduce triggering every few tenths of
> reads.
>
> Signed-off-by: Jorge Ramirez-Ortiz <[email protected]>
> Acked-by: Avri Altman <[email protected]>
Is the difference between this mail and the previous one just Avri's Ack?
No need to resend for every tag you receive.
>
> ---
> drivers/mmc/core/block.c | 2 +-
> drivers/mmc/core/host.c | 7 ++++---
> drivers/mmc/core/host.h | 2 +-
> 3 files changed, 6 insertions(+), 5 deletions(-)
>
> diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
> index f9a5cffa64b1..1d69078ad9b2 100644
> --- a/drivers/mmc/core/block.c
> +++ b/drivers/mmc/core/block.c
> @@ -859,7 +859,7 @@ static int mmc_blk_part_switch_pre(struct mmc_card *card,
> if (ret)
> return ret;
> }
> - mmc_retune_pause(card->host);
> + mmc_retune_pause(card->host, false);
> }
>
> return ret;
> diff --git a/drivers/mmc/core/host.c b/drivers/mmc/core/host.c
> index 096093f7be00..a9b95aaa2235 100644
> --- a/drivers/mmc/core/host.c
> +++ b/drivers/mmc/core/host.c
> @@ -119,13 +119,14 @@ void mmc_retune_enable(struct mmc_host *host)
>
> /*
> * Pause re-tuning for a small set of operations. The pause begins after the
> - * next command and after first doing re-tuning.
> + * next command and, if retune is set, after first doing re-tuning.
> */
> -void mmc_retune_pause(struct mmc_host *host)
> +void mmc_retune_pause(struct mmc_host *host, bool retune)
> {
> if (!host->retune_paused) {
> host->retune_paused = 1;
> - mmc_retune_needed(host);
> + if (retune)
> + mmc_retune_needed(host);
> mmc_retune_hold(host);
> }
> }
> diff --git a/drivers/mmc/core/host.h b/drivers/mmc/core/host.h
> index 48c4952512a5..321776b52270 100644
> --- a/drivers/mmc/core/host.h
> +++ b/drivers/mmc/core/host.h
> @@ -18,7 +18,7 @@ void mmc_retune_disable(struct mmc_host *host);
> void mmc_retune_hold(struct mmc_host *host);
> void mmc_retune_release(struct mmc_host *host);
> int mmc_retune(struct mmc_host *host);
> -void mmc_retune_pause(struct mmc_host *host);
> +void mmc_retune_pause(struct mmc_host *host, bool retune);
> void mmc_retune_unpause(struct mmc_host *host);
>
> static inline void mmc_retune_clear(struct mmc_host *host)
> --
> 2.34.1
On 04/12/23 17:52:24, Christian Loehle wrote:
> On 04/12/2023 17:22, Jorge Ramirez-Ortiz wrote:
> > Requesting a retune before switching to the RPMB partition has been
> > observed to cause CRC errors on the RPMB reads (-EILSEQ).
> >
> > Since RPMB reads can not be retried, the clients would be directly
> > affected by the errors.
> >
> > This commit disables the request prior to RPMB switching while allowing
> > the pause interface to still request a retune before the pause for other
> > use cases.
> >
> > This was verified with the sdhci-of-arasan driver (ZynqMP) configured
> > for HS200 using two separate eMMC cards (DG4064 and 064GB2). In both
> > cases, the error was easy to reproduce triggering every few tenths of
> > reads.
> >
> > Signed-off-by: Jorge Ramirez-Ortiz <[email protected]>
> > Acked-by: Avri Altman <[email protected]>
>
> Is the difference between this mail and the previous one just Avri's Ack?
> No need to resend for every tag you receive.
Oops, every project has its own little idiosyncrasies and sometimes I get
confused (u-boot, optee, ssl, tf-a...). Yes, sorry about that, no differences.
On 06/12/23 07:02:43, Avri Altman wrote:
> >
> > On 4/12/23 17:01, Jorge Ramirez-Ortiz wrote:
> > > Requesting a retune before switching to the RPMB partition has been
> > > observed to cause CRC errors on the RPMB reads (-EILSEQ).
> >
> > There are still 2 concerns:
> > 1) We don't really know the root cause. Have you determined if here are
> > CRC errors in the main partition also?
right, and I don't disagree with that.
As a test I created a 4GB file from /dev/random which I then copied
several times (dd if= ....)
root@uz3cg-dwg-sec:/sys/kernel/debug/mmc0# cat err_stats
# Command Timeout Occurred: 0
# Command CRC Errors Occurred: 0
# Data Timeout Occurred: 0
# Data CRC Errors Occurred: 0
# Auto-Cmd Error Occurred: 0
# ADMA Error Occurred: 0
# Tuning Error Occurred: 0
# CMDQ RED Errors: 0
# CMDQ GCE Errors: 0
# CMDQ ICCE Errors: 0
# Request Timedout: 0
# CMDQ Request Timedout: 0
# ICE Config Errors: 0
# Controller Timedout errors: 0
# Unexpected IRQ errors: 0
However, as soon as I access RPMB and it fails (it takes just a few tries), I see:
I/TC: RPMB: Using generated key
[ 86.902118] sdhci-arasan ff160000.mmc: __mmc_blk_ioctl_cmd: data error -84
E/TC:? 0
E/TC:? 0 TA panicked with code 0xffff0000
E/LD: Status of TA 22250a54-0bf1-48fe-8002-7b20f1c9c9b1
E/LD: arch: aarch64
E/LD: region 0: va 0xc0004000 pa 0x7e200000 size 0x002000 flags rw-s (ldelf)
E/LD: region 1: va 0xc0006000 pa 0x7e202000 size 0x008000 flags r-xs (ldelf)
E/LD: region 2: va 0xc000e000 pa 0x7e20a000 size 0x001000 flags rw-s (ldelf)
E/LD: region 3: va 0xc000f000 pa 0x7e20b000 size 0x004000 flags rw-s (ldelf)
E/LD: region 4: va 0xc0013000 pa 0x7e20f000 size 0x001000 flags r--s
E/LD: region 5: va 0xc0014000 pa 0x7e22c000 size 0x005000 flags rw-s (stack)
E/LD: region 6: va 0xc0019000 pa 0x818ea9ba8 size 0x002000 flags rw-- (param)
E/LD: region 7: va 0xc001b000 pa 0x818e97ba8 size 0x001000 flags rw-- (param)
E/LD: region 8: va 0xc004f000 pa 0x00001000 size 0x014000 flags r-xs [0]
E/LD: region 9: va 0xc0063000 pa 0x00015000 size 0x008000 flags rw-s [0]
E/LD: [0] 22250a54-0bf1-48fe-8002-7b20f1c9c9b1 @ 0xc004f000
E/LD: Call stack:
E/LD: 0xc0051a14
E/LD: 0xc004f31c
E/LD: 0xc0052d40
E/LD: 0xc004f624
root@uz3cg-dwg-sec:/var/rootdirs/home/fio# cat /sys/kernel/debug/mmc0/err_stats
# Command Timeout Occurred: 0
# Command CRC Errors Occurred: 0
# Data Timeout Occurred: 0
# Data CRC Errors Occurred: 1
# Auto-Cmd Error Occurred: 0
# ADMA Error Occurred: 0
# Tuning Error Occurred: 0
# CMDQ RED Errors: 0
# CMDQ GCE Errors: 0
# CMDQ ICCE Errors: 0
# Request Timedout: 0
# CMDQ Request Timedout: 0
# ICE Config Errors: 0
# Controller Timedout errors: 0
# Unexpected IRQ errors: 0
> > 2) Forcing this on everyone
> >
> > The original idea was that because re-tuning cannot be done in RPMB, the
> > need to re-rune in RPMB could be avoided by always re-tuning before
> > switching to RPMB and then switching straight back. IIRC re-tuning should
> > guarantee at least 4MB more I/O without issue.
> Performance is hardly an issue in the context of RPMB access -
> For most cases it’s a single frame.
Yes, the security use case typically stores hashes, variables
(bootcount, upgrade_available, versions, that sort of thing) and
certificates in RPMB.
Since you mentioned it, I am seeing that tuning before switching to RPMB
has an impact on performance. As a practical test, just reading a 6 byte
variable incurs a 50ms penalty in kernel space due to the need to
retune 5 times. Not great, since the request comes from a Trusted
Application via OP-TEE through the supplicant, meaning this TEE thread
(they are statically allocated, CFG_NUM_THREADS of them) will be reserved
for quite a bit of time.
Roughly:
TA --> OP-TEE (core) --> TEE-supplicant --> Kernel (>50ms) --> OP-TEE --> TA
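
As a rough back-of-envelope (assuming the retunes dominate the kernel-side
cost):

	5 retunes x ~10 ms per retune ~= 50 ms per variable read

on top of the TA <-> OP-TEE <-> tee-supplicant round trips.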
Adrian, I couldn't find the original performance justification for
enabling this feature globally. At which point do you think it becomes
beneficial to retune before accessing RPMB?
>
> Thanks,
> Avri
>
> >
> > The alternative to dropping re-tuning in this case could be to add a retry loop
> > for MMC_DRV_OP_IOCTL_RPMB if the error is -EILSEQ
For the security use case I mentioned above - even if it didn't result in
the occasional CRC errors - I honestly see little value in it: dropping the
feature - or controlling it via CFG_ - seems more logical to me. Would you
agree?
> >
> >
> > >
> > > Since RPMB reads can not be retried, the clients would be directly
> > > affected by the errors.
> > >
> > > This commit disables the request prior to RPMB switching while
> > > allowing the pause interface to still request a retune before the
> > > pause for other use cases.
> > >
> > > This was verified with the sdhci-of-arasan driver (ZynqMP) configured
> > > for HS200 using two separate eMMC cards (DG4064 and 064GB2). In both
> > > cases, the error was easy to reproduce triggering every few tenths of
> > > reads.
> > >
> > > Signed-off-by: Jorge Ramirez-Ortiz <[email protected]>
> > > ---
> > > drivers/mmc/core/block.c | 2 +-
> > > drivers/mmc/core/host.c | 7 ++++---
> > > drivers/mmc/core/host.h | 2 +-
> > > 3 files changed, 6 insertions(+), 5 deletions(-)
> > >
> > > diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c index
> > > f9a5cffa64b1..1d69078ad9b2 100644
> > > --- a/drivers/mmc/core/block.c
> > > +++ b/drivers/mmc/core/block.c
> > > @@ -859,7 +859,7 @@ static int mmc_blk_part_switch_pre(struct
> > mmc_card *card,
> > > if (ret)
> > > return ret;
> > > }
> > > - mmc_retune_pause(card->host);
> > > + mmc_retune_pause(card->host, false);
> > > }
> > >
> > > return ret;
> > > diff --git a/drivers/mmc/core/host.c b/drivers/mmc/core/host.c index
> > > 096093f7be00..a9b95aaa2235 100644
> > > --- a/drivers/mmc/core/host.c
> > > +++ b/drivers/mmc/core/host.c
> > > @@ -119,13 +119,14 @@ void mmc_retune_enable(struct mmc_host
> > *host)
> > >
> > > /*
> > > * Pause re-tuning for a small set of operations. The pause begins
> > > after the
> > > - * next command and after first doing re-tuning.
> > > + * next command and, if retune is set, after first doing re-tuning.
> > > */
> > > -void mmc_retune_pause(struct mmc_host *host)
> > > +void mmc_retune_pause(struct mmc_host *host, bool retune)
> > > {
> > > if (!host->retune_paused) {
> > > host->retune_paused = 1;
> > > - mmc_retune_needed(host);
> > > + if (retune)
> > > + mmc_retune_needed(host);
> >
> > Better to just drop mmc_retune_needed(host);
> >
> > > mmc_retune_hold(host);
> >
> > There is still a small chance that re-tuning is needed anyway in which case it
> > will still be done.
> >
> > > }
> > > }
> > > diff --git a/drivers/mmc/core/host.h b/drivers/mmc/core/host.h index
> > > 48c4952512a5..321776b52270 100644
> > > --- a/drivers/mmc/core/host.h
> > > +++ b/drivers/mmc/core/host.h
> > > @@ -18,7 +18,7 @@ void mmc_retune_disable(struct mmc_host *host);
> > > void mmc_retune_hold(struct mmc_host *host); void
> > > mmc_retune_release(struct mmc_host *host); int mmc_retune(struct
> > > mmc_host *host); -void mmc_retune_pause(struct mmc_host *host);
> > > +void mmc_retune_pause(struct mmc_host *host, bool retune);
> > > void mmc_retune_unpause(struct mmc_host *host);
> > >
> > > static inline void mmc_retune_clear(struct mmc_host *host)
>
On 06/12/23 11:00:47, Jorge Ramirez-Ortiz, Foundries wrote:
> On 06/12/23 07:02:43, Avri Altman wrote:
> > >
> > > On 4/12/23 17:01, Jorge Ramirez-Ortiz wrote:
> > > > Requesting a retune before switching to the RPMB partition has been
> > > > observed to cause CRC errors on the RPMB reads (-EILSEQ).
> > >
> > > There are still 2 concerns:
> > > 1) We don't really know the root cause. Have you determined if here are
> > > CRC errors in the main partition also?
>
> right, and I don't disagree with that.
>
> As a test I created a 4GB file from /dev/random which I then copied
> several times (dd if= ....)
>
> root@uz3cg-dwg-sec:/sys/kernel/debug/mmc0# cat err_stats
> # Command Timeout Occurred: 0
> # Command CRC Errors Occurred: 0
> # Data Timeout Occurred: 0
> # Data CRC Errors Occurred: 0
> # Auto-Cmd Error Occurred: 0
> # ADMA Error Occurred: 0
> # Tuning Error Occurred: 0
> # CMDQ RED Errors: 0
> # CMDQ GCE Errors: 0
> # CMDQ ICCE Errors: 0
> # Request Timedout: 0
> # CMDQ Request Timedout: 0
> # ICE Config Errors: 0
> # Controller Timedout errors: 0
> # Unexpected IRQ errors: 0
>
> However as soon as I access RPMB and fails (it takes just a few tries) I see:
>
> I/TC: RPMB: Using generated key
> [ 86.902118] sdhci-arasan ff160000.mmc: __mmc_blk_ioctl_cmd: data error -84
> E/TC:? 0
> E/TC:? 0 TA panicked with code 0xffff0000
> E/LD: Status of TA 22250a54-0bf1-48fe-8002-7b20f1c9c9b1
> E/LD: arch: aarch64
> E/LD: region 0: va 0xc0004000 pa 0x7e200000 size 0x002000 flags rw-s (ldelf)
> E/LD: region 1: va 0xc0006000 pa 0x7e202000 size 0x008000 flags r-xs (ldelf)
> E/LD: region 2: va 0xc000e000 pa 0x7e20a000 size 0x001000 flags rw-s (ldelf)
> E/LD: region 3: va 0xc000f000 pa 0x7e20b000 size 0x004000 flags rw-s (ldelf)
> E/LD: region 4: va 0xc0013000 pa 0x7e20f000 size 0x001000 flags r--s
> E/LD: region 5: va 0xc0014000 pa 0x7e22c000 size 0x005000 flags rw-s (stack)
> E/LD: region 6: va 0xc0019000 pa 0x818ea9ba8 size 0x002000 flags rw-- (param)
> E/LD: region 7: va 0xc001b000 pa 0x818e97ba8 size 0x001000 flags rw-- (param)
> E/LD: region 8: va 0xc004f000 pa 0x00001000 size 0x014000 flags r-xs [0]
> E/LD: region 9: va 0xc0063000 pa 0x00015000 size 0x008000 flags rw-s [0]
> E/LD: [0] 22250a54-0bf1-48fe-8002-7b20f1c9c9b1 @ 0xc004f000
> E/LD: Call stack:
> E/LD: 0xc0051a14
> E/LD: 0xc004f31c
> E/LD: 0xc0052d40
> E/LD: 0xc004f624
>
> root@uz3cg-dwg-sec:/var/rootdirs/home/fio# cat /sys/kernel/debug/mmc0/err_stats
> # Command Timeout Occurred: 0
> # Command CRC Errors Occurred: 0
> # Data Timeout Occurred: 0
> # Data CRC Errors Occurred: 1
> # Auto-Cmd Error Occurred: 0
> # ADMA Error Occurred: 0
> # Tuning Error Occurred: 0
> # CMDQ RED Errors: 0
> # CMDQ GCE Errors: 0
> # CMDQ ICCE Errors: 0
> # Request Timedout: 0
> # CMDQ Request Timedout: 0
> # ICE Config Errors: 0
> # Controller Timedout errors: 0
> # Unexpected IRQ errors: 0
>
> > > 2) Forcing this on everyone
> > >
> > > The original idea was that because re-tuning cannot be done in RPMB, the
> > > need to re-rune in RPMB could be avoided by always re-tuning before
> > > switching to RPMB and then switching straight back. IIRC re-tuning should
> > > guarantee at least 4MB more I/O without issue.
> > Performance is hardly an issue in the context of RPMB access -
> > For most cases it’s a single frame.
>
> Yes, the security use case typically stores hashes, variables
> (bootcount, upgrade_available, versions, that sort of thing) and
> certificates in RPMB.
>
> Since you mentioned, I am seeing that tuning before switching to RPMB
> has an impact on performance. As a practical test, just reading a 6 byte
> variable incurs in 50ms penalty in kernel space due to the need to
> retune 5 times. Not great since the request is coming from a Trusted
> Application via OP-TEE through the supplicant meaning this TEE thread
> (they are statically allocated CFG_NUM_THREADS) will be reserved for
> quite a bit of time.
>
> Roughly:
> TA --> OP-TEE (core) --> TEE-supplicant --> Kernel (>50ms) --> OP-TEE --> TA
To add more detail to the timing above: when using RPMB, OP-TEE stores
the secure filesystem on RPMB as well, so accessing one of the variables
stored in the filesystem consists of a number (~5) of individual RPMB
requests (each one forcing a retune, each retune taking around 10ms).
BTW, I also tried adding a delay between those consecutive retunes
(of up to 1 second), but the issue still persisted.
>
> Adrian, I couldn't find the original performance justification for
> enabling this feature globally. At which point do you think it becomes
> beneficial to retune before accessing RPMB?
How should we proceed with this patch then? Can it be merged as I
proposed? Should I rewrite it differently? I am not sure what the next step is.
TIA
Jorge
On 6/12/23 09:02, Avri Altman wrote:
>>
>> On 4/12/23 17:01, Jorge Ramirez-Ortiz wrote:
>>> Requesting a retune before switching to the RPMB partition has been
>>> observed to cause CRC errors on the RPMB reads (-EILSEQ).
>>
>> There are still 2 concerns:
>> 1) We don't really know the root cause. Have you determined if here are
>> CRC errors in the main partition also?
>> 2) Forcing this on everyone
>>
>> The original idea was that because re-tuning cannot be done in RPMB, the
>> need to re-rune in RPMB could be avoided by always re-tuning before
>> switching to RPMB and then switching straight back. IIRC re-tuning should
>> guarantee at least 4MB more I/O without issue.
> Performance is hardly an issue in the context of RPMB access -
> For most cases it’s a single frame.
Not sure why you bring up performance.
SDHCI spec says:
"To enable inserting the re-tuning procedure during data transfers,
the data length per read/write command shall be limited up to 4MB."
This implies re-tuning is needed at most every 4MB, so re-tuning
before the RPMB switch and then switching straight back means re-tuning
should never be needed while in RPMB.
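
A simplified sketch of that flow as I read the current code (CMDQ and
error handling omitted):

static int mmc_blk_part_switch_pre(struct mmc_card *card,
				   unsigned int part_type)
{
	if (part_type == EXT_CSD_PART_CONFIG_ACC_RPMB) {
		/* Request a retune now, then hold re-tuning while in RPMB */
		mmc_retune_pause(card->host);
	}
	return 0;
}

static int mmc_blk_part_switch_post(struct mmc_card *card,
				    unsigned int part_type)
{
	if (part_type == EXT_CSD_PART_CONFIG_ACC_RPMB) {
		/* Back from RPMB: allow re-tuning again */
		mmc_retune_unpause(card->host);
	}
	return 0;
}

So any pending retune is serviced just before the switch to RPMB, and
re-tuning is held off until the switch back.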
>
> Thanks,
> Avri
>
>>
>> The alternative to dropping re-tuning in this case could be to add a retry loop
>> for MMC_DRV_OP_IOCTL_RPMB if the error is -EILSEQ
>>
>>
>>>
>>> Since RPMB reads can not be retried, the clients would be directly
>>> affected by the errors.
>>>
>>> This commit disables the request prior to RPMB switching while
>>> allowing the pause interface to still request a retune before the
>>> pause for other use cases.
>>>
>>> This was verified with the sdhci-of-arasan driver (ZynqMP) configured
>>> for HS200 using two separate eMMC cards (DG4064 and 064GB2). In both
>>> cases, the error was easy to reproduce triggering every few tenths of
>>> reads.
>>>
>>> Signed-off-by: Jorge Ramirez-Ortiz <[email protected]>
>>> ---
>>> drivers/mmc/core/block.c | 2 +-
>>> drivers/mmc/core/host.c | 7 ++++---
>>> drivers/mmc/core/host.h | 2 +-
>>> 3 files changed, 6 insertions(+), 5 deletions(-)
>>>
>>> diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c index
>>> f9a5cffa64b1..1d69078ad9b2 100644
>>> --- a/drivers/mmc/core/block.c
>>> +++ b/drivers/mmc/core/block.c
>>> @@ -859,7 +859,7 @@ static int mmc_blk_part_switch_pre(struct
>> mmc_card *card,
>>> if (ret)
>>> return ret;
>>> }
>>> - mmc_retune_pause(card->host);
>>> + mmc_retune_pause(card->host, false);
>>> }
>>>
>>> return ret;
>>> diff --git a/drivers/mmc/core/host.c b/drivers/mmc/core/host.c index
>>> 096093f7be00..a9b95aaa2235 100644
>>> --- a/drivers/mmc/core/host.c
>>> +++ b/drivers/mmc/core/host.c
>>> @@ -119,13 +119,14 @@ void mmc_retune_enable(struct mmc_host
>> *host)
>>>
>>> /*
>>> * Pause re-tuning for a small set of operations. The pause begins
>>> after the
>>> - * next command and after first doing re-tuning.
>>> + * next command and, if retune is set, after first doing re-tuning.
>>> */
>>> -void mmc_retune_pause(struct mmc_host *host)
>>> +void mmc_retune_pause(struct mmc_host *host, bool retune)
>>> {
>>> if (!host->retune_paused) {
>>> host->retune_paused = 1;
>>> - mmc_retune_needed(host);
>>> + if (retune)
>>> + mmc_retune_needed(host);
>>
>> Better to just drop mmc_retune_needed(host);
>>
>>> mmc_retune_hold(host);
>>
>> There is still a small chance that re-tuning is needed anyway in which case it
>> will still be done.
>>
>>> }
>>> }
>>> diff --git a/drivers/mmc/core/host.h b/drivers/mmc/core/host.h index
>>> 48c4952512a5..321776b52270 100644
>>> --- a/drivers/mmc/core/host.h
>>> +++ b/drivers/mmc/core/host.h
>>> @@ -18,7 +18,7 @@ void mmc_retune_disable(struct mmc_host *host);
>>> void mmc_retune_hold(struct mmc_host *host); void
>>> mmc_retune_release(struct mmc_host *host); int mmc_retune(struct
>>> mmc_host *host); -void mmc_retune_pause(struct mmc_host *host);
>>> +void mmc_retune_pause(struct mmc_host *host, bool retune);
>>> void mmc_retune_unpause(struct mmc_host *host);
>>>
>>> static inline void mmc_retune_clear(struct mmc_host *host)
>
On 11/12/23 10:00, Jorge Ramirez-Ortiz, Foundries wrote:
> On 06/12/23 11:00:47, Jorge Ramirez-Ortiz, Foundries wrote:
>> On 06/12/23 07:02:43, Avri Altman wrote:
>>>>
>>>> On 4/12/23 17:01, Jorge Ramirez-Ortiz wrote:
>>>>> Requesting a retune before switching to the RPMB partition has been
>>>>> observed to cause CRC errors on the RPMB reads (-EILSEQ).
>>>>
>>>> There are still 2 concerns:
>>>> 1) We don't really know the root cause. Have you determined if here are
>>>> CRC errors in the main partition also?
>>
>> right, and I don't disagree with that.
>>
>> As a test I created a 4GB file from /dev/random which I then copied
>> several times (dd if= ....)
>>
>> root@uz3cg-dwg-sec:/sys/kernel/debug/mmc0# cat err_stats
>> # Command Timeout Occurred: 0
>> # Command CRC Errors Occurred: 0
>> # Data Timeout Occurred: 0
>> # Data CRC Errors Occurred: 0
>> # Auto-Cmd Error Occurred: 0
>> # ADMA Error Occurred: 0
>> # Tuning Error Occurred: 0
>> # CMDQ RED Errors: 0
>> # CMDQ GCE Errors: 0
>> # CMDQ ICCE Errors: 0
>> # Request Timedout: 0
>> # CMDQ Request Timedout: 0
>> # ICE Config Errors: 0
>> # Controller Timedout errors: 0
>> # Unexpected IRQ errors: 0
>>
>> However as soon as I access RPMB and fails (it takes just a few tries) I see:
>>
>> I/TC: RPMB: Using generated key
>> [ 86.902118] sdhci-arasan ff160000.mmc: __mmc_blk_ioctl_cmd: data error -84
>> E/TC:? 0
>> E/TC:? 0 TA panicked with code 0xffff0000
>> E/LD: Status of TA 22250a54-0bf1-48fe-8002-7b20f1c9c9b1
>> E/LD: arch: aarch64
>> E/LD: region 0: va 0xc0004000 pa 0x7e200000 size 0x002000 flags rw-s (ldelf)
>> E/LD: region 1: va 0xc0006000 pa 0x7e202000 size 0x008000 flags r-xs (ldelf)
>> E/LD: region 2: va 0xc000e000 pa 0x7e20a000 size 0x001000 flags rw-s (ldelf)
>> E/LD: region 3: va 0xc000f000 pa 0x7e20b000 size 0x004000 flags rw-s (ldelf)
>> E/LD: region 4: va 0xc0013000 pa 0x7e20f000 size 0x001000 flags r--s
>> E/LD: region 5: va 0xc0014000 pa 0x7e22c000 size 0x005000 flags rw-s (stack)
>> E/LD: region 6: va 0xc0019000 pa 0x818ea9ba8 size 0x002000 flags rw-- (param)
>> E/LD: region 7: va 0xc001b000 pa 0x818e97ba8 size 0x001000 flags rw-- (param)
>> E/LD: region 8: va 0xc004f000 pa 0x00001000 size 0x014000 flags r-xs [0]
>> E/LD: region 9: va 0xc0063000 pa 0x00015000 size 0x008000 flags rw-s [0]
>> E/LD: [0] 22250a54-0bf1-48fe-8002-7b20f1c9c9b1 @ 0xc004f000
>> E/LD: Call stack:
>> E/LD: 0xc0051a14
>> E/LD: 0xc004f31c
>> E/LD: 0xc0052d40
>> E/LD: 0xc004f624
>>
>> root@uz3cg-dwg-sec:/var/rootdirs/home/fio# cat /sys/kernel/debug/mmc0/err_stats
>> # Command Timeout Occurred: 0
>> # Command CRC Errors Occurred: 0
>> # Data Timeout Occurred: 0
>> # Data CRC Errors Occurred: 1
>> # Auto-Cmd Error Occurred: 0
>> # ADMA Error Occurred: 0
>> # Tuning Error Occurred: 0
>> # CMDQ RED Errors: 0
>> # CMDQ GCE Errors: 0
>> # CMDQ ICCE Errors: 0
>> # Request Timedout: 0
>> # CMDQ Request Timedout: 0
>> # ICE Config Errors: 0
>> # Controller Timedout errors: 0
>> # Unexpected IRQ errors: 0
>>
>>>> 2) Forcing this on everyone
>>>>
>>>> The original idea was that because re-tuning cannot be done in RPMB, the
>>>> need to re-rune in RPMB could be avoided by always re-tuning before
>>>> switching to RPMB and then switching straight back. IIRC re-tuning should
>>>> guarantee at least 4MB more I/O without issue.
>>> Performance is hardly an issue in the context of RPMB access -
>>> For most cases it’s a single frame.
>>
>> Yes, the security use case typically stores hashes, variables
>> (bootcount, upgrade_available, versions, that sort of thing) and
>> certificates in RPMB.
>>
>> Since you mentioned, I am seeing that tuning before switching to RPMB
>> has an impact on performance. As a practical test, just reading a 6 byte
>> variable incurs in 50ms penalty in kernel space due to the need to
>> retune 5 times. Not great since the request is coming from a Trusted
>> Application via OP-TEE through the supplicant meaning this TEE thread
>> (they are statically allocated CFG_NUM_THREADS) will be reserved for
>> quite a bit of time.
>>
>> Roughly:
>> TA --> OP-TEE (core) --> TEE-supplicant --> Kernel (>50ms) --> OP-TEE --> TA
>
> To add more detail to the timing above, when using RPMB, OP-TEE stores
> the secure filesystem on RPMB as well, so accessing one of the variables
> stored in the filesystem consists on a number (~5) of individual RPMB
> requests (each one forcing a retune, each retune taking around 10ms).
>
> BTW, I also tried delaying the timing between those consecutive retunes
> (up to 1 second), but the issue still persisted.
>
>>
>> Adrian, I couldn't find the original performance justification for
>> enabling this feature globally. At which point do you think it becomes
>> beneficial to retune before accessing RPMB?
>
> How should we proceed with this patch then? can it be merged as I
> proposed? should I rewrite it differently? not sure what is next
It would be good to try to determine if the error happens when the
switch command comes immediately after tuning. For example, add
a delay after tuning and see if that makes any difference. e.g.
diff --git a/drivers/mmc/host/sdhci.c b/drivers/mmc/host/sdhci.c
index c79f73459915..6b168659282a 100644
--- a/drivers/mmc/host/sdhci.c
+++ b/drivers/mmc/host/sdhci.c
@@ -2867,8 +2867,10 @@ int __sdhci_execute_tuning(struct sdhci_host *host, u32 opcode)
ctrl = sdhci_readw(host, SDHCI_HOST_CONTROL2);
if (!(ctrl & SDHCI_CTRL_EXEC_TUNING)) {
- if (ctrl & SDHCI_CTRL_TUNED_CLK)
+ if (ctrl & SDHCI_CTRL_TUNED_CLK) {
+ msleep(10);
return 0; /* Success! */
+ }
break;
}
On 11/12/23 12:25:19, Adrian Hunter wrote:
> On 11/12/23 10:00, Jorge Ramirez-Ortiz, Foundries wrote:
> > On 06/12/23 11:00:47, Jorge Ramirez-Ortiz, Foundries wrote:
> >> On 06/12/23 07:02:43, Avri Altman wrote:
> >>>>
> >>>> On 4/12/23 17:01, Jorge Ramirez-Ortiz wrote:
> >>>>> Requesting a retune before switching to the RPMB partition has been
> >>>>> observed to cause CRC errors on the RPMB reads (-EILSEQ).
> >>>>
> >>>> There are still 2 concerns:
> >>>> 1) We don't really know the root cause. Have you determined if here are
> >>>> CRC errors in the main partition also?
> >>
> >> right, and I don't disagree with that.
> >>
> >> As a test I created a 4GB file from /dev/random which I then copied
> >> several times (dd if= ....)
> >>
> >> root@uz3cg-dwg-sec:/sys/kernel/debug/mmc0# cat err_stats
> >> # Command Timeout Occurred: 0
> >> # Command CRC Errors Occurred: 0
> >> # Data Timeout Occurred: 0
> >> # Data CRC Errors Occurred: 0
> >> # Auto-Cmd Error Occurred: 0
> >> # ADMA Error Occurred: 0
> >> # Tuning Error Occurred: 0
> >> # CMDQ RED Errors: 0
> >> # CMDQ GCE Errors: 0
> >> # CMDQ ICCE Errors: 0
> >> # Request Timedout: 0
> >> # CMDQ Request Timedout: 0
> >> # ICE Config Errors: 0
> >> # Controller Timedout errors: 0
> >> # Unexpected IRQ errors: 0
> >>
> >> However as soon as I access RPMB and fails (it takes just a few tries) I see:
> >>
> >> I/TC: RPMB: Using generated key
> >> [ 86.902118] sdhci-arasan ff160000.mmc: __mmc_blk_ioctl_cmd: data error -84
> >> E/TC:? 0
> >> E/TC:? 0 TA panicked with code 0xffff0000
> >> E/LD: Status of TA 22250a54-0bf1-48fe-8002-7b20f1c9c9b1
> >> E/LD: arch: aarch64
> >> E/LD: region 0: va 0xc0004000 pa 0x7e200000 size 0x002000 flags rw-s (ldelf)
> >> E/LD: region 1: va 0xc0006000 pa 0x7e202000 size 0x008000 flags r-xs (ldelf)
> >> E/LD: region 2: va 0xc000e000 pa 0x7e20a000 size 0x001000 flags rw-s (ldelf)
> >> E/LD: region 3: va 0xc000f000 pa 0x7e20b000 size 0x004000 flags rw-s (ldelf)
> >> E/LD: region 4: va 0xc0013000 pa 0x7e20f000 size 0x001000 flags r--s
> >> E/LD: region 5: va 0xc0014000 pa 0x7e22c000 size 0x005000 flags rw-s (stack)
> >> E/LD: region 6: va 0xc0019000 pa 0x818ea9ba8 size 0x002000 flags rw-- (param)
> >> E/LD: region 7: va 0xc001b000 pa 0x818e97ba8 size 0x001000 flags rw-- (param)
> >> E/LD: region 8: va 0xc004f000 pa 0x00001000 size 0x014000 flags r-xs [0]
> >> E/LD: region 9: va 0xc0063000 pa 0x00015000 size 0x008000 flags rw-s [0]
> >> E/LD: [0] 22250a54-0bf1-48fe-8002-7b20f1c9c9b1 @ 0xc004f000
> >> E/LD: Call stack:
> >> E/LD: 0xc0051a14
> >> E/LD: 0xc004f31c
> >> E/LD: 0xc0052d40
> >> E/LD: 0xc004f624
> >>
> >> root@uz3cg-dwg-sec:/var/rootdirs/home/fio# cat /sys/kernel/debug/mmc0/err_stats
> >> # Command Timeout Occurred: 0
> >> # Command CRC Errors Occurred: 0
> >> # Data Timeout Occurred: 0
> >> # Data CRC Errors Occurred: 1
> >> # Auto-Cmd Error Occurred: 0
> >> # ADMA Error Occurred: 0
> >> # Tuning Error Occurred: 0
> >> # CMDQ RED Errors: 0
> >> # CMDQ GCE Errors: 0
> >> # CMDQ ICCE Errors: 0
> >> # Request Timedout: 0
> >> # CMDQ Request Timedout: 0
> >> # ICE Config Errors: 0
> >> # Controller Timedout errors: 0
> >> # Unexpected IRQ errors: 0
> >>
> >>>> 2) Forcing this on everyone
> >>>>
> >>>> The original idea was that because re-tuning cannot be done in RPMB, the
> >>>> need to re-rune in RPMB could be avoided by always re-tuning before
> >>>> switching to RPMB and then switching straight back. IIRC re-tuning should
> >>>> guarantee at least 4MB more I/O without issue.
> >>> Performance is hardly an issue in the context of RPMB access -
> >>> For most cases it’s a single frame.
> >>
> >> Yes, the security use case typically stores hashes, variables
> >> (bootcount, upgrade_available, versions, that sort of thing) and
> >> certificates in RPMB.
> >>
> >> Since you mentioned, I am seeing that tuning before switching to RPMB
> >> has an impact on performance. As a practical test, just reading a 6 byte
> >> variable incurs in 50ms penalty in kernel space due to the need to
> >> retune 5 times. Not great since the request is coming from a Trusted
> >> Application via OP-TEE through the supplicant meaning this TEE thread
> >> (they are statically allocated CFG_NUM_THREADS) will be reserved for
> >> quite a bit of time.
> >>
> >> Roughly:
> >> TA --> OP-TEE (core) --> TEE-supplicant --> Kernel (>50ms) --> OP-TEE --> TA
> >
> > To add more detail to the timing above, when using RPMB, OP-TEE stores
> > the secure filesystem on RPMB as well, so accessing one of the variables
> > stored in the filesystem consists on a number (~5) of individual RPMB
> > requests (each one forcing a retune, each retune taking around 10ms).
> >
> > BTW, I also tried delaying the timing between those consecutive retunes
> > (up to 1 second), but the issue still persisted.
> >
> >>
> >> Adrian, I couldn't find the original performance justification for
> >> enabling this feature globally. At which point do you think it becomes
> >> beneficial to retune before accessing RPMB?
> >
> > How should we proceed with this patch then? can it be merged as I
> > proposed? should I rewrite it differently? not sure what is next
>
> It would be good to try to determine if the error happens when the
> switch command comes immediately after tuning. For example, add
> a delay after tuning and see if that makes any difference. e.g.
>
> diff --git a/drivers/mmc/host/sdhci.c b/drivers/mmc/host/sdhci.c
> index c79f73459915..6b168659282a 100644
> --- a/drivers/mmc/host/sdhci.c
> +++ b/drivers/mmc/host/sdhci.c
> @@ -2867,8 +2867,10 @@ int __sdhci_execute_tuning(struct sdhci_host *host, u32 opcode)
>
> ctrl = sdhci_readw(host, SDHCI_HOST_CONTROL2);
> if (!(ctrl & SDHCI_CTRL_EXEC_TUNING)) {
> - if (ctrl & SDHCI_CTRL_TUNED_CLK)
> + if (ctrl & SDHCI_CTRL_TUNED_CLK) {
> + msleep(10);
> return 0; /* Success! */
> + }
> break;
> }
>
>
Thanks Adrian.
The issue still triggers (in this case on the last-but-one access to
retrieve the 6 byte variable).
--- a/drivers/mmc/host/sdhci.c
+++ b/drivers/mmc/host/sdhci.c
@@ -2781,6 +2781,7 @@ static int __sdhci_execute_tuning(struct sdhci_host *host, u32 opcode)
* Issue opcode repeatedly till Execute Tuning is set to 0 or the number
* of loops reaches tuning loop count.
*/
+ printk(KERN_ERR "tune starts[loop %d]\n", host->tuning_loop_count);
for (i = 0; i < host->tuning_loop_count; i++) {
u16 ctrl;
@@ -2799,8 +2800,11 @@ static int __sdhci_execute_tuning(struct sdhci_host *host, u32 opcode)
ctrl = sdhci_readw(host, SDHCI_HOST_CONTROL2);
if (!(ctrl & SDHCI_CTRL_EXEC_TUNING)) {
- if (ctrl & SDHCI_CTRL_TUNED_CLK)
- return 0; /* Success! */
+ if (ctrl & SDHCI_CTRL_TUNED_CLK) {
+ printk(KERN_ERR "tune success[%d]\n", i);
+ msleep(10);
+ return 0; /* Success! */
+ }
break;
}
[ 164.919090] tune starts[loop 40]
[ 164.923152] tune success[39]
[ 164.952106] tune starts[loop 40]
[ 164.956611] tune success[39]
[ 165.084402] tune starts[loop 40]
[ 165.089350] tune success[39]
[ 165.116491] tune starts[loop 40]
[ 165.120806] tune success[39]
[ 165.145016] sdhci-arasan ff160000.mmc: __mmc_blk_ioctl_cmd: data error -84
E/TC:? 0
E/TC:? 0 TA panicked with code 0xffff0000
E/LD: Status of TA 22250a54-0bf1-48fe-8002-7b20f1c9c9b1
E/LD: arch: aarch64
E/LD: region 0: va 0xc0004000 pa 0x7e200000 size 0x002000 flags rw-s (ldelf)
E/LD: region 1: va 0xc0006000 pa 0x7e202000 size 0x008000 flags r-xs (ldelf)
E/LD: region 2: va 0xc000e000 pa 0x7e20a000 size 0x001000 flags rw-s (ldelf)
E/LD: region 3: va 0xc000f000 pa 0x7e20b000 size 0x004000 flags rw-s (ldelf)
E/LD: region 4: va 0xc0013000 pa 0x7e20f000 size 0x001000 flags r--s
E/LD: region 5: va 0xc0014000 pa 0x7e22c000 size 0x005000 flags rw-s (stack)
E/LD: region 6: va 0xc0019000 pa 0x818ac38e8 size 0x001000 flags rw-- (param)
E/LD: region 7: va 0xc001a000 pa 0x818ac36e8 size 0x001000 flags rw-- (param)
E/LD: region 8: va 0xc0022000 pa 0x00001000 size 0x014000 flags r-xs [0]
E/LD: region 9: va 0xc0036000 pa 0x00015000 size 0x008000 flags rw-s [0]
E/LD: [0] 22250a54-0bf1-48fe-8002-7b20f1c9c9b1 @ 0xc0022000
E/LD: Call stack:
E/LD: 0xc0024e58
E/LD: 0xc0022388
E/LD: 0xc0025d40
E/LD: 0xc0022624
[ 166.119598] tune starts[loop 40]
[ 166.125700] tune success[39]
On 11/12/23 13:06, Jorge Ramirez-Ortiz, Foundries wrote:
> On 11/12/23 12:25:19, Adrian Hunter wrote:
>> On 11/12/23 10:00, Jorge Ramirez-Ortiz, Foundries wrote:
>>> On 06/12/23 11:00:47, Jorge Ramirez-Ortiz, Foundries wrote:
>>>> On 06/12/23 07:02:43, Avri Altman wrote:
>>>>>>
>>>>>> On 4/12/23 17:01, Jorge Ramirez-Ortiz wrote:
>>>>>>> Requesting a retune before switching to the RPMB partition has been
>>>>>>> observed to cause CRC errors on the RPMB reads (-EILSEQ).
>>>>>>
>>>>>> There are still 2 concerns:
>>>>>> 1) We don't really know the root cause. Have you determined if here are
>>>>>> CRC errors in the main partition also?
>>>>
>>>> right, and I don't disagree with that.
>>>>
>>>> As a test I created a 4GB file from /dev/random which I then copied
>>>> several times (dd if= ....)
>>>>
>>>> root@uz3cg-dwg-sec:/sys/kernel/debug/mmc0# cat err_stats
>>>> # Command Timeout Occurred: 0
>>>> # Command CRC Errors Occurred: 0
>>>> # Data Timeout Occurred: 0
>>>> # Data CRC Errors Occurred: 0
>>>> # Auto-Cmd Error Occurred: 0
>>>> # ADMA Error Occurred: 0
>>>> # Tuning Error Occurred: 0
>>>> # CMDQ RED Errors: 0
>>>> # CMDQ GCE Errors: 0
>>>> # CMDQ ICCE Errors: 0
>>>> # Request Timedout: 0
>>>> # CMDQ Request Timedout: 0
>>>> # ICE Config Errors: 0
>>>> # Controller Timedout errors: 0
>>>> # Unexpected IRQ errors: 0
>>>>
>>>> However as soon as I access RPMB and fails (it takes just a few tries) I see:
>>>>
>>>> I/TC: RPMB: Using generated key
>>>> [ 86.902118] sdhci-arasan ff160000.mmc: __mmc_blk_ioctl_cmd: data error -84
>>>> E/TC:? 0
>>>> E/TC:? 0 TA panicked with code 0xffff0000
>>>> E/LD: Status of TA 22250a54-0bf1-48fe-8002-7b20f1c9c9b1
>>>> E/LD: arch: aarch64
>>>> E/LD: region 0: va 0xc0004000 pa 0x7e200000 size 0x002000 flags rw-s (ldelf)
>>>> E/LD: region 1: va 0xc0006000 pa 0x7e202000 size 0x008000 flags r-xs (ldelf)
>>>> E/LD: region 2: va 0xc000e000 pa 0x7e20a000 size 0x001000 flags rw-s (ldelf)
>>>> E/LD: region 3: va 0xc000f000 pa 0x7e20b000 size 0x004000 flags rw-s (ldelf)
>>>> E/LD: region 4: va 0xc0013000 pa 0x7e20f000 size 0x001000 flags r--s
>>>> E/LD: region 5: va 0xc0014000 pa 0x7e22c000 size 0x005000 flags rw-s (stack)
>>>> E/LD: region 6: va 0xc0019000 pa 0x818ea9ba8 size 0x002000 flags rw-- (param)
>>>> E/LD: region 7: va 0xc001b000 pa 0x818e97ba8 size 0x001000 flags rw-- (param)
>>>> E/LD: region 8: va 0xc004f000 pa 0x00001000 size 0x014000 flags r-xs [0]
>>>> E/LD: region 9: va 0xc0063000 pa 0x00015000 size 0x008000 flags rw-s [0]
>>>> E/LD: [0] 22250a54-0bf1-48fe-8002-7b20f1c9c9b1 @ 0xc004f000
>>>> E/LD: Call stack:
>>>> E/LD: 0xc0051a14
>>>> E/LD: 0xc004f31c
>>>> E/LD: 0xc0052d40
>>>> E/LD: 0xc004f624
>>>>
>>>> root@uz3cg-dwg-sec:/var/rootdirs/home/fio# cat /sys/kernel/debug/mmc0/err_stats
>>>> # Command Timeout Occurred: 0
>>>> # Command CRC Errors Occurred: 0
>>>> # Data Timeout Occurred: 0
>>>> # Data CRC Errors Occurred: 1
>>>> # Auto-Cmd Error Occurred: 0
>>>> # ADMA Error Occurred: 0
>>>> # Tuning Error Occurred: 0
>>>> # CMDQ RED Errors: 0
>>>> # CMDQ GCE Errors: 0
>>>> # CMDQ ICCE Errors: 0
>>>> # Request Timedout: 0
>>>> # CMDQ Request Timedout: 0
>>>> # ICE Config Errors: 0
>>>> # Controller Timedout errors: 0
>>>> # Unexpected IRQ errors: 0
>>>>
>>>>>> 2) Forcing this on everyone
>>>>>>
>>>>>> The original idea was that because re-tuning cannot be done in RPMB, the
>>>>>> need to re-rune in RPMB could be avoided by always re-tuning before
>>>>>> switching to RPMB and then switching straight back. IIRC re-tuning should
>>>>>> guarantee at least 4MB more I/O without issue.
>>>>> Performance is hardly an issue in the context of RPMB access -
>>>>> For most cases it’s a single frame.
>>>>
>>>> Yes, the security use case typically stores hashes, variables
>>>> (bootcount, upgrade_available, versions, that sort of thing) and
>>>> certificates in RPMB.
>>>>
>>>> Since you mentioned, I am seeing that tuning before switching to RPMB
>>>> has an impact on performance. As a practical test, just reading a 6 byte
>>>> variable incurs in 50ms penalty in kernel space due to the need to
>>>> retune 5 times. Not great since the request is coming from a Trusted
>>>> Application via OP-TEE through the supplicant meaning this TEE thread
>>>> (they are statically allocated CFG_NUM_THREADS) will be reserved for
>>>> quite a bit of time.
>>>>
>>>> Roughly:
>>>> TA --> OP-TEE (core) --> TEE-supplicant --> Kernel (>50ms) --> OP-TEE --> TA
>>>
>>> To add more detail to the timing above, when using RPMB, OP-TEE stores
>>> the secure filesystem on RPMB as well, so accessing one of the variables
>>> stored in the filesystem consists on a number (~5) of individual RPMB
>>> requests (each one forcing a retune, each retune taking around 10ms).
>>>
>>> BTW, I also tried delaying the timing between those consecutive retunes
>>> (up to 1 second), but the issue still persisted.
>>>
>>>>
>>>> Adrian, I couldn't find the original performance justification for
>>>> enabling this feature globally. At which point do you think it becomes
>>>> beneficial to retune before accessing RPMB?
>>>
>>> How should we proceed with this patch then? can it be merged as I
>>> proposed? should I rewrite it differently? not sure what is next
>>
>> It would be good to try to determine if the error happens when the
>> switch command comes immediately after tuning. For example, add
>> a delay after tuning and see if that makes any difference. e.g.
>>
>> diff --git a/drivers/mmc/host/sdhci.c b/drivers/mmc/host/sdhci.c
>> index c79f73459915..6b168659282a 100644
>> --- a/drivers/mmc/host/sdhci.c
>> +++ b/drivers/mmc/host/sdhci.c
>> @@ -2867,8 +2867,10 @@ int __sdhci_execute_tuning(struct sdhci_host *host, u32 opcode)
>>
>> ctrl = sdhci_readw(host, SDHCI_HOST_CONTROL2);
>> if (!(ctrl & SDHCI_CTRL_EXEC_TUNING)) {
>> - if (ctrl & SDHCI_CTRL_TUNED_CLK)
>> + if (ctrl & SDHCI_CTRL_TUNED_CLK) {
>> + msleep(10);
>> return 0; /* Success! */
>> + }
>> break;
>> }
>>
>>
>
> Thanks Adrian.
>
> The issue sill triggers (in this case on the last but one access to
> retrieve the 6 byte variable).
>
> --- a/drivers/mmc/host/sdhci.c
> +++ b/drivers/mmc/host/sdhci.c
> @@ -2781,6 +2781,7 @@ static int __sdhci_execute_tuning(struct sdhci_host *host, u32 opcode)
> * Issue opcode repeatedly till Execute Tuning is set to 0 or the number
> * of loops reaches tuning loop count.
> */
> + printk(KERN_ERR "tune starts[loop %d]\n", host->tuning_loop_count);
> for (i = 0; i < host->tuning_loop_count; i++) {
> u16 ctrl;
>
> @@ -2799,8 +2800,11 @@ static int __sdhci_execute_tuning(struct sdhci_host *host, u32 opcode)
>
> ctrl = sdhci_readw(host, SDHCI_HOST_CONTROL2);
> if (!(ctrl & SDHCI_CTRL_EXEC_TUNING)) {
> - if (ctrl & SDHCI_CTRL_TUNED_CLK)
> - return 0; /* Success! */
> + if (ctrl & SDHCI_CTRL_TUNED_CLK) {
> + printk(KERN_ERR "tune success[%d]\n", i);
> + msleep(10);
> + return 0; /* Success! */
> + }
> break;
> }
>
>
> [ 164.919090] tune starts[loop 40]
> [ 164.923152] tune success[39]
> [ 164.952106] tune starts[loop 40]
> [ 164.956611] tune success[39]
> [ 165.084402] tune starts[loop 40]
> [ 165.089350] tune success[39]
> [ 165.116491] tune starts[loop 40]
> [ 165.120806] tune success[39]
> [ 165.145016] sdhci-arasan ff160000.mmc: __mmc_blk_ioctl_cmd: data error -84
> E/TC:? 0
> E/TC:? 0 TA panicked with code 0xffff0000
> E/LD: Status of TA 22250a54-0bf1-48fe-8002-7b20f1c9c9b1
> E/LD: arch: aarch64
> E/LD: region 0: va 0xc0004000 pa 0x7e200000 size 0x002000 flags rw-s (ldelf)
> E/LD: region 1: va 0xc0006000 pa 0x7e202000 size 0x008000 flags r-xs (ldelf)
> E/LD: region 2: va 0xc000e000 pa 0x7e20a000 size 0x001000 flags rw-s (ldelf)
> E/LD: region 3: va 0xc000f000 pa 0x7e20b000 size 0x004000 flags rw-s (ldelf)
> E/LD: region 4: va 0xc0013000 pa 0x7e20f000 size 0x001000 flags r--s
> E/LD: region 5: va 0xc0014000 pa 0x7e22c000 size 0x005000 flags rw-s (stack)
> E/LD: region 6: va 0xc0019000 pa 0x818ac38e8 size 0x001000 flags rw-- (param)
> E/LD: region 7: va 0xc001a000 pa 0x818ac36e8 size 0x001000 flags rw-- (param)
> E/LD: region 8: va 0xc0022000 pa 0x00001000 size 0x014000 flags r-xs [0]
> E/LD: region 9: va 0xc0036000 pa 0x00015000 size 0x008000 flags rw-s [0]
> E/LD: [0] 22250a54-0bf1-48fe-8002-7b20f1c9c9b1 @ 0xc0022000
> E/LD: Call stack:
> E/LD: 0xc0024e58
> E/LD: 0xc0022388
> E/LD: 0xc0025d40
> E/LD: 0xc0022624
> [ 166.119598] tune starts[loop 40]
> [ 166.125700] tune success[39]
It might make a difference if there is another command before the
switch command, e.g. add a CMD13 (could keep the tune print to tell
that the CMD13 comes after re-tuning)
diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
index 892e74e611a0..3265049f4132 100644
--- a/drivers/mmc/core/block.c
+++ b/drivers/mmc/core/block.c
@@ -875,6 +875,7 @@ static int mmc_blk_part_switch_pre(struct mmc_card *card,
unsigned int part_type)
{
int ret = 0;
+ u32 status;
if (part_type == EXT_CSD_PART_CONFIG_ACC_RPMB) {
if (card->ext_csd.cmdq_en) {
@@ -883,6 +884,9 @@ static int mmc_blk_part_switch_pre(struct mmc_card *card,
return ret;
}
mmc_retune_pause(card->host);
+ ret = mmc_send_status(card, &status);
+ pr_info("%s: %s: status %#x, ret %d\n", mmc_hostname(card->host), __func__, status, ret);
+ ret = 0;
}
return ret;
And another thing to try is SDHCI_QUIRK2_TUNING_WORK_AROUND
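
For a quick test that could be set unconditionally from the arasan probe
path, e.g. (placement is only illustrative):

	/* somewhere in sdhci_arasan_probe(), once the sdhci host exists */
	host->quirks2 |= SDHCI_QUIRK2_TUNING_WORK_AROUND;
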
On 11/12/23 13:32:57, Adrian Hunter wrote:
> On 11/12/23 13:06, Jorge Ramirez-Ortiz, Foundries wrote:
> > On 11/12/23 12:25:19, Adrian Hunter wrote:
> >> On 11/12/23 10:00, Jorge Ramirez-Ortiz, Foundries wrote:
> >>> On 06/12/23 11:00:47, Jorge Ramirez-Ortiz, Foundries wrote:
> >>>> On 06/12/23 07:02:43, Avri Altman wrote:
> >>>>>>
> >>>>>> On 4/12/23 17:01, Jorge Ramirez-Ortiz wrote:
> >>>>>>> Requesting a retune before switching to the RPMB partition has been
> >>>>>>> observed to cause CRC errors on the RPMB reads (-EILSEQ).
> >>>>>>
> >>>>>> There are still 2 concerns:
> >>>>>> 1) We don't really know the root cause. Have you determined if here are
> >>>>>> CRC errors in the main partition also?
> >>>>
> >>>> right, and I don't disagree with that.
> >>>>
> >>>> As a test I created a 4GB file from /dev/random which I then copied
> >>>> several times (dd if= ....)
> >>>>
> >>>> root@uz3cg-dwg-sec:/sys/kernel/debug/mmc0# cat err_stats
> >>>> # Command Timeout Occurred: 0
> >>>> # Command CRC Errors Occurred: 0
> >>>> # Data Timeout Occurred: 0
> >>>> # Data CRC Errors Occurred: 0
> >>>> # Auto-Cmd Error Occurred: 0
> >>>> # ADMA Error Occurred: 0
> >>>> # Tuning Error Occurred: 0
> >>>> # CMDQ RED Errors: 0
> >>>> # CMDQ GCE Errors: 0
> >>>> # CMDQ ICCE Errors: 0
> >>>> # Request Timedout: 0
> >>>> # CMDQ Request Timedout: 0
> >>>> # ICE Config Errors: 0
> >>>> # Controller Timedout errors: 0
> >>>> # Unexpected IRQ errors: 0
> >>>>
> >>>> However as soon as I access RPMB and fails (it takes just a few tries) I see:
> >>>>
> >>>> I/TC: RPMB: Using generated key
> >>>> [ 86.902118] sdhci-arasan ff160000.mmc: __mmc_blk_ioctl_cmd: data error -84
> >>>> E/TC:? 0
> >>>> E/TC:? 0 TA panicked with code 0xffff0000
> >>>> E/LD: Status of TA 22250a54-0bf1-48fe-8002-7b20f1c9c9b1
> >>>> E/LD: arch: aarch64
> >>>> E/LD: region 0: va 0xc0004000 pa 0x7e200000 size 0x002000 flags rw-s (ldelf)
> >>>> E/LD: region 1: va 0xc0006000 pa 0x7e202000 size 0x008000 flags r-xs (ldelf)
> >>>> E/LD: region 2: va 0xc000e000 pa 0x7e20a000 size 0x001000 flags rw-s (ldelf)
> >>>> E/LD: region 3: va 0xc000f000 pa 0x7e20b000 size 0x004000 flags rw-s (ldelf)
> >>>> E/LD: region 4: va 0xc0013000 pa 0x7e20f000 size 0x001000 flags r--s
> >>>> E/LD: region 5: va 0xc0014000 pa 0x7e22c000 size 0x005000 flags rw-s (stack)
> >>>> E/LD: region 6: va 0xc0019000 pa 0x818ea9ba8 size 0x002000 flags rw-- (param)
> >>>> E/LD: region 7: va 0xc001b000 pa 0x818e97ba8 size 0x001000 flags rw-- (param)
> >>>> E/LD: region 8: va 0xc004f000 pa 0x00001000 size 0x014000 flags r-xs [0]
> >>>> E/LD: region 9: va 0xc0063000 pa 0x00015000 size 0x008000 flags rw-s [0]
> >>>> E/LD: [0] 22250a54-0bf1-48fe-8002-7b20f1c9c9b1 @ 0xc004f000
> >>>> E/LD: Call stack:
> >>>> E/LD: 0xc0051a14
> >>>> E/LD: 0xc004f31c
> >>>> E/LD: 0xc0052d40
> >>>> E/LD: 0xc004f624
> >>>>
> >>>> root@uz3cg-dwg-sec:/var/rootdirs/home/fio# cat /sys/kernel/debug/mmc0/err_stats
> >>>> # Command Timeout Occurred: 0
> >>>> # Command CRC Errors Occurred: 0
> >>>> # Data Timeout Occurred: 0
> >>>> # Data CRC Errors Occurred: 1
> >>>> # Auto-Cmd Error Occurred: 0
> >>>> # ADMA Error Occurred: 0
> >>>> # Tuning Error Occurred: 0
> >>>> # CMDQ RED Errors: 0
> >>>> # CMDQ GCE Errors: 0
> >>>> # CMDQ ICCE Errors: 0
> >>>> # Request Timedout: 0
> >>>> # CMDQ Request Timedout: 0
> >>>> # ICE Config Errors: 0
> >>>> # Controller Timedout errors: 0
> >>>> # Unexpected IRQ errors: 0
> >>>>
> >>>>>> 2) Forcing this on everyone
> >>>>>>
> >>>>>> The original idea was that because re-tuning cannot be done in RPMB, the
> >>>>>> need to re-rune in RPMB could be avoided by always re-tuning before
> >>>>>> switching to RPMB and then switching straight back. IIRC re-tuning should
> >>>>>> guarantee at least 4MB more I/O without issue.
> >>>>> Performance is hardly an issue in the context of RPMB access -
> >>>>> For most cases it’s a single frame.
> >>>>
> >>>> Yes, the security use case typically stores hashes, variables
> >>>> (bootcount, upgrade_available, versions, that sort of thing) and
> >>>> certificates in RPMB.
> >>>>
> >>>> Since you mentioned, I am seeing that tuning before switching to RPMB
> >>>> has an impact on performance. As a practical test, just reading a 6 byte
> >>>> variable incurs in 50ms penalty in kernel space due to the need to
> >>>> retune 5 times. Not great since the request is coming from a Trusted
> >>>> Application via OP-TEE through the supplicant meaning this TEE thread
> >>>> (they are statically allocated CFG_NUM_THREADS) will be reserved for
> >>>> quite a bit of time.
> >>>>
> >>>> Roughly:
> >>>> TA --> OP-TEE (core) --> TEE-supplicant --> Kernel (>50ms) --> OP-TEE --> TA
> >>>
> >>> To add more detail to the timing above, when using RPMB, OP-TEE stores
> >>> the secure filesystem on RPMB as well, so accessing one of the variables
> >>> stored in the filesystem consists on a number (~5) of individual RPMB
> >>> requests (each one forcing a retune, each retune taking around 10ms).
> >>>
> >>> BTW, I also tried delaying the timing between those consecutive retunes
> >>> (up to 1 second), but the issue still persisted.
> >>>
> >>>>
> >>>> Adrian, I couldn't find the original performance justification for
> >>>> enabling this feature globally. At which point do you think it becomes
> >>>> beneficial to retune before accessing RPMB?
> >>>
> >>> How should we proceed with this patch then? can it be merged as I
> >>> proposed? should I rewrite it differently? not sure what is next
> >>
> >> It would be good to try to determine if the error happens when the
> >> switch command comes immediately after tuning. For example, add
> >> a delay after tuning and see if that makes any difference. e.g.
> >>
> >> diff --git a/drivers/mmc/host/sdhci.c b/drivers/mmc/host/sdhci.c
> >> index c79f73459915..6b168659282a 100644
> >> --- a/drivers/mmc/host/sdhci.c
> >> +++ b/drivers/mmc/host/sdhci.c
> >> @@ -2867,8 +2867,10 @@ int __sdhci_execute_tuning(struct sdhci_host *host, u32 opcode)
> >>
> >> ctrl = sdhci_readw(host, SDHCI_HOST_CONTROL2);
> >> if (!(ctrl & SDHCI_CTRL_EXEC_TUNING)) {
> >> - if (ctrl & SDHCI_CTRL_TUNED_CLK)
> >> + if (ctrl & SDHCI_CTRL_TUNED_CLK) {
> >> + msleep(10);
> >> return 0; /* Success! */
> >> + }
> >> break;
> >> }
> >>
> >>
> >
> > Thanks Adrian.
> >
> > The issue sill triggers (in this case on the last but one access to
> > retrieve the 6 byte variable).
> >
> > --- a/drivers/mmc/host/sdhci.c
> > +++ b/drivers/mmc/host/sdhci.c
> > @@ -2781,6 +2781,7 @@ static int __sdhci_execute_tuning(struct sdhci_host *host, u32 opcode)
> > * Issue opcode repeatedly till Execute Tuning is set to 0 or the number
> > * of loops reaches tuning loop count.
> > */
> > + printk(KERN_ERR "tune starts[loop %d]\n", host->tuning_loop_count);
> > for (i = 0; i < host->tuning_loop_count; i++) {
> > u16 ctrl;
> >
> > @@ -2799,8 +2800,11 @@ static int __sdhci_execute_tuning(struct sdhci_host *host, u32 opcode)
> >
> > ctrl = sdhci_readw(host, SDHCI_HOST_CONTROL2);
> > if (!(ctrl & SDHCI_CTRL_EXEC_TUNING)) {
> > - if (ctrl & SDHCI_CTRL_TUNED_CLK)
> > - return 0; /* Success! */
> > + if (ctrl & SDHCI_CTRL_TUNED_CLK) {
> > + printk(KERN_ERR "tune success[%d]\n", i);
> > + msleep(10);
> > + return 0; /* Success! */
> > + }
> > break;
> > }
> >
> >
> > [ 164.919090] tune starts[loop 40]
> > [ 164.923152] tune success[39]
> > [ 164.952106] tune starts[loop 40]
> > [ 164.956611] tune success[39]
> > [ 165.084402] tune starts[loop 40]
> > [ 165.089350] tune success[39]
> > [ 165.116491] tune starts[loop 40]
> > [ 165.120806] tune success[39]
> > [ 165.145016] sdhci-arasan ff160000.mmc: __mmc_blk_ioctl_cmd: data error -84
> > E/TC:? 0
> > E/TC:? 0 TA panicked with code 0xffff0000
> > E/LD: Status of TA 22250a54-0bf1-48fe-8002-7b20f1c9c9b1
> > E/LD: arch: aarch64
> > E/LD: region 0: va 0xc0004000 pa 0x7e200000 size 0x002000 flags rw-s (ldelf)
> > E/LD: region 1: va 0xc0006000 pa 0x7e202000 size 0x008000 flags r-xs (ldelf)
> > E/LD: region 2: va 0xc000e000 pa 0x7e20a000 size 0x001000 flags rw-s (ldelf)
> > E/LD: region 3: va 0xc000f000 pa 0x7e20b000 size 0x004000 flags rw-s (ldelf)
> > E/LD: region 4: va 0xc0013000 pa 0x7e20f000 size 0x001000 flags r--s
> > E/LD: region 5: va 0xc0014000 pa 0x7e22c000 size 0x005000 flags rw-s (stack)
> > E/LD: region 6: va 0xc0019000 pa 0x818ac38e8 size 0x001000 flags rw-- (param)
> > E/LD: region 7: va 0xc001a000 pa 0x818ac36e8 size 0x001000 flags rw-- (param)
> > E/LD: region 8: va 0xc0022000 pa 0x00001000 size 0x014000 flags r-xs [0]
> > E/LD: region 9: va 0xc0036000 pa 0x00015000 size 0x008000 flags rw-s [0]
> > E/LD: [0] 22250a54-0bf1-48fe-8002-7b20f1c9c9b1 @ 0xc0022000
> > E/LD: Call stack:
> > E/LD: 0xc0024e58
> > E/LD: 0xc0022388
> > E/LD: 0xc0025d40
> > E/LD: 0xc0022624
> > [ 166.119598] tune starts[loop 40]
> > [ 166.125700] tune success[39]
>
> It might make a difference if there is another command before the
> switch command, e.g. add a CMD13 (could keep the tune print to tell
> that the CMD13 comes after re-tuning)
>
> diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
> index 892e74e611a0..3265049f4132 100644
> --- a/drivers/mmc/core/block.c
> +++ b/drivers/mmc/core/block.c
> @@ -875,6 +875,7 @@ static int mmc_blk_part_switch_pre(struct mmc_card *card,
> unsigned int part_type)
> {
> int ret = 0;
> + u32 status;
>
> if (part_type == EXT_CSD_PART_CONFIG_ACC_RPMB) {
> if (card->ext_csd.cmdq_en) {
> @@ -883,6 +884,9 @@ static int mmc_blk_part_switch_pre(struct mmc_card *card,
> return ret;
> }
> mmc_retune_pause(card->host);
> + ret = mmc_send_status(card, &status);
> + pr_info("%s: %s: status %#x, ret %d\n", mmc_hostname(card->host), __func__, status, ret);
> + ret = 0;
> }
>
> return ret;
>
>
>
> And another thing to try is SDHCI_QUIRK2_TUNING_WORK_AROUND
>
I ran it in a loop with all the proposed changes, and after 6 or 7 seconds the error triggered:
$ while true; do if ! read_rpmb variable ;then exit 0; fi; done
[....]
[ 146.686482] ldts: mmc0: mmc_blk_part_switch_pre: status 0x900, ret 0
[ 146.699092] ldts: tune starts[loop 40]
[ 146.704032] ldts: tune success[39]
[ 146.726442] ldts: mmc0: mmc_blk_part_switch_pre: status 0x900, ret 0
[ 146.745891] ldts: tune starts[loop 40]
[ 146.750858] ldts: tune success[39]
[ 146.774506] ldts: mmc0: mmc_blk_part_switch_pre: status 0x900, ret 0
[ 146.787191] ldts: tune starts[loop 40]
[ 146.792087] ldts: tune success[39]
[ 146.814544] ldts: mmc0: mmc_blk_part_switch_pre: status 0x900, ret 0
[ 146.826383] ldts: tune starts[loop 40]
[ 146.830942] ldts: tune success[39]
[ 146.854500] ldts: mmc0: mmc_blk_part_switch_pre: status 0x900, ret 0
[ 146.863421] sdhci-arasan ff160000.mmc: __mmc_blk_ioctl_cmd: data error -84
E/TC:? 0
E/TC:? 0 TA panicked with code 0xffff0000
E/LD: Status of TA 22250a54-0bf1-48fe-8002-7b20f1c9c9b1
E/LD: arch: aarch64
E/LD: region 0: va 0xc0004000 pa 0x7e200000 size 0x002000 flags rw-s (ldelf)
E/LD: region 1: va 0xc0006000 pa 0x7e202000 size 0x008000 flags r-xs (ldelf)
E/LD: region 2: va 0xc000e000 pa 0x7e20a000 size 0x001000 flags rw-s (ldelf)
E/LD: region 3: va 0xc000f000 pa 0x7e20b000 size 0x004000 flags rw-s (ldelf)
E/LD: region 4: va 0xc0013000 pa 0x7e20f000 size 0x001000 flags r--s
E/LD: region 5: va 0xc0014000 pa 0x7e22c000 size 0x005000 flags rw-s (stack)
E/LD: region 6: va 0xc0019000 pa 0x818c17b48 size 0x001000 flags rw-- (param)
E/LD: region 7: va 0xc001a000 pa 0x818c17d48 size 0x001000 flags rw-- (param)
E/LD: region 8: va 0xc003c000 pa 0x00001000 size 0x014000 flags r-xs [0]
E/LD: region 9: va 0xc0050000 pa 0x00015000 size 0x008000 flags rw-s [0]
E/LD: [0] 22250a54-0bf1-48fe-8002-7b20f1c9c9b1 @ 0xc003c000
E/LD: Call stack:
E/LD: 0xc003ea14
E/LD: 0xc003c31c
E/LD: 0xc003fd40
On 4/12/23 19:22, Jorge Ramirez-Ortiz wrote:
> Requesting a retune before switching to the RPMB partition has been
> observed to cause CRC errors on the RPMB reads (-EILSEQ).
>
> Since RPMB reads can not be retried, the clients would be directly
> affected by the errors.
>
> This commit disables the request prior to RPMB switching while allowing
> the pause interface to still request a retune before the pause for other
> use cases.
>
> This was verified with the sdhci-of-arasan driver (ZynqMP) configured
> for HS200 using two separate eMMC cards (DG4064 and 064GB2). In both
> cases, the error was easy to reproduce triggering every few tenths of
> reads.
>
> Signed-off-by: Jorge Ramirez-Ortiz <[email protected]>
> Acked-by: Avri Altman <[email protected]>
>
> ---
> drivers/mmc/core/block.c | 2 +-
> drivers/mmc/core/host.c | 7 ++++---
> drivers/mmc/core/host.h | 2 +-
> 3 files changed, 6 insertions(+), 5 deletions(-)
>
> diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
> index f9a5cffa64b1..1d69078ad9b2 100644
> --- a/drivers/mmc/core/block.c
> +++ b/drivers/mmc/core/block.c
> @@ -859,7 +859,7 @@ static int mmc_blk_part_switch_pre(struct mmc_card *card,
> if (ret)
> return ret;
> }
> - mmc_retune_pause(card->host);
> + mmc_retune_pause(card->host, false);
> }
>
> return ret;
> diff --git a/drivers/mmc/core/host.c b/drivers/mmc/core/host.c
> index 096093f7be00..a9b95aaa2235 100644
> --- a/drivers/mmc/core/host.c
> +++ b/drivers/mmc/core/host.c
> @@ -119,13 +119,14 @@ void mmc_retune_enable(struct mmc_host *host)
>
> /*
> * Pause re-tuning for a small set of operations. The pause begins after the
> - * next command and after first doing re-tuning.
> + * next command and, if retune is set, after first doing re-tuning.
> */
> -void mmc_retune_pause(struct mmc_host *host)
> +void mmc_retune_pause(struct mmc_host *host, bool retune)
> {
> if (!host->retune_paused) {
> host->retune_paused = 1;
> - mmc_retune_needed(host);
> + if (retune)
> + mmc_retune_needed(host);
Better to just drop the mmc_retune_needed(host) call entirely; a minimal sketch of what I mean follows the quoted patch below.
> mmc_retune_hold(host);
> }
> }
> diff --git a/drivers/mmc/core/host.h b/drivers/mmc/core/host.h
> index 48c4952512a5..321776b52270 100644
> --- a/drivers/mmc/core/host.h
> +++ b/drivers/mmc/core/host.h
> @@ -18,7 +18,7 @@ void mmc_retune_disable(struct mmc_host *host);
> void mmc_retune_hold(struct mmc_host *host);
> void mmc_retune_release(struct mmc_host *host);
> int mmc_retune(struct mmc_host *host);
> -void mmc_retune_pause(struct mmc_host *host);
> +void mmc_retune_pause(struct mmc_host *host, bool retune);
> void mmc_retune_unpause(struct mmc_host *host);
>
> static inline void mmc_retune_clear(struct mmc_host *host)
> --
> 2.34.1
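To illustrate - a minimal sketch only, not a tested patch - mmc_retune_pause()
would keep its current signature and simply stop forcing a re-tune:

void mmc_retune_pause(struct mmc_host *host)
{
        if (!host->retune_paused) {
                host->retune_paused = 1;
                /*
                 * No mmc_retune_needed() here: only hold off re-tuning for
                 * the duration of the pause, do not request one up front.
                 */
                mmc_retune_hold(host);
        }
}

The caller in mmc_blk_part_switch_pre() then stays as plain
mmc_retune_pause(card->host).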
On 11/12/23 17:05, Jorge Ramirez-Ortiz, Foundries wrote:
> On 11/12/23 13:32:57, Adrian Hunter wrote:
>> On 11/12/23 13:06, Jorge Ramirez-Ortiz, Foundries wrote:
>>> On 11/12/23 12:25:19, Adrian Hunter wrote:
>>>> On 11/12/23 10:00, Jorge Ramirez-Ortiz, Foundries wrote:
>>>>> On 06/12/23 11:00:47, Jorge Ramirez-Ortiz, Foundries wrote:
>>>>>> On 06/12/23 07:02:43, Avri Altman wrote:
>>>>>>>>
>>>>>>>> On 4/12/23 17:01, Jorge Ramirez-Ortiz wrote:
>>>>>>>>> Requesting a retune before switching to the RPMB partition has been
>>>>>>>>> observed to cause CRC errors on the RPMB reads (-EILSEQ).
>>>>>>>>
>>>>>>>> There are still 2 concerns:
>>>>>>>> 1) We don't really know the root cause. Have you determined if here are
>>>>>>>> CRC errors in the main partition also?
>>>>>>
>>>>>> right, and I don't disagree with that.
>>>>>>
>>>>>> As a test I created a 4GB file from /dev/random which I then copied
>>>>>> several times (dd if= ....)
>>>>>>
>>>>>> root@uz3cg-dwg-sec:/sys/kernel/debug/mmc0# cat err_stats
>>>>>> # Command Timeout Occurred: 0
>>>>>> # Command CRC Errors Occurred: 0
>>>>>> # Data Timeout Occurred: 0
>>>>>> # Data CRC Errors Occurred: 0
>>>>>> # Auto-Cmd Error Occurred: 0
>>>>>> # ADMA Error Occurred: 0
>>>>>> # Tuning Error Occurred: 0
>>>>>> # CMDQ RED Errors: 0
>>>>>> # CMDQ GCE Errors: 0
>>>>>> # CMDQ ICCE Errors: 0
>>>>>> # Request Timedout: 0
>>>>>> # CMDQ Request Timedout: 0
>>>>>> # ICE Config Errors: 0
>>>>>> # Controller Timedout errors: 0
>>>>>> # Unexpected IRQ errors: 0
>>>>>>
>>>>>> However as soon as I access RPMB and fails (it takes just a few tries) I see:
>>>>>>
>>>>>> I/TC: RPMB: Using generated key
>>>>>> [ 86.902118] sdhci-arasan ff160000.mmc: __mmc_blk_ioctl_cmd: data error -84
>>>>>> E/TC:? 0
>>>>>> E/TC:? 0 TA panicked with code 0xffff0000
>>>>>> E/LD: Status of TA 22250a54-0bf1-48fe-8002-7b20f1c9c9b1
>>>>>> E/LD: arch: aarch64
>>>>>> E/LD: region 0: va 0xc0004000 pa 0x7e200000 size 0x002000 flags rw-s (ldelf)
>>>>>> E/LD: region 1: va 0xc0006000 pa 0x7e202000 size 0x008000 flags r-xs (ldelf)
>>>>>> E/LD: region 2: va 0xc000e000 pa 0x7e20a000 size 0x001000 flags rw-s (ldelf)
>>>>>> E/LD: region 3: va 0xc000f000 pa 0x7e20b000 size 0x004000 flags rw-s (ldelf)
>>>>>> E/LD: region 4: va 0xc0013000 pa 0x7e20f000 size 0x001000 flags r--s
>>>>>> E/LD: region 5: va 0xc0014000 pa 0x7e22c000 size 0x005000 flags rw-s (stack)
>>>>>> E/LD: region 6: va 0xc0019000 pa 0x818ea9ba8 size 0x002000 flags rw-- (param)
>>>>>> E/LD: region 7: va 0xc001b000 pa 0x818e97ba8 size 0x001000 flags rw-- (param)
>>>>>> E/LD: region 8: va 0xc004f000 pa 0x00001000 size 0x014000 flags r-xs [0]
>>>>>> E/LD: region 9: va 0xc0063000 pa 0x00015000 size 0x008000 flags rw-s [0]
>>>>>> E/LD: [0] 22250a54-0bf1-48fe-8002-7b20f1c9c9b1 @ 0xc004f000
>>>>>> E/LD: Call stack:
>>>>>> E/LD: 0xc0051a14
>>>>>> E/LD: 0xc004f31c
>>>>>> E/LD: 0xc0052d40
>>>>>> E/LD: 0xc004f624
>>>>>>
>>>>>> root@uz3cg-dwg-sec:/var/rootdirs/home/fio# cat /sys/kernel/debug/mmc0/err_stats
>>>>>> # Command Timeout Occurred: 0
>>>>>> # Command CRC Errors Occurred: 0
>>>>>> # Data Timeout Occurred: 0
>>>>>> # Data CRC Errors Occurred: 1
>>>>>> # Auto-Cmd Error Occurred: 0
>>>>>> # ADMA Error Occurred: 0
>>>>>> # Tuning Error Occurred: 0
>>>>>> # CMDQ RED Errors: 0
>>>>>> # CMDQ GCE Errors: 0
>>>>>> # CMDQ ICCE Errors: 0
>>>>>> # Request Timedout: 0
>>>>>> # CMDQ Request Timedout: 0
>>>>>> # ICE Config Errors: 0
>>>>>> # Controller Timedout errors: 0
>>>>>> # Unexpected IRQ errors: 0
>>>>>>
>>>>>>>> 2) Forcing this on everyone
>>>>>>>>
>>>>>>>> The original idea was that because re-tuning cannot be done in RPMB, the
>>>>>>>> need to re-rune in RPMB could be avoided by always re-tuning before
>>>>>>>> switching to RPMB and then switching straight back. IIRC re-tuning should
>>>>>>>> guarantee at least 4MB more I/O without issue.
>>>>>>> Performance is hardly an issue in the context of RPMB access -
>>>>>>> For most cases it’s a single frame.
>>>>>>
>>>>>> Yes, the security use case typically stores hashes, variables
>>>>>> (bootcount, upgrade_available, versions, that sort of thing) and
>>>>>> certificates in RPMB.
>>>>>>
>>>>>> Since you mentioned, I am seeing that tuning before switching to RPMB
>>>>>> has an impact on performance. As a practical test, just reading a 6 byte
>>>>>> variable incurs in 50ms penalty in kernel space due to the need to
>>>>>> retune 5 times. Not great since the request is coming from a Trusted
>>>>>> Application via OP-TEE through the supplicant meaning this TEE thread
>>>>>> (they are statically allocated CFG_NUM_THREADS) will be reserved for
>>>>>> quite a bit of time.
>>>>>>
>>>>>> Roughly:
>>>>>> TA --> OP-TEE (core) --> TEE-supplicant --> Kernel (>50ms) --> OP-TEE --> TA
>>>>>
>>>>> To add more detail to the timing above, when using RPMB, OP-TEE stores
>>>>> the secure filesystem on RPMB as well, so accessing one of the variables
>>>>> stored in the filesystem consists on a number (~5) of individual RPMB
>>>>> requests (each one forcing a retune, each retune taking around 10ms).
>>>>>
>>>>> BTW, I also tried delaying the timing between those consecutive retunes
>>>>> (up to 1 second), but the issue still persisted.
>>>>>
>>>>>>
>>>>>> Adrian, I couldn't find the original performance justification for
>>>>>> enabling this feature globally. At which point do you think it becomes
>>>>>> beneficial to retune before accessing RPMB?
>>>>>
>>>>> How should we proceed with this patch then? can it be merged as I
>>>>> proposed? should I rewrite it differently? not sure what is next
>>>>
>>>> It would be good to try to determine if the error happens when the
>>>> switch command comes immediately after tuning. For example, add
>>>> a delay after tuning and see if that makes any difference. e.g.
>>>>
>>>> diff --git a/drivers/mmc/host/sdhci.c b/drivers/mmc/host/sdhci.c
>>>> index c79f73459915..6b168659282a 100644
>>>> --- a/drivers/mmc/host/sdhci.c
>>>> +++ b/drivers/mmc/host/sdhci.c
>>>> @@ -2867,8 +2867,10 @@ int __sdhci_execute_tuning(struct sdhci_host *host, u32 opcode)
>>>>
>>>> ctrl = sdhci_readw(host, SDHCI_HOST_CONTROL2);
>>>> if (!(ctrl & SDHCI_CTRL_EXEC_TUNING)) {
>>>> - if (ctrl & SDHCI_CTRL_TUNED_CLK)
>>>> + if (ctrl & SDHCI_CTRL_TUNED_CLK) {
>>>> + msleep(10);
>>>> return 0; /* Success! */
>>>> + }
>>>> break;
>>>> }
>>>>
>>>>
>>>
>>> Thanks Adrian.
>>>
>>> The issue sill triggers (in this case on the last but one access to
>>> retrieve the 6 byte variable).
>>>
>>> --- a/drivers/mmc/host/sdhci.c
>>> +++ b/drivers/mmc/host/sdhci.c
>>> @@ -2781,6 +2781,7 @@ static int __sdhci_execute_tuning(struct sdhci_host *host, u32 opcode)
>>> * Issue opcode repeatedly till Execute Tuning is set to 0 or the number
>>> * of loops reaches tuning loop count.
>>> */
>>> + printk(KERN_ERR "tune starts[loop %d]\n", host->tuning_loop_count);
>>> for (i = 0; i < host->tuning_loop_count; i++) {
>>> u16 ctrl;
>>>
>>> @@ -2799,8 +2800,11 @@ static int __sdhci_execute_tuning(struct sdhci_host *host, u32 opcode)
>>>
>>> ctrl = sdhci_readw(host, SDHCI_HOST_CONTROL2);
>>> if (!(ctrl & SDHCI_CTRL_EXEC_TUNING)) {
>>> - if (ctrl & SDHCI_CTRL_TUNED_CLK)
>>> - return 0; /* Success! */
>>> + if (ctrl & SDHCI_CTRL_TUNED_CLK) {
>>> + printk(KERN_ERR "tune success[%d]\n", i);
>>> + msleep(10);
>>> + return 0; /* Success! */
>>> + }
>>> break;
>>> }
>>>
>>>
>>> [ 164.919090] tune starts[loop 40]
>>> [ 164.923152] tune success[39]
>>> [ 164.952106] tune starts[loop 40]
>>> [ 164.956611] tune success[39]
>>> [ 165.084402] tune starts[loop 40]
>>> [ 165.089350] tune success[39]
>>> [ 165.116491] tune starts[loop 40]
>>> [ 165.120806] tune success[39]
>>> [ 165.145016] sdhci-arasan ff160000.mmc: __mmc_blk_ioctl_cmd: data error -84
>>> E/TC:? 0
>>> E/TC:? 0 TA panicked with code 0xffff0000
>>> E/LD: Status of TA 22250a54-0bf1-48fe-8002-7b20f1c9c9b1
>>> E/LD: arch: aarch64
>>> E/LD: region 0: va 0xc0004000 pa 0x7e200000 size 0x002000 flags rw-s (ldelf)
>>> E/LD: region 1: va 0xc0006000 pa 0x7e202000 size 0x008000 flags r-xs (ldelf)
>>> E/LD: region 2: va 0xc000e000 pa 0x7e20a000 size 0x001000 flags rw-s (ldelf)
>>> E/LD: region 3: va 0xc000f000 pa 0x7e20b000 size 0x004000 flags rw-s (ldelf)
>>> E/LD: region 4: va 0xc0013000 pa 0x7e20f000 size 0x001000 flags r--s
>>> E/LD: region 5: va 0xc0014000 pa 0x7e22c000 size 0x005000 flags rw-s (stack)
>>> E/LD: region 6: va 0xc0019000 pa 0x818ac38e8 size 0x001000 flags rw-- (param)
>>> E/LD: region 7: va 0xc001a000 pa 0x818ac36e8 size 0x001000 flags rw-- (param)
>>> E/LD: region 8: va 0xc0022000 pa 0x00001000 size 0x014000 flags r-xs [0]
>>> E/LD: region 9: va 0xc0036000 pa 0x00015000 size 0x008000 flags rw-s [0]
>>> E/LD: [0] 22250a54-0bf1-48fe-8002-7b20f1c9c9b1 @ 0xc0022000
>>> E/LD: Call stack:
>>> E/LD: 0xc0024e58
>>> E/LD: 0xc0022388
>>> E/LD: 0xc0025d40
>>> E/LD: 0xc0022624
>>> [ 166.119598] tune starts[loop 40]
>>> [ 166.125700] tune success[39]
>>
>> It might make a difference if there is another command before the
>> switch command, e.g. add a CMD13 (could keep the tune print to tell
>> that the CMD13 comes after re-tuning)
>>
>> diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
>> index 892e74e611a0..3265049f4132 100644
>> --- a/drivers/mmc/core/block.c
>> +++ b/drivers/mmc/core/block.c
>> @@ -875,6 +875,7 @@ static int mmc_blk_part_switch_pre(struct mmc_card *card,
>> unsigned int part_type)
>> {
>> int ret = 0;
>> + u32 status;
>>
>> if (part_type == EXT_CSD_PART_CONFIG_ACC_RPMB) {
>> if (card->ext_csd.cmdq_en) {
>> @@ -883,6 +884,9 @@ static int mmc_blk_part_switch_pre(struct mmc_card *card,
>> return ret;
>> }
>> mmc_retune_pause(card->host);
>> + ret = mmc_send_status(card, &status);
>> + pr_info("%s: %s: status %#x, ret %d\n", mmc_hostname(card->host), __func__, status, ret);
>> + ret = 0;
>> }
>>
>> return ret;
>>
>>
>>
>> And another thing to try is SDHCI_QUIRK2_TUNING_WORK_AROUND
>>
>
> I run it in a loop with all the proposed changes and after 6 or 7 seconds the error triggered:
>
> $ while true; do if ! read_rpmb variable ;then exit 0; fi; done
> [....]
>
>
> [ 146.686482] ldts: mmc0: mmc_blk_part_switch_pre: status 0x900, ret 0
> [ 146.699092] ldts: tune starts[loop 40]
> [ 146.704032] ldts: tune success[39]
> [ 146.726442] ldts: mmc0: mmc_blk_part_switch_pre: status 0x900, ret 0
> [ 146.745891] ldts: tune starts[loop 40]
> [ 146.750858] ldts: tune success[39]
> [ 146.774506] ldts: mmc0: mmc_blk_part_switch_pre: status 0x900, ret 0
> [ 146.787191] ldts: tune starts[loop 40]
> [ 146.792087] ldts: tune success[39]
> [ 146.814544] ldts: mmc0: mmc_blk_part_switch_pre: status 0x900, ret 0
> [ 146.826383] ldts: tune starts[loop 40]
> [ 146.830942] ldts: tune success[39]
> [ 146.854500] ldts: mmc0: mmc_blk_part_switch_pre: status 0x900, ret 0
> [ 146.863421] sdhci-arasan ff160000.mmc: __mmc_blk_ioctl_cmd: data error -84
> E/TC:? 0
> E/TC:? 0 TA panicked with code 0xffff0000
> E/LD: Status of TA 22250a54-0bf1-48fe-8002-7b20f1c9c9b1
> E/LD: arch: aarch64
> E/LD: region 0: va 0xc0004000 pa 0x7e200000 size 0x002000 flags rw-s (ldelf)
> E/LD: region 1: va 0xc0006000 pa 0x7e202000 size 0x008000 flags r-xs (ldelf)
> E/LD: region 2: va 0xc000e000 pa 0x7e20a000 size 0x001000 flags rw-s (ldelf)
> E/LD: region 3: va 0xc000f000 pa 0x7e20b000 size 0x004000 flags rw-s (ldelf)
> E/LD: region 4: va 0xc0013000 pa 0x7e20f000 size 0x001000 flags r--s
> E/LD: region 5: va 0xc0014000 pa 0x7e22c000 size 0x005000 flags rw-s (stack)
> E/LD: region 6: va 0xc0019000 pa 0x818c17b48 size 0x001000 flags rw-- (param)
> E/LD: region 7: va 0xc001a000 pa 0x818c17d48 size 0x001000 flags rw-- (param)
> E/LD: region 8: va 0xc003c000 pa 0x00001000 size 0x014000 flags r-xs [0]
> E/LD: region 9: va 0xc0050000 pa 0x00015000 size 0x008000 flags rw-s [0]
> E/LD: [0] 22250a54-0bf1-48fe-8002-7b20f1c9c9b1 @ 0xc003c000
> E/LD: Call stack:
> E/LD: 0xc003ea14
> E/LD: 0xc003c31c
> E/LD: 0xc003fd40
I sent a patch to add a re-tuning test to mmc_test. It would be
interesting to try that too.
https://lore.kernel.org/linux-mmc/[email protected]/T/#u
On 11/12/23 09:00:06, Jorge Ramirez-Ortiz, Foundries wrote:
> On 06/12/23 11:00:47, Jorge Ramirez-Ortiz, Foundries wrote:
> > On 06/12/23 07:02:43, Avri Altman wrote:
> > > >
> > > > On 4/12/23 17:01, Jorge Ramirez-Ortiz wrote:
> > > > > Requesting a retune before switching to the RPMB partition has been
> > > > > observed to cause CRC errors on the RPMB reads (-EILSEQ).
> > > >
> > > > There are still 2 concerns:
> > > > 1) We don't really know the root cause. Have you determined if here are
> > > > CRC errors in the main partition also?
> >
> > right, and I don't disagree with that.
> >
> > As a test I created a 4GB file from /dev/random which I then copied
> > several times (dd if= ....)
> >
> > root@uz3cg-dwg-sec:/sys/kernel/debug/mmc0# cat err_stats
> > # Command Timeout Occurred: 0
> > # Command CRC Errors Occurred: 0
> > # Data Timeout Occurred: 0
> > # Data CRC Errors Occurred: 0
> > # Auto-Cmd Error Occurred: 0
> > # ADMA Error Occurred: 0
> > # Tuning Error Occurred: 0
> > # CMDQ RED Errors: 0
> > # CMDQ GCE Errors: 0
> > # CMDQ ICCE Errors: 0
> > # Request Timedout: 0
> > # CMDQ Request Timedout: 0
> > # ICE Config Errors: 0
> > # Controller Timedout errors: 0
> > # Unexpected IRQ errors: 0
> >
> > However as soon as I access RPMB and fails (it takes just a few tries) I see:
> >
> > I/TC: RPMB: Using generated key
> > [ 86.902118] sdhci-arasan ff160000.mmc: __mmc_blk_ioctl_cmd: data error -84
> > E/TC:? 0
> > E/TC:? 0 TA panicked with code 0xffff0000
> > E/LD: Status of TA 22250a54-0bf1-48fe-8002-7b20f1c9c9b1
> > E/LD: arch: aarch64
> > E/LD: region 0: va 0xc0004000 pa 0x7e200000 size 0x002000 flags rw-s (ldelf)
> > E/LD: region 1: va 0xc0006000 pa 0x7e202000 size 0x008000 flags r-xs (ldelf)
> > E/LD: region 2: va 0xc000e000 pa 0x7e20a000 size 0x001000 flags rw-s (ldelf)
> > E/LD: region 3: va 0xc000f000 pa 0x7e20b000 size 0x004000 flags rw-s (ldelf)
> > E/LD: region 4: va 0xc0013000 pa 0x7e20f000 size 0x001000 flags r--s
> > E/LD: region 5: va 0xc0014000 pa 0x7e22c000 size 0x005000 flags rw-s (stack)
> > E/LD: region 6: va 0xc0019000 pa 0x818ea9ba8 size 0x002000 flags rw-- (param)
> > E/LD: region 7: va 0xc001b000 pa 0x818e97ba8 size 0x001000 flags rw-- (param)
> > E/LD: region 8: va 0xc004f000 pa 0x00001000 size 0x014000 flags r-xs [0]
> > E/LD: region 9: va 0xc0063000 pa 0x00015000 size 0x008000 flags rw-s [0]
> > E/LD: [0] 22250a54-0bf1-48fe-8002-7b20f1c9c9b1 @ 0xc004f000
> > E/LD: Call stack:
> > E/LD: 0xc0051a14
> > E/LD: 0xc004f31c
> > E/LD: 0xc0052d40
> > E/LD: 0xc004f624
> >
> > root@uz3cg-dwg-sec:/var/rootdirs/home/fio# cat /sys/kernel/debug/mmc0/err_stats
> > # Command Timeout Occurred: 0
> > # Command CRC Errors Occurred: 0
> > # Data Timeout Occurred: 0
> > # Data CRC Errors Occurred: 1
> > # Auto-Cmd Error Occurred: 0
> > # ADMA Error Occurred: 0
> > # Tuning Error Occurred: 0
> > # CMDQ RED Errors: 0
> > # CMDQ GCE Errors: 0
> > # CMDQ ICCE Errors: 0
> > # Request Timedout: 0
> > # CMDQ Request Timedout: 0
> > # ICE Config Errors: 0
> > # Controller Timedout errors: 0
> > # Unexpected IRQ errors: 0
> >
> > > > 2) Forcing this on everyone
> > > >
> > > > The original idea was that because re-tuning cannot be done in RPMB, the
> > > > need to re-rune in RPMB could be avoided by always re-tuning before
> > > > switching to RPMB and then switching straight back. IIRC re-tuning should
> > > > guarantee at least 4MB more I/O without issue.
> > > Performance is hardly an issue in the context of RPMB access -
> > > For most cases it’s a single frame.
> >
> > Yes, the security use case typically stores hashes, variables
> > (bootcount, upgrade_available, versions, that sort of thing) and
> > certificates in RPMB.
> >
> > Since you mentioned, I am seeing that tuning before switching to RPMB
> > has an impact on performance. As a practical test, just reading a 6 byte
> > variable incurs in 50ms penalty in kernel space due to the need to
> > retune 5 times. Not great since the request is coming from a Trusted
> > Application via OP-TEE through the supplicant meaning this TEE thread
> > (they are statically allocated CFG_NUM_THREADS) will be reserved for
> > quite a bit of time.
> >
> > Roughly:
> > TA --> OP-TEE (core) --> TEE-supplicant --> Kernel (>50ms) --> OP-TEE --> TA
>
> To add more detail to the timing above, when using RPMB, OP-TEE stores
> the secure filesystem on RPMB as well, so accessing one of the variables
> stored in the filesystem consists on a number (~5) of individual RPMB
> requests (each one forcing a retune, each retune taking around 10ms).
Adrian, could you please comment on the above?
The current code is a performance drag for systems that implement their
secure filesystem on RPMB (e.g. OP-TEE), causing each read operation (of
variables consisting of a few bytes stored in such a filesystem) to
perform 5 consecutive retune requests.
I am wondering whether the original use case that forces a call to
retune prior to processing the RPMB request is still valid.
Independently of the fact that not doing so fixes the problem I was
working on - and with the information I have - I don't think RPMB is
generally used to store larger files (maybe you have more information
about the average use case? Are you aware of systems using RPMB to store
binaries or images?)
I still have to execute the test you shared some weeks ago. It is a bit
of a pain to NFS boot this system; I will try to do it this week.
TIA
>
> BTW, I also tried delaying the timing between those consecutive retunes
> (up to 1 second), but the issue still persisted.
>
> >
> > Adrian, I couldn't find the original performance justification for
> > enabling this feature globally. At which point do you think it becomes
> > beneficial to retune before accessing RPMB?
>
> How should we proceed with this patch then? can it be merged as I
> proposed? should I rewrite it differently? not sure what is next
>
> TIA
> Jorge
On 2/01/24 12:41, Jorge Ramirez-Ortiz, Foundries wrote:
> On 11/12/23 09:00:06, Jorge Ramirez-Ortiz, Foundries wrote:
>> On 06/12/23 11:00:47, Jorge Ramirez-Ortiz, Foundries wrote:
>>> On 06/12/23 07:02:43, Avri Altman wrote:
>>>>>
>>>>> On 4/12/23 17:01, Jorge Ramirez-Ortiz wrote:
>>>>>> Requesting a retune before switching to the RPMB partition has been
>>>>>> observed to cause CRC errors on the RPMB reads (-EILSEQ).
>>>>>
>>>>> There are still 2 concerns:
>>>>> 1) We don't really know the root cause. Have you determined if here are
>>>>> CRC errors in the main partition also?
>>>
>>> right, and I don't disagree with that.
>>>
>>> As a test I created a 4GB file from /dev/random which I then copied
>>> several times (dd if= ....)
>>>
>>> root@uz3cg-dwg-sec:/sys/kernel/debug/mmc0# cat err_stats
>>> # Command Timeout Occurred: 0
>>> # Command CRC Errors Occurred: 0
>>> # Data Timeout Occurred: 0
>>> # Data CRC Errors Occurred: 0
>>> # Auto-Cmd Error Occurred: 0
>>> # ADMA Error Occurred: 0
>>> # Tuning Error Occurred: 0
>>> # CMDQ RED Errors: 0
>>> # CMDQ GCE Errors: 0
>>> # CMDQ ICCE Errors: 0
>>> # Request Timedout: 0
>>> # CMDQ Request Timedout: 0
>>> # ICE Config Errors: 0
>>> # Controller Timedout errors: 0
>>> # Unexpected IRQ errors: 0
>>>
>>> However as soon as I access RPMB and fails (it takes just a few tries) I see:
>>>
>>> I/TC: RPMB: Using generated key
>>> [ 86.902118] sdhci-arasan ff160000.mmc: __mmc_blk_ioctl_cmd: data error -84
>>> E/TC:? 0
>>> E/TC:? 0 TA panicked with code 0xffff0000
>>> E/LD: Status of TA 22250a54-0bf1-48fe-8002-7b20f1c9c9b1
>>> E/LD: arch: aarch64
>>> E/LD: region 0: va 0xc0004000 pa 0x7e200000 size 0x002000 flags rw-s (ldelf)
>>> E/LD: region 1: va 0xc0006000 pa 0x7e202000 size 0x008000 flags r-xs (ldelf)
>>> E/LD: region 2: va 0xc000e000 pa 0x7e20a000 size 0x001000 flags rw-s (ldelf)
>>> E/LD: region 3: va 0xc000f000 pa 0x7e20b000 size 0x004000 flags rw-s (ldelf)
>>> E/LD: region 4: va 0xc0013000 pa 0x7e20f000 size 0x001000 flags r--s
>>> E/LD: region 5: va 0xc0014000 pa 0x7e22c000 size 0x005000 flags rw-s (stack)
>>> E/LD: region 6: va 0xc0019000 pa 0x818ea9ba8 size 0x002000 flags rw-- (param)
>>> E/LD: region 7: va 0xc001b000 pa 0x818e97ba8 size 0x001000 flags rw-- (param)
>>> E/LD: region 8: va 0xc004f000 pa 0x00001000 size 0x014000 flags r-xs [0]
>>> E/LD: region 9: va 0xc0063000 pa 0x00015000 size 0x008000 flags rw-s [0]
>>> E/LD: [0] 22250a54-0bf1-48fe-8002-7b20f1c9c9b1 @ 0xc004f000
>>> E/LD: Call stack:
>>> E/LD: 0xc0051a14
>>> E/LD: 0xc004f31c
>>> E/LD: 0xc0052d40
>>> E/LD: 0xc004f624
>>>
>>> root@uz3cg-dwg-sec:/var/rootdirs/home/fio# cat /sys/kernel/debug/mmc0/err_stats
>>> # Command Timeout Occurred: 0
>>> # Command CRC Errors Occurred: 0
>>> # Data Timeout Occurred: 0
>>> # Data CRC Errors Occurred: 1
>>> # Auto-Cmd Error Occurred: 0
>>> # ADMA Error Occurred: 0
>>> # Tuning Error Occurred: 0
>>> # CMDQ RED Errors: 0
>>> # CMDQ GCE Errors: 0
>>> # CMDQ ICCE Errors: 0
>>> # Request Timedout: 0
>>> # CMDQ Request Timedout: 0
>>> # ICE Config Errors: 0
>>> # Controller Timedout errors: 0
>>> # Unexpected IRQ errors: 0
>>>
>>>>> 2) Forcing this on everyone
>>>>>
>>>>> The original idea was that because re-tuning cannot be done in RPMB, the
>>>>> need to re-rune in RPMB could be avoided by always re-tuning before
>>>>> switching to RPMB and then switching straight back. IIRC re-tuning should
>>>>> guarantee at least 4MB more I/O without issue.
>>>> Performance is hardly an issue in the context of RPMB access -
>>>> For most cases it’s a single frame.
>>>
>>> Yes, the security use case typically stores hashes, variables
>>> (bootcount, upgrade_available, versions, that sort of thing) and
>>> certificates in RPMB.
>>>
>>> Since you mentioned, I am seeing that tuning before switching to RPMB
>>> has an impact on performance. As a practical test, just reading a 6 byte
>>> variable incurs in 50ms penalty in kernel space due to the need to
>>> retune 5 times. Not great since the request is coming from a Trusted
>>> Application via OP-TEE through the supplicant meaning this TEE thread
>>> (they are statically allocated CFG_NUM_THREADS) will be reserved for
>>> quite a bit of time.
>>>
>>> Roughly:
>>> TA --> OP-TEE (core) --> TEE-supplicant --> Kernel (>50ms) --> OP-TEE --> TA
>>
>> To add more detail to the timing above, when using RPMB, OP-TEE stores
>> the secure filesystem on RPMB as well, so accessing one of the variables
>> stored in the filesystem consists on a number (~5) of individual RPMB
>> requests (each one forcing a retune, each retune taking around 10ms).
>
> Adrian, please could you comment on the above.
>
> The current code is a performance drag for systems that implement their
> secure filesystems on RPMB (i.e: OP-TEE) causing each read operation (of
> variables consisting of a few bytes stored in such a filesystem) to
> perform 5 consecutive retune requests.
>
> I am just thinking whether the original use case that forces a call to
> retune prior to processing the RPMB request remains valid.
I am not sure what you are asking.
For some transfer modes, re-tuning is expected to deal with sampling
point drift over time, mainly due to temperature changes. It is done
either periodically (tuning timer) or after a CRC error.
There is no reason to assume RPMB is immune from that.
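For reference - a rough sketch from memory, not copied from any particular
tree - both triggers end up setting the same flag, and the actual re-tune
then runs before the next request:

static inline void mmc_retune_needed(struct mmc_host *host)
{
        if (host->can_retune)
                host->need_retune = 1;
}

/* Periodic trigger: the tuning timer just flags the need to re-tune */
static void mmc_retune_timer(struct timer_list *t)
{
        struct mmc_host *host = from_timer(host, t, retune_timer);

        mmc_retune_needed(host);
}

The CRC-error trigger is mmc_request_done() calling mmc_retune_needed()
when a command or data error is -EILSEQ.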
Certainly re-tuning before switching to RPMB is not optimal for
performance, and we can leave that out, but a CRC error before
or during RPMB operations will *still* result in re-tuning
after switching back from RPMB.
In your case, re-tuning makes things worse, which is a bit of a
mystery. Running the new re-tuning test would tell us whether
it makes things worse in general, or only for RPMB.
>
> Independently of the fact that not doing so fixes the problem I was
> working on - and with the information I have - I dont think RPMB is
> generally used to store larger files (maybe you have more information
> about the average use case? are you aware of systems using RPMB to store
> binaries or images?)
>
> I still I have to execute the test you shared some weeks ago. Bit of a
> pain to NFS boot this system...will try to do it this week.
Other options are to boot with an initrd only, or after boot switch
to a RAM-based file system.
I was waiting for this, since it is good to try to get closer to a
root cause, but as you point out, the change is good for performance
also, so I will Ack it.
>
> TIA
>
>>
>> BTW, I also tried delaying the timing between those consecutive retunes
>> (up to 1 second), but the issue still persisted.
>>
>>>
>>> Adrian, I couldn't find the original performance justification for
>>> enabling this feature globally. At which point do you think it becomes
>>> beneficial to retune before accessing RPMB?
>>
>> How should we proceed with this patch then? can it be merged as I
>> proposed? should I rewrite it differently? not sure what is next
>>
>> TIA
>> Jorge
On 02/01/24 21:01:52, Adrian Hunter wrote:
> On 2/01/24 12:41, Jorge Ramirez-Ortiz, Foundries wrote:
> > On 11/12/23 09:00:06, Jorge Ramirez-Ortiz, Foundries wrote:
> >> On 06/12/23 11:00:47, Jorge Ramirez-Ortiz, Foundries wrote:
> >>> On 06/12/23 07:02:43, Avri Altman wrote:
> >>>>>
> >>>>> On 4/12/23 17:01, Jorge Ramirez-Ortiz wrote:
> >>>>>> Requesting a retune before switching to the RPMB partition has been
> >>>>>> observed to cause CRC errors on the RPMB reads (-EILSEQ).
> >>>>>
> >>>>> There are still 2 concerns:
> >>>>> 1) We don't really know the root cause. Have you determined if here are
> >>>>> CRC errors in the main partition also?
> >>>
> >>> right, and I don't disagree with that.
> >>>
> >>> As a test I created a 4GB file from /dev/random which I then copied
> >>> several times (dd if= ....)
> >>>
> >>> root@uz3cg-dwg-sec:/sys/kernel/debug/mmc0# cat err_stats
> >>> # Command Timeout Occurred: 0
> >>> # Command CRC Errors Occurred: 0
> >>> # Data Timeout Occurred: 0
> >>> # Data CRC Errors Occurred: 0
> >>> # Auto-Cmd Error Occurred: 0
> >>> # ADMA Error Occurred: 0
> >>> # Tuning Error Occurred: 0
> >>> # CMDQ RED Errors: 0
> >>> # CMDQ GCE Errors: 0
> >>> # CMDQ ICCE Errors: 0
> >>> # Request Timedout: 0
> >>> # CMDQ Request Timedout: 0
> >>> # ICE Config Errors: 0
> >>> # Controller Timedout errors: 0
> >>> # Unexpected IRQ errors: 0
> >>>
> >>> However as soon as I access RPMB and fails (it takes just a few tries) I see:
> >>>
> >>> I/TC: RPMB: Using generated key
> >>> [ 86.902118] sdhci-arasan ff160000.mmc: __mmc_blk_ioctl_cmd: data error -84
> >>> E/TC:? 0
> >>> E/TC:? 0 TA panicked with code 0xffff0000
> >>> E/LD: Status of TA 22250a54-0bf1-48fe-8002-7b20f1c9c9b1
> >>> E/LD: arch: aarch64
> >>> E/LD: region 0: va 0xc0004000 pa 0x7e200000 size 0x002000 flags rw-s (ldelf)
> >>> E/LD: region 1: va 0xc0006000 pa 0x7e202000 size 0x008000 flags r-xs (ldelf)
> >>> E/LD: region 2: va 0xc000e000 pa 0x7e20a000 size 0x001000 flags rw-s (ldelf)
> >>> E/LD: region 3: va 0xc000f000 pa 0x7e20b000 size 0x004000 flags rw-s (ldelf)
> >>> E/LD: region 4: va 0xc0013000 pa 0x7e20f000 size 0x001000 flags r--s
> >>> E/LD: region 5: va 0xc0014000 pa 0x7e22c000 size 0x005000 flags rw-s (stack)
> >>> E/LD: region 6: va 0xc0019000 pa 0x818ea9ba8 size 0x002000 flags rw-- (param)
> >>> E/LD: region 7: va 0xc001b000 pa 0x818e97ba8 size 0x001000 flags rw-- (param)
> >>> E/LD: region 8: va 0xc004f000 pa 0x00001000 size 0x014000 flags r-xs [0]
> >>> E/LD: region 9: va 0xc0063000 pa 0x00015000 size 0x008000 flags rw-s [0]
> >>> E/LD: [0] 22250a54-0bf1-48fe-8002-7b20f1c9c9b1 @ 0xc004f000
> >>> E/LD: Call stack:
> >>> E/LD: 0xc0051a14
> >>> E/LD: 0xc004f31c
> >>> E/LD: 0xc0052d40
> >>> E/LD: 0xc004f624
> >>>
> >>> root@uz3cg-dwg-sec:/var/rootdirs/home/fio# cat /sys/kernel/debug/mmc0/err_stats
> >>> # Command Timeout Occurred: 0
> >>> # Command CRC Errors Occurred: 0
> >>> # Data Timeout Occurred: 0
> >>> # Data CRC Errors Occurred: 1
> >>> # Auto-Cmd Error Occurred: 0
> >>> # ADMA Error Occurred: 0
> >>> # Tuning Error Occurred: 0
> >>> # CMDQ RED Errors: 0
> >>> # CMDQ GCE Errors: 0
> >>> # CMDQ ICCE Errors: 0
> >>> # Request Timedout: 0
> >>> # CMDQ Request Timedout: 0
> >>> # ICE Config Errors: 0
> >>> # Controller Timedout errors: 0
> >>> # Unexpected IRQ errors: 0
> >>>
> >>>>> 2) Forcing this on everyone
> >>>>>
> >>>>> The original idea was that because re-tuning cannot be done in RPMB, the
> >>>>> need to re-rune in RPMB could be avoided by always re-tuning before
> >>>>> switching to RPMB and then switching straight back. IIRC re-tuning should
> >>>>> guarantee at least 4MB more I/O without issue.
> >>>> Performance is hardly an issue in the context of RPMB access -
> >>>> For most cases it’s a single frame.
> >>>
> >>> Yes, the security use case typically stores hashes, variables
> >>> (bootcount, upgrade_available, versions, that sort of thing) and
> >>> certificates in RPMB.
> >>>
> >>> Since you mentioned, I am seeing that tuning before switching to RPMB
> >>> has an impact on performance. As a practical test, just reading a 6 byte
> >>> variable incurs in 50ms penalty in kernel space due to the need to
> >>> retune 5 times. Not great since the request is coming from a Trusted
> >>> Application via OP-TEE through the supplicant meaning this TEE thread
> >>> (they are statically allocated CFG_NUM_THREADS) will be reserved for
> >>> quite a bit of time.
> >>>
> >>> Roughly:
> >>> TA --> OP-TEE (core) --> TEE-supplicant --> Kernel (>50ms) --> OP-TEE --> TA
> >>
> >> To add more detail to the timing above, when using RPMB, OP-TEE stores
> >> the secure filesystem on RPMB as well, so accessing one of the variables
> >> stored in the filesystem consists on a number (~5) of individual RPMB
> >> requests (each one forcing a retune, each retune taking around 10ms).
> >
> > Adrian, please could you comment on the above.
> >
> > The current code is a performance drag for systems that implement their
> > secure filesystems on RPMB (i.e: OP-TEE) causing each read operation (of
> > variables consisting of a few bytes stored in such a filesystem) to
> > perform 5 consecutive retune requests.
> >
> > I am just thinking whether the original use case that forces a call to
> > retune prior to processing the RPMB request remains valid.
>
> I am not sure what you are asking.
>
> For some transfer modes, re-tuning is expected to deal with sampling
> point drift over time, mainly due to temperature changes. It is done
> either periodically (tuning timer) or after a CRC error.
>
> There is no reason to assume RPMB is immune from that.
>
> Certainly re-tuning before switching to RPMB is not optimal for
> performance, and we can leave that out, but a CRC error before
> or during RPMB operations will *still* result in re-tuning
> after switching back from RPMB.
>
> In your case, re-tuning makes things worse, which is a bit of a
> mystery. Running the new re-tuning test would tell us whether
> it makes things worse in general, or only for RPMB.
>
> >
> > Independently of the fact that not doing so fixes the problem I was
> > working on - and with the information I have - I dont think RPMB is
> > generally used to store larger files (maybe you have more information
> > about the average use case? are you aware of systems using RPMB to store
> > binaries or images?)
> >
> > I still I have to execute the test you shared some weeks ago. Bit of a
> > pain to NFS boot this system...will try to do it this week.
>
> Other options are to boot with an initrd only, or after boot switch
> to a RAM-based file system.
>
> I was waiting for this, since it is good to try to get closer to a
> root cause, but as you point out, the change is good for performance
> also, so I will Ack it.
Great, thanks!
I was finally able to NFS-mount this project (it was a bit of a pain
since the board is using ostree).
The test failed; see below.
root@uz3cg-dwg-sec:/# uname -a
Linux uz3cg-dwg-sec 5.15.64-lmp-standard #1 SMP Thu Sep 1 02:40:19 UTC 2022 aarch64 GNU/Linux
root@uz3cg-dwg-sec:/# cat /proc/cmdline
earlycon console=ttyPS0,115200 clk_ignore_unused root=/dev/nfs nfsroot=192.168.1.9:/srv/nfs/rootfs rootwait rw nfsrootdebug ip=dhcp rootfstype=ext4 ostree=ostree/boot.1.1/lmp/5c73dc21eb70c12363747b90c04302115715fa46063a9099841cf23cc70c09a6/0
root@uz3cg-dwg-sec:/sys/bus/mmc/drivers# cd mmcblk/
root@uz3cg-dwg-sec:/sys/bus/mmc/drivers/mmcblk# ls
bind mmc0:0001 uevent unbind
root@uz3cg-dwg-sec:/sys/bus/mmc/drivers/mmcblk# echo 'mmc0:0001' > unbind
root@uz3cg-dwg-sec:/sys/bus/mmc/drivers# echo 'mmc0:0001' > mmc_test/bind
[ 284.253261] mmc_test mmc0:0001: Card claimed for testing.
root@uz3cg-dwg-sec:/sys/bus/mmc/drivers# cd ..
root@uz3cg-dwg-sec:/sys/bus/mmc# ls
devices drivers drivers_autoprobe drivers_probe uevent
root@uz3cg-dwg-sec:/sys/bus/mmc# cd /
root@uz3cg-dwg-sec:/# cat /sys/kernel/debug/mmc0/mmc0\:0001/test
test testlist
root@uz3cg-dwg-sec:/# cat /sys/kernel/debug/mmc0/mmc0\:0001/testlist | grep tuning
52: Re-tuning reliability
root@uz3cg-dwg-sec:/# echo 52 > /sys/kernel/debug/mmc0/mmc0\:0001/test
[ 352.283447] mmc0: Starting tests of card mmc0:0001...
[ 352.288597] mmc0: Test case 52. Re-tuning reliability...
[ 354.265441] mmc0: Result: ERROR (-84)
[ 354.269142] mmc0: Tests completed.
root@uz3cg-dwg-sec:/#
>
>
> >
> > TIA
> >
> >>
> >> BTW, I also tried delaying the timing between those consecutive retunes
> >> (up to 1 second), but the issue still persisted.
> >>
> >>>
> >>> Adrian, I couldn't find the original performance justification for
> >>> enabling this feature globally. At which point do you think it becomes
> >>> beneficial to retune before accessing RPMB?
> >>
> >> How should we proceed with this patch then? can it be merged as I
> >> proposed? should I rewrite it differently? not sure what is next
> >>
> >> TIA
> >> Jorge
>
On 3/01/24 00:01, Jorge Ramirez-Ortiz, Foundries wrote:
> On 02/01/24 21:01:52, Adrian Hunter wrote:
>> On 2/01/24 12:41, Jorge Ramirez-Ortiz, Foundries wrote:
>>> On 11/12/23 09:00:06, Jorge Ramirez-Ortiz, Foundries wrote:
>>>> On 06/12/23 11:00:47, Jorge Ramirez-Ortiz, Foundries wrote:
>>>>> On 06/12/23 07:02:43, Avri Altman wrote:
>>>>>>>
>>>>>>> On 4/12/23 17:01, Jorge Ramirez-Ortiz wrote:
>>>>>>>> Requesting a retune before switching to the RPMB partition has been
>>>>>>>> observed to cause CRC errors on the RPMB reads (-EILSEQ).
>>>>>>>
>>>>>>> There are still 2 concerns:
>>>>>>> 1) We don't really know the root cause. Have you determined if here are
>>>>>>> CRC errors in the main partition also?
>>>>>
>>>>> right, and I don't disagree with that.
>>>>>
>>>>> As a test I created a 4GB file from /dev/random which I then copied
>>>>> several times (dd if= ....)
>>>>>
>>>>> root@uz3cg-dwg-sec:/sys/kernel/debug/mmc0# cat err_stats
>>>>> # Command Timeout Occurred: 0
>>>>> # Command CRC Errors Occurred: 0
>>>>> # Data Timeout Occurred: 0
>>>>> # Data CRC Errors Occurred: 0
>>>>> # Auto-Cmd Error Occurred: 0
>>>>> # ADMA Error Occurred: 0
>>>>> # Tuning Error Occurred: 0
>>>>> # CMDQ RED Errors: 0
>>>>> # CMDQ GCE Errors: 0
>>>>> # CMDQ ICCE Errors: 0
>>>>> # Request Timedout: 0
>>>>> # CMDQ Request Timedout: 0
>>>>> # ICE Config Errors: 0
>>>>> # Controller Timedout errors: 0
>>>>> # Unexpected IRQ errors: 0
>>>>>
>>>>> However as soon as I access RPMB and fails (it takes just a few tries) I see:
>>>>>
>>>>> I/TC: RPMB: Using generated key
>>>>> [ 86.902118] sdhci-arasan ff160000.mmc: __mmc_blk_ioctl_cmd: data error -84
>>>>> E/TC:? 0
>>>>> E/TC:? 0 TA panicked with code 0xffff0000
>>>>> E/LD: Status of TA 22250a54-0bf1-48fe-8002-7b20f1c9c9b1
>>>>> E/LD: arch: aarch64
>>>>> E/LD: region 0: va 0xc0004000 pa 0x7e200000 size 0x002000 flags rw-s (ldelf)
>>>>> E/LD: region 1: va 0xc0006000 pa 0x7e202000 size 0x008000 flags r-xs (ldelf)
>>>>> E/LD: region 2: va 0xc000e000 pa 0x7e20a000 size 0x001000 flags rw-s (ldelf)
>>>>> E/LD: region 3: va 0xc000f000 pa 0x7e20b000 size 0x004000 flags rw-s (ldelf)
>>>>> E/LD: region 4: va 0xc0013000 pa 0x7e20f000 size 0x001000 flags r--s
>>>>> E/LD: region 5: va 0xc0014000 pa 0x7e22c000 size 0x005000 flags rw-s (stack)
>>>>> E/LD: region 6: va 0xc0019000 pa 0x818ea9ba8 size 0x002000 flags rw-- (param)
>>>>> E/LD: region 7: va 0xc001b000 pa 0x818e97ba8 size 0x001000 flags rw-- (param)
>>>>> E/LD: region 8: va 0xc004f000 pa 0x00001000 size 0x014000 flags r-xs [0]
>>>>> E/LD: region 9: va 0xc0063000 pa 0x00015000 size 0x008000 flags rw-s [0]
>>>>> E/LD: [0] 22250a54-0bf1-48fe-8002-7b20f1c9c9b1 @ 0xc004f000
>>>>> E/LD: Call stack:
>>>>> E/LD: 0xc0051a14
>>>>> E/LD: 0xc004f31c
>>>>> E/LD: 0xc0052d40
>>>>> E/LD: 0xc004f624
>>>>>
>>>>> root@uz3cg-dwg-sec:/var/rootdirs/home/fio# cat /sys/kernel/debug/mmc0/err_stats
>>>>> # Command Timeout Occurred: 0
>>>>> # Command CRC Errors Occurred: 0
>>>>> # Data Timeout Occurred: 0
>>>>> # Data CRC Errors Occurred: 1
>>>>> # Auto-Cmd Error Occurred: 0
>>>>> # ADMA Error Occurred: 0
>>>>> # Tuning Error Occurred: 0
>>>>> # CMDQ RED Errors: 0
>>>>> # CMDQ GCE Errors: 0
>>>>> # CMDQ ICCE Errors: 0
>>>>> # Request Timedout: 0
>>>>> # CMDQ Request Timedout: 0
>>>>> # ICE Config Errors: 0
>>>>> # Controller Timedout errors: 0
>>>>> # Unexpected IRQ errors: 0
>>>>>
>>>>>>> 2) Forcing this on everyone
>>>>>>>
>>>>>>> The original idea was that because re-tuning cannot be done in RPMB, the
>>>>>>> need to re-rune in RPMB could be avoided by always re-tuning before
>>>>>>> switching to RPMB and then switching straight back. IIRC re-tuning should
>>>>>>> guarantee at least 4MB more I/O without issue.
>>>>>> Performance is hardly an issue in the context of RPMB access -
>>>>>> For most cases it’s a single frame.
>>>>>
>>>>> Yes, the security use case typically stores hashes, variables
>>>>> (bootcount, upgrade_available, versions, that sort of thing) and
>>>>> certificates in RPMB.
>>>>>
>>>>> Since you mentioned, I am seeing that tuning before switching to RPMB
>>>>> has an impact on performance. As a practical test, just reading a 6 byte
>>>>> variable incurs in 50ms penalty in kernel space due to the need to
>>>>> retune 5 times. Not great since the request is coming from a Trusted
>>>>> Application via OP-TEE through the supplicant meaning this TEE thread
>>>>> (they are statically allocated CFG_NUM_THREADS) will be reserved for
>>>>> quite a bit of time.
>>>>>
>>>>> Roughly:
>>>>> TA --> OP-TEE (core) --> TEE-supplicant --> Kernel (>50ms) --> OP-TEE --> TA
>>>>
>>>> To add more detail to the timing above, when using RPMB, OP-TEE stores
>>>> the secure filesystem on RPMB as well, so accessing one of the variables
>>>> stored in the filesystem consists on a number (~5) of individual RPMB
>>>> requests (each one forcing a retune, each retune taking around 10ms).
>>>
>>> Adrian, please could you comment on the above.
>>>
>>> The current code is a performance drag for systems that implement their
>>> secure filesystems on RPMB (i.e: OP-TEE) causing each read operation (of
>>> variables consisting of a few bytes stored in such a filesystem) to
>>> perform 5 consecutive retune requests.
>>>
>>> I am just thinking whether the original use case that forces a call to
>>> retune prior to processing the RPMB request remains valid.
>>
>> I am not sure what you are asking.
>>
>> For some transfer modes, re-tuning is expected to deal with sampling
>> point drift over time, mainly due to temperature changes. It is done
>> either periodically (tuning timer) or after a CRC error.
>>
>> There is no reason to assume RPMB is immune from that.
>>
>> Certainly re-tuning before switching to RPMB is not optimal for
>> performance, and we can leave that out, but a CRC error before
>> or during RPMB operations will *still* result in re-tuning
>> after switching back from RPMB.
>>
>> In your case, re-tuning makes things worse, which is a bit of a
>> mystery. Running the new re-tuning test would tell us whether
>> it makes things worse in general, or only for RPMB.
>>
>>>
>>> Independently of the fact that not doing so fixes the problem I was
>>> working on - and with the information I have - I dont think RPMB is
>>> generally used to store larger files (maybe you have more information
>>> about the average use case? are you aware of systems using RPMB to store
>>> binaries or images?)
>>>
>>> I still I have to execute the test you shared some weeks ago. Bit of a
>>> pain to NFS boot this system...will try to do it this week.
>>
>> Other options are to boot with an initrd only, or after boot switch
>> to a RAM-based file system.
>>
>> I was waiting for this, since it is good to try to get closer to a
>> root cause, but as you point out, the change is good for performance
>> also, so I will Ack it.
>
> great, thanks!
> I was finally able to nfs mount this project (was a bit of a pain since
> the board is using ostree).
>
> The test failed. See below
>
> root@uz3cg-dwg-sec:/# uname -a
> Linux uz3cg-dwg-sec 5.15.64-lmp-standard #1 SMP Thu Sep 1 02:40:19 UTC 2022 aarch64 GNU/Linux
>
> root@uz3cg-dwg-sec:/# cat /proc/cmdline
> earlycon console=ttyPS0,115200 clk_ignore_unused root=/dev/nfs nfsroot=192.168.1.9:/srv/nfs/rootfs rootwait rw nfsrootdebug ip=dhcp rootfstype=ext4 ostree=ostree/boot.1.1/lmp/5c73dc21eb70c12363747b90c04302115715fa46063a9099841cf23cc70c09a6/0
>
> root@uz3cg-dwg-sec:/sys/bus/mmc/drivers# cd mmcblk/
>
> root@uz3cg-dwg-sec:/sys/bus/mmc/drivers/mmcblk# ls
> bind mmc0:0001 uevent unbind
>
> root@uz3cg-dwg-sec:/sys/bus/mmc/drivers/mmcblk# echo 'mmc0:0001' > unbind
>
> root@uz3cg-dwg-sec:/sys/bus/mmc/drivers# echo 'mmc0:0001' > mmc_test/bind
> [ 284.253261] mmc_test mmc0:0001: Card claimed for testing.
>
> root@uz3cg-dwg-sec:/sys/bus/mmc/drivers# cd ..
> root@uz3cg-dwg-sec:/sys/bus/mmc# ls
> devices drivers drivers_autoprobe drivers_probe uevent
>
> root@uz3cg-dwg-sec:/sys/bus/mmc# cd /
>
> root@uz3cg-dwg-sec:/# cat /sys/kernel/debug/mmc0/mmc0\:0001/test
> test testlist
>
> root@uz3cg-dwg-sec:/# cat /sys/kernel/debug/mmc0/mmc0\:0001/testlist | grep tuning
> 52: Re-tuning reliability
>
> root@uz3cg-dwg-sec:/# echo 52 > /sys/kernel/debug/mmc0/mmc0\:0001/test
> [ 352.283447] mmc0: Starting tests of card mmc0:0001...
> [ 352.288597] mmc0: Test case 52. Re-tuning reliability...
> [ 354.265441] mmc0: Result: ERROR (-84)
> [ 354.269142] mmc0: Tests completed.
>
> root@uz3cg-dwg-sec:/#
Thanks for doing that! That seems to explain the mystery.
You could hack the test to get an idea of how many successful
iterations there are before getting an error.
For SDHCI, one difference between tuning and re-tuning is the
setting of bit-7 "Sampling Clock Select" of "Host Control 2 Register".
It is initially 0 and then set to 1 after the successful tuning.
Essentially, leaving it set to 1 is meant to speed up the re-tuning.
You could try setting it to zero instead, and see if that helps.
e.g.
diff --git a/drivers/mmc/host/sdhci.c b/drivers/mmc/host/sdhci.c
index c79f73459915..714d8cc39709 100644
--- a/drivers/mmc/host/sdhci.c
+++ b/drivers/mmc/host/sdhci.c
@@ -2732,6 +2732,7 @@ void sdhci_start_tuning(struct sdhci_host *host)
ctrl |= SDHCI_CTRL_EXEC_TUNING;
if (host->quirks2 & SDHCI_QUIRK2_TUNING_WORK_AROUND)
ctrl |= SDHCI_CTRL_TUNED_CLK;
+ ctrl &= ~SDHCI_CTRL_TUNED_CLK;
sdhci_writew(host, ctrl, SDHCI_HOST_CONTROL2);
/*
>
>
>
>
>>
>>
>>>
>>> TIA
>>>
>>>>
>>>> BTW, I also tried delaying the timing between those consecutive retunes
>>>> (up to 1 second), but the issue still persisted.
>>>>
>>>>>
>>>>> Adrian, I couldn't find the original performance justification for
>>>>> enabling this feature globally. At which point do you think it becomes
>>>>> beneficial to retune before accessing RPMB?
>>>>
>>>> How should we proceed with this patch then? can it be merged as I
>>>> proposed? should I rewrite it differently? not sure what is next
>>>>
>>>> TIA
>>>> Jorge
>>
On 03/01/24 10:03:38, Adrian Hunter wrote:
> Thanks for doing that! That seems to explain the mystery.
>
> You could hack the test to get an idea of how many successful
> iterations there are before getting an error.
>
> For SDHCI, one difference between tuning and re-tuning is the
> setting of bit-7 "Sampling Clock Select" of "Host Control 2 Register".
> It is initially 0 and then set to 1 after the successful tuning.
> Essentially, leaving it set to 1 is meant to speed up the re-tuning.
> You could try setting it to zero instead, and see if that helps.
> e.g.
>
> diff --git a/drivers/mmc/host/sdhci.c b/drivers/mmc/host/sdhci.c
> index c79f73459915..714d8cc39709 100644
> --- a/drivers/mmc/host/sdhci.c
> +++ b/drivers/mmc/host/sdhci.c
> @@ -2732,6 +2732,7 @@ void sdhci_start_tuning(struct sdhci_host *host)
> ctrl |= SDHCI_CTRL_EXEC_TUNING;
> if (host->quirks2 & SDHCI_QUIRK2_TUNING_WORK_AROUND)
> ctrl |= SDHCI_CTRL_TUNED_CLK;
> + ctrl &= ~SDHCI_CTRL_TUNED_CLK;
> sdhci_writew(host, ctrl, SDHCI_HOST_CONTROL2);
>
> /*
>
Yes, with that change the re-tuning reliability test does pass.
root@uz3cg-dwg-sec:/sys/kernel/debug/mmc0# echo 52 > /sys/kernel/debug/mmc0/mmc0\:0001/test
[ 237.833585] mmc0: Starting tests of card mmc0:0001...
[ 237.838759] mmc0: Test case 52. Re-tuning reliability...
[ 267.845403] mmc0: Result: OK
[ 267.848365] mmc0: Tests completed.
Unfortunately I still see the error when looping on RPMB reads.
For instance, with this test script:
$ while true; do rpmb_read m4hash; usleep 300; done
I can see the error triggering on the serial port after a minute or so.
[ 151.682907] sdhci-arasan ff160000.mmc: __mmc_blk_ioctl_cmd: data error -84
This causes OP-TEE to panic, since the RPMB read returns an error:
E/TC:? 0
E/TC:? 0 TA panicked with code 0xffff0000
E/LD: Status of TA 22250a54-0bf1-48fe-8002-7b20f1c9c9b1
E/LD: arch: aarch64
[...]
If anything else springs to mind I am happy to test it, of course - there
are so many tunables in this subsystem that experience in this area has
exponential value (and I don't have much).
Would it make sense to reject re-tuning requests unless a minimum number
of jiffies has passed since the last one? Should I try that as a change?
Or maybe delay the RPMB access a bit longer after a retune request?
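Something like this is what I had in mind - purely hypothetical and
untested; last_retune would be a new field updated when tuning completes,
and the 100 ms threshold is arbitrary:

static inline void mmc_retune_needed(struct mmc_host *host)
{
        /* Ignore requests arriving too soon after the previous tuning */
        if (host->can_retune &&
            time_after(jiffies, host->last_retune + msecs_to_jiffies(100)))
                host->need_retune = 1;
}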
On 3/01/24 11:20, Jorge Ramirez-Ortiz, Foundries wrote:
> On 03/01/24 10:03:38, Adrian Hunter wrote:
>> Thanks for doing that! That seems to explain the mystery.
>>
>> You could hack the test to get an idea of how many successful
>> iterations there are before getting an error.
>>
>> For SDHCI, one difference between tuning and re-tuning is the
>> setting of bit-7 "Sampling Clock Select" of "Host Control 2 Register".
>> It is initially 0 and then set to 1 after the successful tuning.
>> Essentially, leaving it set to 1 is meant to speed up the re-tuning.
>> You could try setting it to zero instead, and see if that helps.
>> e.g.
>>
>> diff --git a/drivers/mmc/host/sdhci.c b/drivers/mmc/host/sdhci.c
>> index c79f73459915..714d8cc39709 100644
>> --- a/drivers/mmc/host/sdhci.c
>> +++ b/drivers/mmc/host/sdhci.c
>> @@ -2732,6 +2732,7 @@ void sdhci_start_tuning(struct sdhci_host *host)
>> ctrl |= SDHCI_CTRL_EXEC_TUNING;
>> if (host->quirks2 & SDHCI_QUIRK2_TUNING_WORK_AROUND)
>> ctrl |= SDHCI_CTRL_TUNED_CLK;
>> + ctrl &= ~SDHCI_CTRL_TUNED_CLK;
>> sdhci_writew(host, ctrl, SDHCI_HOST_CONTROL2);
>>
>> /*
>>
>
>
> Yes with that change, the re-tuning reliability test does pass.
>
> root@uz3cg-dwg-sec:/sys/kernel/debug/mmc0# echo 52 > /sys/kernel/debug/mmc0/mmc0\:0001/test
> [ 237.833585] mmc0: Starting tests of card mmc0:0001...
> [ 237.838759] mmc0: Test case 52. Re-tuning reliability...
> [ 267.845403] mmc0: Result: OK
> [ 267.848365] mmc0: Tests completed.
>
>
> Unfortunately I still see the error when looping on RPMB reads.
>
> For instance with this test script
> $ while true; do rpmb_read m4hash; usleep 300; done
>
> I can see the error triggering on the serial port after a minute or so.
> [ 151.682907] sdhci-arasan ff160000.mmc: __mmc_blk_ioctl_cmd: data error -84
>
> Causing OP-TEE to panic since the RPMB read returns an error
> E/TC:? 0
> E/TC:? 0 TA panicked with code 0xffff0000
> E/LD: Status of TA 22250a54-0bf1-48fe-8002-7b20f1c9c9b1
> E/LD: arch: aarch64
> [...]
>
> if anything else springs to your mind I am happy to test of course - there are
> so many tunnables in this subsystem that experience is this area has exponential
> value (and I dont have much).
>
> Would it make sense if re-tuning requests are rejected unless a minimum number
> of jiffies have passed? should I try that as a change?
>
> or maybe delay a bit longer the RPMB access after a retune request?
It seems re-tuning is not working properly, so ideally the
SoC vendor / driver implementer would provide a solution.
There is also mmc_doing_retune() which could be used to skip
tuning execution entirely in the case of re-tuning.
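For example - only an illustration, not a tested change - a driver's
->execute_tuning() hook could bail out early for re-tunes and keep the
previously found sampling point (example_execute_tuning is a made-up name):

static int example_execute_tuning(struct mmc_host *mmc, u32 opcode)
{
        /* Skip the tuning sequence when this is a re-tune */
        if (mmc_doing_retune(mmc))
                return 0;

        return sdhci_execute_tuning(mmc, opcode);
}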