The existing TPM polling code sleeps in each loop iteration for a
period ranging from 1 msec to 5 msecs. However, many TPM commands
complete much faster, resulting in unnecessary delays.
This set of patches identifies such iterations and optimizes the sleep
time. The first patch replaces TPM_POLL_SLEEP with TPM_TIMEOUT_POLL and
moves it from tpm_tis_core.c to tpm.h as an enum with a value of
1 msec. The second patch further reduces the TPM poll sleep time in
get_burstcount() and wait_for_tpm_stat() in tpm_tis_core.c by calling
usleep_range() directly.
The change affects only the polling interval; the maximum timeout is
kept the same. Thus, it should not affect the overall existing
behavior.
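To make the pattern concrete, the loops in question all follow the
shape sketched below (a simplified illustration, not the exact driver
code; chip_is_ready() is a hypothetical stand-in for the real status
check). Only the per-iteration sleep shrinks; the deadline, and
therefore the worst-case wait, is unchanged.

/* Simplified poll-with-deadline pattern (illustrative only) */
static int poll_until_ready(struct tpm_chip *chip, unsigned long timeout)
{
	unsigned long stop = jiffies + timeout;	/* overall timeout kept */

	do {
		if (chip_is_ready(chip))	/* hypothetical helper */
			return 0;
		tpm_msleep(TPM_TIMEOUT_POLL);	/* 1 msec instead of 5 */
	} while (time_before(jiffies, stop));

	return -ETIME;
}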
Changelog:
v2:
tpm: reduce poll sleep time in tpm_transmit()
* merged the two previously separate patches into one.
* updated the patch description as per Jarkko's feedback
tpm: reduce polling time to usecs for even finer granularity
* directly use usleep_range() with a granularity finer than 1 msec
Nayna Jain (2):
tpm: reduce poll sleep time in tpm_transmit()
tpm: reduce polling time to usecs for even finer granularity
drivers/char/tpm/tpm-interface.c | 2 +-
drivers/char/tpm/tpm.h | 5 ++++-
drivers/char/tpm/tpm_tis_core.c | 11 +++--------
3 files changed, 8 insertions(+), 10 deletions(-)
--
2.13.3
The TPM polling code in tpm_transmit() sleeps in each loop iteration
for 5 msecs. However, the TPM might return earlier, and thus waiting
for the full 5 msecs adds an unnecessary delay. This patch reduces the
polling sleep time in tpm_transmit() from 5 msecs to 1 msec.
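For context, tpm_msleep() is itself a thin wrapper around
usleep_range(); paraphrased from tpm.h (approximate, the exact bounds
in a given tree may differ):

static inline void tpm_msleep(unsigned int delay_msec)
{
	usleep_range(delay_msec * 1000,
		     (delay_msec * 1000) + TPM_TIMEOUT_RANGE_US);
}

So even at TPM_TIMEOUT_POLL, each iteration still sleeps on the order
of a millisecond, which is what the second patch addresses by calling
usleep_range() with a sub-millisecond window directly.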
Additionally, this patch renames TPM_POLL_SLEEP and moves it to tpm.h as
an enum value.
After this change, performance on a TPM 1.2 with an 8 byte burstcount
for 1000 extends improved from ~14 sec to ~10.7 sec.
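(The benchmark tool itself is not included in this thread; a minimal
hypothetical equivalent of the "1000 extends" measurement against
/dev/tpm0 is sketched below. The command bytes encode a TPM 1.2
TPM_Extend of PCR 16; absolute timings will of course vary with
platform and chip.)

/* Hypothetical benchmark sketch: time 1000 TPM 1.2 extends through
 * /dev/tpm0. Names and details here are illustrative only.
 */
#include <fcntl.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
	/* TPM_TAG_RQU_COMMAND (0x00C1), paramSize 34 (0x22),
	 * TPM_ORD_Extend (0x14), pcrNum 16, then a 20-byte digest
	 * (all zeros here, which is fine for timing purposes).
	 */
	unsigned char cmd[34] = {
		0x00, 0xC1, 0x00, 0x00, 0x00, 0x22,
		0x00, 0x00, 0x00, 0x14, 0x00, 0x00, 0x00, 0x10,
	};
	unsigned char resp[64];
	struct timespec t0, t1;
	int i, fd = open("/dev/tpm0", O_RDWR);

	if (fd < 0) {
		perror("open /dev/tpm0");
		return 1;
	}
	clock_gettime(CLOCK_MONOTONIC, &t0);
	for (i = 0; i < 1000; i++) {
		if (write(fd, cmd, sizeof(cmd)) != sizeof(cmd) ||
		    read(fd, resp, sizeof(resp)) < 10) {
			perror("extend");
			break;
		}
	}
	clock_gettime(CLOCK_MONOTONIC, &t1);
	printf("1000 extends took ~%.1f sec\n",
	       (t1.tv_sec - t0.tv_sec) +
	       (t1.tv_nsec - t0.tv_nsec) / 1e9);
	close(fd);
	return 0;
}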
Signed-off-by: Nayna Jain <[email protected]>
---
drivers/char/tpm/tpm-interface.c | 2 +-
drivers/char/tpm/tpm.h | 3 ++-
drivers/char/tpm/tpm_tis_core.c | 10 ++--------
3 files changed, 5 insertions(+), 10 deletions(-)
diff --git a/drivers/char/tpm/tpm-interface.c b/drivers/char/tpm/tpm-interface.c
index 9e80a953d693..a676d8ad5992 100644
--- a/drivers/char/tpm/tpm-interface.c
+++ b/drivers/char/tpm/tpm-interface.c
@@ -470,7 +470,7 @@ ssize_t tpm_transmit(struct tpm_chip *chip, struct tpm_space *space,
goto out;
}
- tpm_msleep(TPM_TIMEOUT);
+ tpm_msleep(TPM_TIMEOUT_POLL);
rmb();
} while (time_before(jiffies, stop));
diff --git a/drivers/char/tpm/tpm.h b/drivers/char/tpm/tpm.h
index f895fba4e20d..7e797377e1eb 100644
--- a/drivers/char/tpm/tpm.h
+++ b/drivers/char/tpm/tpm.h
@@ -53,7 +53,8 @@ enum tpm_const {
enum tpm_timeout {
TPM_TIMEOUT = 5, /* msecs */
TPM_TIMEOUT_RETRY = 100, /* msecs */
- TPM_TIMEOUT_RANGE_US = 300 /* usecs */
+ TPM_TIMEOUT_RANGE_US = 300, /* usecs */
+ TPM_TIMEOUT_POLL = 1 /* msecs */
};
/* TPM addresses */
diff --git a/drivers/char/tpm/tpm_tis_core.c b/drivers/char/tpm/tpm_tis_core.c
index da074e3db19b..021e6b68f2db 100644
--- a/drivers/char/tpm/tpm_tis_core.c
+++ b/drivers/char/tpm/tpm_tis_core.c
@@ -31,12 +31,6 @@
#include "tpm.h"
#include "tpm_tis_core.h"
-/* This is a polling delay to check for status and burstcount.
- * As per ddwg input, expectation is that status check and burstcount
- * check should return within few usecs.
- */
-#define TPM_POLL_SLEEP 1 /* msec */
-
static void tpm_tis_clkrun_enable(struct tpm_chip *chip, bool value);
static bool wait_for_tpm_stat_cond(struct tpm_chip *chip, u8 mask,
@@ -90,7 +84,7 @@ static int wait_for_tpm_stat(struct tpm_chip *chip, u8 mask,
}
} else {
do {
- tpm_msleep(TPM_POLL_SLEEP);
+ tpm_msleep(TPM_TIMEOUT_POLL);
status = chip->ops->status(chip);
if ((status & mask) == mask)
return 0;
@@ -232,7 +226,7 @@ static int get_burstcount(struct tpm_chip *chip)
burstcnt = (value >> 8) & 0xFFFF;
if (burstcnt)
return burstcnt;
- tpm_msleep(TPM_POLL_SLEEP);
+ tpm_msleep(TPM_TIMEOUT_POLL);
} while (time_before(jiffies, stop));
return -EBUSY;
}
--
2.13.3
The TPM burstcount and status commands are supposed to return very
quickly [1][2]. This patch further reduces the TPM poll sleep time to usecs
in get_burstcount() and wait_for_tpm_stat() by calling usleep_range()
directly.
After this change, performance on a TPM 1.2 with an 8 byte burstcount for
1000 extends improved from ~10.7 sec to ~7 sec.
[1] From TCG Specification "TCG PC Client Specific TPM Interface
Specification (TIS), Family 1.2":
"NOTE : It takes roughly 330 ns per byte transfer on LPC. 256 bytes would
take 84 us, which is a long time to stall the CPU. Chipsets may not be
designed to post this much data to LPC; therefore, the CPU itself is
stalled for much of this time. Sending 1 kB would take 350 μs. Therefore,
even if the TPM_STS_x.burstCount field is a high value, software SHOULD
be interruptible during this period."
[2] From TCG Specification 2.0, "TCG PC Client Platform TPM Profile
(PTP) Specification":
"It takes roughly 330 ns per byte transfer on LPC. 256 bytes would take
84 us. Chipsets may not be designed to post this much data to LPC;
therefore, the CPU itself is stalled for much of this time. Sending 1 kB
would take 350 us. Therefore, even if the TPM_STS_x.burstCount field is a
high value, software should be interruptible during this period. For SPI,
assuming 20MHz clock and 64-byte transfers, it would take about 120 usec
to move 256B of data. Sending 1kB would take about 500 usec. If the
transactions are done using 4 bytes at a time, then it would take about
1 msec. to transfer 1kB of data."
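Working the quoted numbers as a sanity check: 256 bytes x 330 ns/byte
is about 84.5 usecs, and 1024 bytes x 330 ns/byte is about 338 usecs,
i.e. roughly the quoted 350 usecs. The 100-500 usecs usleep_range()
window chosen below therefore sits within the turnaround times the
spec leads one to expect, while the min/max range still gives the
scheduler room to coalesce wakeups.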
Signed-off-by: Nayna Jain <[email protected]>
---
drivers/char/tpm/tpm.h | 4 +++-
drivers/char/tpm/tpm_tis_core.c | 5 +++--
2 files changed, 6 insertions(+), 3 deletions(-)
diff --git a/drivers/char/tpm/tpm.h b/drivers/char/tpm/tpm.h
index 7e797377e1eb..f0e4d290c347 100644
--- a/drivers/char/tpm/tpm.h
+++ b/drivers/char/tpm/tpm.h
@@ -54,7 +54,9 @@ enum tpm_timeout {
TPM_TIMEOUT = 5, /* msecs */
TPM_TIMEOUT_RETRY = 100, /* msecs */
TPM_TIMEOUT_RANGE_US = 300, /* usecs */
- TPM_TIMEOUT_POLL = 1 /* msecs */
+ TPM_TIMEOUT_POLL = 1, /* msecs */
+ TPM_TIMEOUT_USECS_MIN = 100, /* usecs */
+ TPM_TIMEOUT_USECS_MAX = 500 /* usecs */
};
/* TPM addresses */
diff --git a/drivers/char/tpm/tpm_tis_core.c b/drivers/char/tpm/tpm_tis_core.c
index 021e6b68f2db..5bba5c662423 100644
--- a/drivers/char/tpm/tpm_tis_core.c
+++ b/drivers/char/tpm/tpm_tis_core.c
@@ -84,7 +84,8 @@ static int wait_for_tpm_stat(struct tpm_chip *chip, u8 mask,
}
} else {
do {
- tpm_msleep(TPM_TIMEOUT_POLL);
+ usleep_range(TPM_TIMEOUT_USECS_MIN,
+ TPM_TIMEOUT_USECS_MAX);
status = chip->ops->status(chip);
if ((status & mask) == mask)
return 0;
@@ -226,7 +227,7 @@ static int get_burstcount(struct tpm_chip *chip)
burstcnt = (value >> 8) & 0xFFFF;
if (burstcnt)
return burstcnt;
- tpm_msleep(TPM_TIMEOUT_POLL);
+ usleep_range(TPM_TIMEOUT_USECS_MIN, TPM_TIMEOUT_USECS_MAX);
} while (time_before(jiffies, stop));
return -EBUSY;
}
--
2.13.3
On Tue, 2018-04-17 at 09:12 -0400, Nayna Jain wrote:
> The TPM polling code in tpm_transmit() sleeps in each loop iteration
> for 5 msecs. However, the TPM might return earlier, and thus waiting
> for the full 5 msecs adds an unnecessary delay. This patch reduces the
> polling sleep time in tpm_transmit() from 5 msecs to 1 msec.
>
> Additionally, this patch renames TPM_POLL_SLEEP and moves it to tpm.h as
> an enum value.
>
> After this change, performance on a TPM 1.2 with an 8 byte burstcount
> for 1000 extends improved from ~14 sec to ~10.7 sec.
>
> Signed-off-by: Nayna Jain <[email protected]>
Reviewed-by: Mimi Zohar <[email protected]>
> ---
> drivers/char/tpm/tpm-interface.c | 2 +-
> drivers/char/tpm/tpm.h | 3 ++-
> drivers/char/tpm/tpm_tis_core.c | 10 ++--------
> 3 files changed, 5 insertions(+), 10 deletions(-)
>
> diff --git a/drivers/char/tpm/tpm-interface.c b/drivers/char/tpm/tpm-interface.c
> index 9e80a953d693..a676d8ad5992 100644
> --- a/drivers/char/tpm/tpm-interface.c
> +++ b/drivers/char/tpm/tpm-interface.c
> @@ -470,7 +470,7 @@ ssize_t tpm_transmit(struct tpm_chip *chip, struct tpm_space *space,
> goto out;
> }
>
> - tpm_msleep(TPM_TIMEOUT);
> + tpm_msleep(TPM_TIMEOUT_POLL);
> rmb();
> } while (time_before(jiffies, stop));
>
> diff --git a/drivers/char/tpm/tpm.h b/drivers/char/tpm/tpm.h
> index f895fba4e20d..7e797377e1eb 100644
> --- a/drivers/char/tpm/tpm.h
> +++ b/drivers/char/tpm/tpm.h
> @@ -53,7 +53,8 @@ enum tpm_const {
> enum tpm_timeout {
> TPM_TIMEOUT = 5, /* msecs */
> TPM_TIMEOUT_RETRY = 100, /* msecs */
> - TPM_TIMEOUT_RANGE_US = 300 /* usecs */
> + TPM_TIMEOUT_RANGE_US = 300, /* usecs */
> + TPM_TIMEOUT_POLL = 1 /* msecs */
> };
>
> /* TPM addresses */
> diff --git a/drivers/char/tpm/tpm_tis_core.c b/drivers/char/tpm/tpm_tis_core.c
> index da074e3db19b..021e6b68f2db 100644
> --- a/drivers/char/tpm/tpm_tis_core.c
> +++ b/drivers/char/tpm/tpm_tis_core.c
> @@ -31,12 +31,6 @@
> #include "tpm.h"
> #include "tpm_tis_core.h"
>
> -/* This is a polling delay to check for status and burstcount.
> - * As per ddwg input, expectation is that status check and burstcount
> - * check should return within few usecs.
> - */
> -#define TPM_POLL_SLEEP 1 /* msec */
> -
> static void tpm_tis_clkrun_enable(struct tpm_chip *chip, bool value);
>
> static bool wait_for_tpm_stat_cond(struct tpm_chip *chip, u8 mask,
> @@ -90,7 +84,7 @@ static int wait_for_tpm_stat(struct tpm_chip *chip, u8 mask,
> }
> } else {
> do {
> - tpm_msleep(TPM_POLL_SLEEP);
> + tpm_msleep(TPM_TIMEOUT_POLL);
> status = chip->ops->status(chip);
> if ((status & mask) == mask)
> return 0;
> @@ -232,7 +226,7 @@ static int get_burstcount(struct tpm_chip *chip)
> burstcnt = (value >> 8) & 0xFFFF;
> if (burstcnt)
> return burstcnt;
> - tpm_msleep(TPM_POLL_SLEEP);
> + tpm_msleep(TPM_TIMEOUT_POLL);
> } while (time_before(jiffies, stop));
> return -EBUSY;
> }
On Tue, 2018-04-17 at 09:12 -0400, Nayna Jain wrote:
> The TPM burstcount and status commands are supposed to return very
> quickly [1][2]. This patch further reduces the TPM poll sleep time to usecs
> in get_burstcount() and wait_for_tpm_stat() by calling usleep_range()
> directly.
>
> After this change, performance on a TPM 1.2 with an 8 byte burstcount for
> 1000 extends improved from ~10.7 sec to ~7 sec.
>
> [1] From TCG Specification "TCG PC Client Specific TPM Interface
> Specification (TIS), Family 1.2":
>
> "NOTE : It takes roughly 330 ns per byte transfer on LPC. 256 bytes would
> take 84 us, which is a long time to stall the CPU. Chipsets may not be
> designed to post this much data to LPC; therefore, the CPU itself is
> stalled for much of this time. Sending 1 kB would take 350 μs. Therefore,
> even if the TPM_STS_x.burstCount field is a high value, software SHOULD
> be interruptible during this period."
>
> [2] From TCG Specification 2.0, "TCG PC Client Platform TPM Profile
> (PTP) Specification":
>
> "It takes roughly 330 ns per byte transfer on LPC. 256 bytes would take
> 84 us. Chipsets may not be designed to post this much data to LPC;
> therefore, the CPU itself is stalled for much of this time. Sending 1 kB
> would take 350 us. Therefore, even if the TPM_STS_x.burstCount field is a
> high value, software should be interruptible during this period. For SPI,
> assuming 20MHz clock and 64-byte transfers, it would take about 120 usec
> to move 256B of data. Sending 1kB would take about 500 usec. If the
> transactions are done using 4 bytes at a time, then it would take about
> 1 msec. to transfer 1kB of data."
>
> Signed-off-by: Nayna Jain <[email protected]>
Reviewed-by: Mimi Zohar <[email protected]>
> ---
> drivers/char/tpm/tpm.h | 4 +++-
> drivers/char/tpm/tpm_tis_core.c | 5 +++--
> 2 files changed, 6 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/char/tpm/tpm.h b/drivers/char/tpm/tpm.h
> index 7e797377e1eb..f0e4d290c347 100644
> --- a/drivers/char/tpm/tpm.h
> +++ b/drivers/char/tpm/tpm.h
> @@ -54,7 +54,9 @@ enum tpm_timeout {
> TPM_TIMEOUT = 5, /* msecs */
> TPM_TIMEOUT_RETRY = 100, /* msecs */
> TPM_TIMEOUT_RANGE_US = 300, /* usecs */
> - TPM_TIMEOUT_POLL = 1 /* msecs */
> + TPM_TIMEOUT_POLL = 1, /* msecs */
> + TPM_TIMEOUT_USECS_MIN = 100, /* usecs */
> + TPM_TIMEOUT_USECS_MAX = 500 /* usecs */
> };
>
> /* TPM addresses */
> diff --git a/drivers/char/tpm/tpm_tis_core.c b/drivers/char/tpm/tpm_tis_core.c
> index 021e6b68f2db..5bba5c662423 100644
> --- a/drivers/char/tpm/tpm_tis_core.c
> +++ b/drivers/char/tpm/tpm_tis_core.c
> @@ -84,7 +84,8 @@ static int wait_for_tpm_stat(struct tpm_chip *chip, u8 mask,
> }
> } else {
> do {
> - tpm_msleep(TPM_TIMEOUT_POLL);
> + usleep_range(TPM_TIMEOUT_USECS_MIN,
> + TPM_TIMEOUT_USECS_MAX);
> status = chip->ops->status(chip);
> if ((status & mask) == mask)
> return 0;
> @@ -226,7 +227,7 @@ static int get_burstcount(struct tpm_chip *chip)
> burstcnt = (value >> 8) & 0xFFFF;
> if (burstcnt)
> return burstcnt;
> - tpm_msleep(TPM_TIMEOUT_POLL);
> + usleep_range(TPM_TIMEOUT_USECS_MIN, TPM_TIMEOUT_USECS_MAX);
> } while (time_before(jiffies, stop));
> return -EBUSY;
> }
On Tue, Apr 17, 2018 at 09:12:45AM -0400, Nayna Jain wrote:
> The TPM polling code in tpm_transmit() sleeps in each loop iteration
> for 5 msecs. However, the TPM might return earlier, and thus waiting
> for the full 5 msecs adds an unnecessary delay. This patch reduces the
> polling sleep time in tpm_transmit() from 5 msecs to 1 msec.
I'm not sure what the TPM returning earlier has to do with this. The
TPM probably never returns exactly at the spec-defined
timeout/duration. I just don't understand the reasoning in this
paragraph.
> Additionally, this patch renames TPM_POLL_SLEEP and moves it to tpm.h as
> an enum value.
>
> After this change, performance on a TPM 1.2 with an 8 byte burstcount
> for 1000 extends improved from ~14 sec to ~10.7 sec.
You cannot give absolute numbers without context (platform, software).
> Signed-off-by: Nayna Jain <[email protected]>
/Jarkko
On Tue, Apr 17, 2018 at 09:12:46AM -0400, Nayna Jain wrote:
> The TPM burstcount and status commands are supposed to return very
> quickly [1][2]. This patch further reduces the TPM poll sleep time to usecs
> in get_burstcount() and wait_for_tpm_stat() by calling usleep_range()
> directly.
>
> After this change, performance on a TPM 1.2 with an 8 byte burstcount for
> 1000 extends improved from ~10.7 sec to ~7 sec.
>
> [1] From TCG Specification "TCG PC Client Specific TPM Interface
> Specification (TIS), Family 1.2":
>
> "NOTE : It takes roughly 330 ns per byte transfer on LPC. 256 bytes would
> take 84 us, which is a long time to stall the CPU. Chipsets may not be
> designed to post this much data to LPC; therefore, the CPU itself is
> stalled for much of this time. Sending 1 kB would take 350 μs. Therefore,
> even if the TPM_STS_x.burstCount field is a high value, software SHOULD
> be interruptible during this period."
>
> [2] From TCG Specification 2.0, "TCG PC Client Platform TPM Profile
> (PTP) Specification":
>
> "It takes roughly 330 ns per byte transfer on LPC. 256 bytes would take
> 84 us. Chipsets may not be designed to post this much data to LPC;
> therefore, the CPU itself is stalled for much of this time. Sending 1 kB
> would take 350 us. Therefore, even if the TPM_STS_x.burstCount field is a
> high value, software should be interruptible during this period. For SPI,
> assuming 20MHz clock and 64-byte transfers, it would take about 120 usec
> to move 256B of data. Sending 1kB would take about 500 usec. If the
> transactions are done using 4 bytes at a time, then it would take about
> 1 msec. to transfer 1kB of data."
>
> Signed-off-by: Nayna Jain <[email protected]>
Great, thanks for finding those references. This is the kind of stuff
that I will forget within months and have to revisit with git
blame/log :-)
Reviewed-by: Jarkko Sakkinen <[email protected]>
/Jarkko