2020-06-25 08:05:19

by Zhenyu Ye

Subject: [RESEND PATCH v5 0/6] arm64: tlb: add support for TTL feature

In order to reduce the cost of TLB invalidation, ARMv8.4 provides
the TTL field in TLBI instructions. The TTL field indicates the
level of the page-table walk that holds the leaf entry for the
address being invalidated. This series provides support for this
feature.

When ARMv8.4-TTL is implemented, the operand for TLBIs looks like
this:

 * +----------+-------+----------------------+
 * |   ASID   |  TTL  |        BADDR         |
 * +----------+-------+----------------------+
 * |63      48|47   44|43                   0|
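
As a rough, freestanding illustration (not code from this series; the
names here are made up for the example), folding a level hint into that
operand could look like this, assuming a 4KB granule, where TTL[3:2]
encodes the translation granule (0b01 for 4KB) and TTL[1:0] the level,
with a TTL of 0 meaning "level unknown":

	#include <stdint.h>

	#define TLBI_TTL_SHIFT	44
	#define TLBI_TTL_TG_4K	1ULL	/* TTL[3:2] = 0b01: 4KB granule */

	/* Fold a level hint into a TLBI operand. A level of 0 leaves
	 * TTL clear, which the CPU must treat as a plain invalidation. */
	static uint64_t tlbi_encode_ttl(uint64_t operand, unsigned int level)
	{
		uint64_t ttl = 0;

		if (level)
			ttl = (TLBI_TTL_TG_4K << 2) | (level & 3);

		return operand | (ttl << TLBI_TTL_SHIFT);
	}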

See the patches for details. Thanks.

--
ChangeList:
v5:
rebase the series on Linux 5.8-rc2.

v4:
implement flush_*_tlb_range only on arm64.

v3:
minor changes: reduce the indentation levels of __tlbi_level().

v2:
rebase the series on Linux 5.7-rc1 and simplify the implementation.

v1:
add support for TTL feature in arm64.

Marc Zyngier (2):
arm64: Detect the ARMv8.4 TTL feature
arm64: Add level-hinted TLB invalidation helper

Peter Zijlstra (Intel) (1):
tlb: mmu_gather: add tlb_flush_*_range APIs

Zhenyu Ye (3):
arm64: Add tlbi_user_level TLB invalidation helper
arm64: tlb: Set the TTL field in flush_tlb_range
arm64: tlb: Set the TTL field in flush_*_tlb_range

 arch/arm64/include/asm/cpucaps.h  |  3 +-
 arch/arm64/include/asm/pgtable.h  | 10 ++++++
 arch/arm64/include/asm/sysreg.h   |  1 +
 arch/arm64/include/asm/tlb.h      | 29 +++++++++++++++-
 arch/arm64/include/asm/tlbflush.h | 54 +++++++++++++++++++++++++-----
 arch/arm64/kernel/cpufeature.c    | 11 +++++++
 include/asm-generic/tlb.h         | 55 ++++++++++++++++++++++---------
 7 files changed, 138 insertions(+), 25 deletions(-)

--
2.26.2



2020-06-25 08:07:42

by Zhenyu Ye

Subject: [RESEND PATCH v5 1/6] arm64: Detect the ARMv8.4 TTL feature

From: Marc Zyngier <[email protected]>

In order to reduce the cost of TLB invalidation, the ARMv8.4 TTL
feature allows TLBIs to be issued with a level hint, allowing for
quicker invalidation.

Let's detect the feature for now. Further patches will implement
its actual usage.
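
For context, later patches will gate the hint on this capability. A
minimal caller-side sketch (assuming the cpus_have_const_cap() API of
this kernel version; TLBI_TTL_MASK and tlbi_with_ttl() are illustrative
names, not necessarily what the series introduces):

	#include <linux/bitfield.h>
	#include <linux/bits.h>
	#include <linux/types.h>
	#include <asm/cpufeature.h>

	#define TLBI_TTL_MASK	GENMASK_ULL(47, 44)

	static inline u64 tlbi_with_ttl(u64 arg, u64 ttl)
	{
		/* Hint the level only when the CPU advertises
		 * ARMv8.4-TTL; older CPUs keep issuing plain TLBIs. */
		if (cpus_have_const_cap(ARM64_HAS_ARMv8_4_TTL))
			arg |= FIELD_PREP(TLBI_TTL_MASK, ttl);
		return arg;
	}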

Signed-off-by: Marc Zyngier <[email protected]>
Signed-off-by: Zhenyu Ye <[email protected]>
Reviewed-by: Catalin Marinas <[email protected]>
---
 arch/arm64/include/asm/cpucaps.h |  3 ++-
 arch/arm64/include/asm/sysreg.h  |  1 +
 arch/arm64/kernel/cpufeature.c   | 11 +++++++++++
 3 files changed, 14 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
index d7b3bb0cb180..d44ba903d11d 100644
--- a/arch/arm64/include/asm/cpucaps.h
+++ b/arch/arm64/include/asm/cpucaps.h
@@ -62,7 +62,8 @@
 #define ARM64_HAS_GENERIC_AUTH		52
 #define ARM64_HAS_32BIT_EL1		53
 #define ARM64_BTI			54
+#define ARM64_HAS_ARMv8_4_TTL		55

-#define ARM64_NCAPS			55
+#define ARM64_NCAPS			56

 #endif /* __ASM_CPUCAPS_H */
diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index 463175f80341..8c209aa17273 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -746,6 +746,7 @@

 /* id_aa64mmfr2 */
 #define ID_AA64MMFR2_E0PD_SHIFT		60
+#define ID_AA64MMFR2_TTL_SHIFT		48
 #define ID_AA64MMFR2_FWB_SHIFT		40
 #define ID_AA64MMFR2_AT_SHIFT		32
 #define ID_AA64MMFR2_LVA_SHIFT		16
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 4ae41670c2e6..bda002078ec5 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -323,6 +323,7 @@ static const struct arm64_ftr_bits ftr_id_aa64mmfr1[] = {

 static const struct arm64_ftr_bits ftr_id_aa64mmfr2[] = {
 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_E0PD_SHIFT, 4, 0),
+	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_TTL_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_FWB_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_AT_SHIFT, 4, 0),
 	ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64MMFR2_LVA_SHIFT, 4, 0),
@@ -1880,6 +1881,16 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.matches = has_cpuid_feature,
 		.cpu_enable = cpu_has_fwb,
 	},
+	{
+		.desc = "ARMv8.4 Translation Table Level",
+		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
+		.capability = ARM64_HAS_ARMv8_4_TTL,
+		.sys_reg = SYS_ID_AA64MMFR2_EL1,
+		.sign = FTR_UNSIGNED,
+		.field_pos = ID_AA64MMFR2_TTL_SHIFT,
+		.min_field_value = 1,
+		.matches = has_cpuid_feature,
+	},
 #ifdef CONFIG_ARM64_HW_AFDBM
 	{
 		/*
--
2.26.2


2020-07-07 13:50:47

by Catalin Marinas

Subject: Re: [RESEND PATCH v5 0/6] arm64: tlb: add support for TTL feature

On Thu, 25 Jun 2020 16:03:08 +0800, Zhenyu Ye wrote:
> In order to reduce the cost of TLB invalidation, ARMv8.4 provides
> the TTL field in TLBI instructions. The TTL field indicates the
> level of the page-table walk that holds the leaf entry for the
> address being invalidated. This series provides support for this
> feature.
>
> When ARMv8.4-TTL is implemented, the operand for TLBIs looks like
> this:
>
> [...]

Applied to arm64 (for-next/tlbi), thanks!

[3/6] arm64: Add tlbi_user_level TLB invalidation helper
https://git.kernel.org/arm64/c/e735b98a5fe0
[4/6] tlb: mmu_gather: add tlb_flush_*_range APIs
https://git.kernel.org/arm64/c/2631ed00b049
[5/6] arm64: tlb: Set the TTL field in flush_tlb_range
https://git.kernel.org/arm64/c/c4ab2cbc1d87
[6/6] arm64: tlb: Set the TTL field in flush_*_tlb_range
https://git.kernel.org/arm64/c/a7ac1cfa4c05

I haven't included the first 2 patches as I rebased the above on top of
Marc's TTL branch:

git://git.kernel.org/pub/scm/linux/kernel/git/maz/arm-platforms.git kvm-arm64/ttl-for-arm64

--
Catalin