2023-05-15 11:01:58

by Anshuman Khandual

Subject: [PATCH V2] arm64: Disable EL2 traps for BRBE instructions executed in EL1

This disables EL2 traps for BRBE instructions executed in EL1. This would
enable BRBE to be configured and used successfully in the guest kernel.
While here, this updates Documentation/arm64/booting.rst as well.

Cc: Catalin Marinas <[email protected]>
Cc: Will Deacon <[email protected]>
Cc: Mark Brown <[email protected]>
Cc: Marc Zyngier <[email protected]>
Cc: [email protected]
Cc: [email protected]
Signed-off-by: Anshuman Khandual <[email protected]>
---
This patch applies on v6.4-rc2

Changes in V2:

- Updated Documentation/arm64/booting.rst

Changes in V1:

https://lore.kernel.org/all/[email protected]/

Documentation/arm64/booting.rst | 8 ++++++++
arch/arm64/include/asm/el2_setup.h | 10 ++++++++++
2 files changed, 18 insertions(+)

diff --git a/Documentation/arm64/booting.rst b/Documentation/arm64/booting.rst
index ffeccdd6bdac..cb9e151f6928 100644
--- a/Documentation/arm64/booting.rst
+++ b/Documentation/arm64/booting.rst
@@ -379,6 +379,14 @@ Before jumping into the kernel, the following conditions must be met:

- SMCR_EL2.EZT0 (bit 30) must be initialised to 0b1.

+ For CPUs with the Branch Record Buffer Extension (FEAT_BRBE):
+
+ - If the kernel is entered at EL1 and EL2 is present:
+
+ - HFGITR_EL2.nBRBINJ (bit 55) must be initialised to 0b1.
+
+ - HFGITR_EL2.nBRBIALL (bit 56) must be initialised to 0b1.
+
The requirements described above for CPU mode, caches, MMUs, architected
timers, coherency and system registers apply to all CPUs. All CPUs must
enter the kernel in the same exception level. Where the values documented
diff --git a/arch/arm64/include/asm/el2_setup.h b/arch/arm64/include/asm/el2_setup.h
index 037724b19c5c..06bf321a17be 100644
--- a/arch/arm64/include/asm/el2_setup.h
+++ b/arch/arm64/include/asm/el2_setup.h
@@ -161,6 +161,16 @@
msr_s SYS_HFGWTR_EL2, x0
msr_s SYS_HFGITR_EL2, xzr

+ mrs x1, id_aa64dfr0_el1
+ ubfx x1, x1, #ID_AA64DFR0_EL1_BRBE_SHIFT, #4
+ cbz x1, .Lskip_brbe_\@
+
+ mov x0, xzr
+ orr x0, x0, #HFGITR_EL2_nBRBIALL
+ orr x0, x0, #HFGITR_EL2_nBRBINJ
+ msr_s SYS_HFGITR_EL2, x0
+
+.Lskip_brbe_\@:
mrs x1, id_aa64pfr0_el1 // AMU traps UNDEF without AMU
ubfx x1, x1, #ID_AA64PFR0_EL1_AMU_SHIFT, #4
cbz x1, .Lskip_fgt_\@
--
2.25.1
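
As a side note for readers less familiar with the fine-grained trap registers,
below is a minimal, self-contained C sketch of the logic the el2_setup.h hunk
adds: probe the ID_AA64DFR0_EL1.BRBE field and, if BRBE is implemented, set the
two nBRB* trap-disable bits named in the booting.rst hunk. Plain variables stand
in for the system registers (the real code is early EL2 assembly), and it assumes
the BRBE field sits at bits [55:52], i.e. ID_AA64DFR0_EL1_BRBE_SHIFT of 52; this
is a sketch of the bit arithmetic, not the actual implementation.

  /*
   * Stand-alone C model of the BRBE trap-disable logic in the hunk above.
   * Plain variables stand in for the system registers; the real sequence
   * runs as early EL2 assembly via mrs/msr_s.
   */
  #include <stdint.h>
  #include <stdio.h>

  #define ID_AA64DFR0_EL1_BRBE_SHIFT	52	/* BRBE field, bits [55:52] */
  #define HFGITR_EL2_nBRBINJ		(1ULL << 55)
  #define HFGITR_EL2_nBRBIALL		(1ULL << 56)

  int main(void)
  {
  	uint64_t id_aa64dfr0_el1 = 1ULL << ID_AA64DFR0_EL1_BRBE_SHIFT;	/* pretend BRBE is implemented */
  	uint64_t hfgitr_el2 = 0;					/* value last written with msr_s */

  	/* ubfx + cbz: skip the whole block when the BRBE field is zero */
  	if ((id_aa64dfr0_el1 >> ID_AA64DFR0_EL1_BRBE_SHIFT) & 0xf) {
  		/* disable the EL2 fine-grained traps for the BRB instructions */
  		hfgitr_el2 |= HFGITR_EL2_nBRBIALL | HFGITR_EL2_nBRBINJ;
  	}

  	printf("HFGITR_EL2 = 0x%016llx\n", (unsigned long long)hfgitr_el2);
  	return 0;
  }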



2023-05-15 13:50:14

by Marc Zyngier

Subject: Re: [PATCH V2] arm64: Disable EL2 traps for BRBE instructions executed in EL1

On Mon, 15 May 2023 11:53:28 +0100,
Anshuman Khandual <[email protected]> wrote:
>
> This disables EL2 traps for BRBE instructions executed in EL1. This would
> enable BRBE to be configured and used successfully in the guest kernel.
> While here, this updates Documentation/arm64/booting.rst as well.
>
> Cc: Catalin Marinas <[email protected]>
> Cc: Will Deacon <[email protected]>
> Cc: Mark Brown <[email protected]>
> Cc: Marc Zyngier <[email protected]>
> Cc: [email protected]
> Cc: [email protected]
> Signed-off-by: Anshuman Khandual <[email protected]>
> ---
> This patch applies on v6.4-rc2
>
> Changes in V2:
>
> - Updated Documentation/arm64/booting.rst
>
> Changes in V1:
>
> https://lore.kernel.org/all/[email protected]/
>
> Documentation/arm64/booting.rst | 8 ++++++++
> arch/arm64/include/asm/el2_setup.h | 10 ++++++++++
> 2 files changed, 18 insertions(+)
>
> diff --git a/Documentation/arm64/booting.rst b/Documentation/arm64/booting.rst
> index ffeccdd6bdac..cb9e151f6928 100644
> --- a/Documentation/arm64/booting.rst
> +++ b/Documentation/arm64/booting.rst
> @@ -379,6 +379,14 @@ Before jumping into the kernel, the following conditions must be met:
>
> - SMCR_EL2.EZT0 (bit 30) must be initialised to 0b1.
>
> + For CPUs with the Branch Record Buffer Extension (FEAT_BRBE):
> +
> + - If the kernel is entered at EL1 and EL2 is present:
> +
> + - HFGITR_EL2.nBRBINJ (bit 55) must be initialised to 0b1.
> +
> + - HFGITR_EL2.nBRBIALL (bit 56) must be initialised to 0b1.
> +
> The requirements described above for CPU mode, caches, MMUs, architected
> timers, coherency and system registers apply to all CPUs. All CPUs must
> enter the kernel in the same exception level. Where the values documented
> diff --git a/arch/arm64/include/asm/el2_setup.h b/arch/arm64/include/asm/el2_setup.h
> index 037724b19c5c..06bf321a17be 100644
> --- a/arch/arm64/include/asm/el2_setup.h
> +++ b/arch/arm64/include/asm/el2_setup.h
> @@ -161,6 +161,16 @@
> msr_s SYS_HFGWTR_EL2, x0
> msr_s SYS_HFGITR_EL2, xzr
>
> + mrs x1, id_aa64dfr0_el1
> + ubfx x1, x1, #ID_AA64DFR0_EL1_BRBE_SHIFT, #4
> + cbz x1, .Lskip_brbe_\@
> +
> + mov x0, xzr
> + orr x0, x0, #HFGITR_EL2_nBRBIALL
> + orr x0, x0, #HFGITR_EL2_nBRBINJ
> + msr_s SYS_HFGITR_EL2, x0

This will break badly if someone inserts something between this hunk
and the initial setting of HFGITR_EL2. I'd really prefer a RMW
approach. It's not that this code has to be optimised anyway.

M.

--
Without deviation from the norm, progress is not possible.
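
To make the concern concrete: building x0 from xzr hard-codes the assumption
that HFGITR_EL2 still holds exactly what the msr_s ... xzr a few lines earlier
wrote, so any bits configured by a hunk added in between would be silently
wiped, while a read-modify-write preserves them. A small, purely illustrative
C sketch of the difference follows; the uint64_t stands in for HFGITR_EL2 and
EARLIER_HUNK_BIT is a hypothetical bit, not a real field.

  #include <assert.h>
  #include <stdint.h>

  #define HFGITR_EL2_nBRBINJ	(1ULL << 55)
  #define HFGITR_EL2_nBRBIALL	(1ULL << 56)
  #define EARLIER_HUNK_BIT	(1ULL << 40)	/* hypothetical bit set by code added in between */

  /* Fragile variant: rebuilds the value from zero, as in the patch above. */
  static uint64_t brbe_overwrite(uint64_t cur)
  {
  	(void)cur;				/* current value ignored */
  	return HFGITR_EL2_nBRBIALL | HFGITR_EL2_nBRBINJ;
  }

  /* Read-modify-write variant: preserves whatever is already configured. */
  static uint64_t brbe_rmw(uint64_t cur)
  {
  	return cur | HFGITR_EL2_nBRBIALL | HFGITR_EL2_nBRBINJ;
  }

  int main(void)
  {
  	uint64_t hfgitr = EARLIER_HUNK_BIT;	/* imagine a later patch set this first */

  	assert((brbe_overwrite(hfgitr) & EARLIER_HUNK_BIT) == 0);	/* bit lost */
  	assert((brbe_rmw(hfgitr) & EARLIER_HUNK_BIT) != 0);		/* bit kept */
  	return 0;
  }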

2023-05-16 02:53:24

by Anshuman Khandual

Subject: Re: [PATCH V2] arm64: Disable EL2 traps for BRBE instructions executed in EL1



On 5/15/23 19:12, Marc Zyngier wrote:
> On Mon, 15 May 2023 11:53:28 +0100,
> Anshuman Khandual <[email protected]> wrote:
>>
>> This disables EL2 traps for BRBE instructions executed in EL1. This would
>> enable BRBE to be configured and used successfully in the guest kernel.
>> While here, this updates Documentation/arm64/booting.rst as well.
>>
>> Cc: Catalin Marinas <[email protected]>
>> Cc: Will Deacon <[email protected]>
>> Cc: Mark Brown <[email protected]>
>> Cc: Marc Zyngier <[email protected]>
>> Cc: [email protected]
>> Cc: [email protected]
>> Signed-off-by: Anshuman Khandual <[email protected]>
>> ---
>> This patch applies on v6.4-rc2
>>
>> Changes in V2:
>>
>> - Updated Documentation/arm64/booting.rst
>>
>> Changes in V1:
>>
>> https://lore.kernel.org/all/[email protected]/
>>
>> Documentation/arm64/booting.rst | 8 ++++++++
>> arch/arm64/include/asm/el2_setup.h | 10 ++++++++++
>> 2 files changed, 18 insertions(+)
>>
>> diff --git a/Documentation/arm64/booting.rst b/Documentation/arm64/booting.rst
>> index ffeccdd6bdac..cb9e151f6928 100644
>> --- a/Documentation/arm64/booting.rst
>> +++ b/Documentation/arm64/booting.rst
>> @@ -379,6 +379,14 @@ Before jumping into the kernel, the following conditions must be met:
>>
>> - SMCR_EL2.EZT0 (bit 30) must be initialised to 0b1.
>>
>> + For CPUs with the Branch Record Buffer Extension (FEAT_BRBE):
>> +
>> + - If the kernel is entered at EL1 and EL2 is present:
>> +
>> + - HFGITR_EL2.nBRBINJ (bit 55) must be initialised to 0b1.
>> +
>> + - HFGITR_EL2.nBRBIALL (bit 56) must be initialised to 0b1.
>> +
>> The requirements described above for CPU mode, caches, MMUs, architected
>> timers, coherency and system registers apply to all CPUs. All CPUs must
>> enter the kernel in the same exception level. Where the values documented
>> diff --git a/arch/arm64/include/asm/el2_setup.h b/arch/arm64/include/asm/el2_setup.h
>> index 037724b19c5c..06bf321a17be 100644
>> --- a/arch/arm64/include/asm/el2_setup.h
>> +++ b/arch/arm64/include/asm/el2_setup.h
>> @@ -161,6 +161,16 @@
>> msr_s SYS_HFGWTR_EL2, x0
>> msr_s SYS_HFGITR_EL2, xzr
>>
>> + mrs x1, id_aa64dfr0_el1
>> + ubfx x1, x1, #ID_AA64DFR0_EL1_BRBE_SHIFT, #4
>> + cbz x1, .Lskip_brbe_\@
>> +
>> + mov x0, xzr
>> + orr x0, x0, #HFGITR_EL2_nBRBIALL
>> + orr x0, x0, #HFGITR_EL2_nBRBINJ
>> + msr_s SYS_HFGITR_EL2, x0
>
> This will break badly if someone inserts something between this hunk
> and the initial setting of HFGITR_EL2. I'd really prefer a RMW
> approach. It's not that this code has to be optimised anyway.

Something like this instead? That way, even if more changes land before
this hunk, the register is first fetched correctly via mrs_s, and only
the additional BRBE-related bits are set thereafter.

diff --git a/arch/arm64/include/asm/el2_setup.h b/arch/arm64/include/asm/el2_setup.h
index 037724b19c5c..bfaf41ad9c4e 100644
--- a/arch/arm64/include/asm/el2_setup.h
+++ b/arch/arm64/include/asm/el2_setup.h
@@ -161,6 +161,16 @@
msr_s SYS_HFGWTR_EL2, x0
msr_s SYS_HFGITR_EL2, xzr

+ mrs x1, id_aa64dfr0_el1
+ ubfx x1, x1, #ID_AA64DFR0_EL1_BRBE_SHIFT, #4
+ cbz x1, .Lskip_brbe_\@
+
+ mrs_s x0, SYS_HFGITR_EL2
+ orr x0, x0, #HFGITR_EL2_nBRBIALL
+ orr x0, x0, #HFGITR_EL2_nBRBINJ
+ msr_s SYS_HFGITR_EL2, x0
+
+.Lskip_brbe_\@:
mrs x1, id_aa64pfr0_el1 // AMU traps UNDEF without AMU
ubfx x1, x1, #ID_AA64PFR0_EL1_AMU_SHIFT, #4
cbz x1, .Lskip_fgt_\@


2023-05-16 07:32:46

by Marc Zyngier

Subject: Re: [PATCH V2] arm64: Disable EL2 traps for BRBE instructions executed in EL1

On Tue, 16 May 2023 03:43:27 +0100,
Anshuman Khandual <[email protected]> wrote:
>
>
>
> On 5/15/23 19:12, Marc Zyngier wrote:
> > On Mon, 15 May 2023 11:53:28 +0100,
> > Anshuman Khandual <[email protected]> wrote:
> >>

[...]

> >> diff --git a/arch/arm64/include/asm/el2_setup.h b/arch/arm64/include/asm/el2_setup.h
> >> index 037724b19c5c..06bf321a17be 100644
> >> --- a/arch/arm64/include/asm/el2_setup.h
> >> +++ b/arch/arm64/include/asm/el2_setup.h
> >> @@ -161,6 +161,16 @@
> >> msr_s SYS_HFGWTR_EL2, x0
> >> msr_s SYS_HFGITR_EL2, xzr
> >>
> >> + mrs x1, id_aa64dfr0_el1
> >> + ubfx x1, x1, #ID_AA64DFR0_EL1_BRBE_SHIFT, #4
> >> + cbz x1, .Lskip_brbe_\@
> >> +
> >> + mov x0, xzr
> >> + orr x0, x0, #HFGITR_EL2_nBRBIALL
> >> + orr x0, x0, #HFGITR_EL2_nBRBINJ
> >> + msr_s SYS_HFGITR_EL2, x0
> >
> > This will break badly if someone inserts something between this hunk
> > and the initial setting of HFGITR_EL2. I'd really prefer a RMW
> > approach. It's not that this code has to be optimised anyway.
>
> Something like this instead? That way, even if more changes land before
> this hunk, the register is first fetched correctly via mrs_s, and only
> the additional BRBE-related bits are set thereafter.
>
> diff --git a/arch/arm64/include/asm/el2_setup.h b/arch/arm64/include/asm/el2_setup.h
> index 037724b19c5c..bfaf41ad9c4e 100644
> --- a/arch/arm64/include/asm/el2_setup.h
> +++ b/arch/arm64/include/asm/el2_setup.h
> @@ -161,6 +161,16 @@
> msr_s SYS_HFGWTR_EL2, x0
> msr_s SYS_HFGITR_EL2, xzr
>
> + mrs x1, id_aa64dfr0_el1
> + ubfx x1, x1, #ID_AA64DFR0_EL1_BRBE_SHIFT, #4
> + cbz x1, .Lskip_brbe_\@
> +
> + mrs_s x0, SYS_HFGITR_EL2
> + orr x0, x0, #HFGITR_EL2_nBRBIALL
> + orr x0, x0, #HFGITR_EL2_nBRBINJ
> + msr_s SYS_HFGITR_EL2, x0
> +
> +.Lskip_brbe_\@:
> mrs x1, id_aa64pfr0_el1 // AMU traps UNDEF without AMU
> ubfx x1, x1, #ID_AA64PFR0_EL1_AMU_SHIFT, #4
> cbz x1, .Lskip_fgt_\@

Yes, this is much better.

M.

--
Without deviation from the norm, progress is not possible.