2020-03-16 12:42:27

by Rémi Denis-Courmont

Subject: [PATCH 1/3] arm64: clean up trampoline vector loads

From: Rémi Denis-Courmont <[email protected]>

This switches from custom instruction patterns to the regular large
memory model sequence with ADRP and LDR. In doing so, the ADD
instruction can be eliminated in the SDEI handler, and the code no
longer assumes that the trampoline vectors and the vectors address both
start on a page boundary.

Signed-off-by: Rémi Denis-Courmont <[email protected]>
---
arch/arm64/kernel/entry.S | 9 ++++-----
1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index e5d4e30ee242..24f828739696 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -805,9 +805,9 @@ alternative_else_nop_endif
2:
tramp_map_kernel x30
#ifdef CONFIG_RANDOMIZE_BASE
- adr x30, tramp_vectors + PAGE_SIZE
+ adrp x30, tramp_vectors + PAGE_SIZE
alternative_insn isb, nop, ARM64_WORKAROUND_QCOM_FALKOR_E1003
- ldr x30, [x30]
+ ldr x30, [x30, #:lo12:__entry_tramp_data_start]
#else
ldr x30, =vectors
#endif
@@ -953,9 +953,8 @@ SYM_CODE_START(__sdei_asm_entry_trampoline)
1: str x4, [x1, #(SDEI_EVENT_INTREGS + S_ORIG_ADDR_LIMIT)]

#ifdef CONFIG_RANDOMIZE_BASE
- adr x4, tramp_vectors + PAGE_SIZE
- add x4, x4, #:lo12:__sdei_asm_trampoline_next_handler
- ldr x4, [x4]
+ adrp x4, tramp_vectors + PAGE_SIZE
+ ldr x4, [x4, #:lo12:__sdei_asm_trampoline_next_handler]
#else
ldr x4, =__sdei_asm_handler
#endif
--
2.25.1

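[Editorial note: for readers less familiar with the two sequences, the difference between the old and new patterns can be sketched as follows; this is an illustration of the addressing modes, not part of the patch.]

```asm
	// Old pattern: ADR produces a plain PC-relative address, so the
	// load only works if tramp_vectors + PAGE_SIZE is itself the exact
	// address of the data -- i.e. the data must start the page.
	adr	x30, tramp_vectors + PAGE_SIZE
	ldr	x30, [x30]

	// New pattern: ADRP yields the 4 KiB page base of the target, and
	// the :lo12: relocation supplies the offset within that page, so
	// the data only needs to live in the page, not start it.
	adrp	x30, tramp_vectors + PAGE_SIZE
	ldr	x30, [x30, #:lo12:__entry_tramp_data_start]
```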

2020-03-17 22:31:24

by Will Deacon

Subject: Re: [PATCH 1/3] arm64: clean up trampoline vector loads

On Mon, Mar 16, 2020 at 02:40:44PM +0200, Rémi Denis-Courmont wrote:
> From: Rémi Denis-Courmont <[email protected]>
>
> This switches from custom instruction patterns to the regular large
> memory model sequence with ADRP and LDR. In doing so, the ADD
> instruction can be eliminated in the SDEI handler, and the code no
> longer assumes that the trampoline vectors and the vectors address both
> start on a page boundary.
>
> Signed-off-by: Rémi Denis-Courmont <[email protected]>
> ---
> arch/arm64/kernel/entry.S | 9 ++++-----
> 1 file changed, 4 insertions(+), 5 deletions(-)
>
> diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
> index e5d4e30ee242..24f828739696 100644
> --- a/arch/arm64/kernel/entry.S
> +++ b/arch/arm64/kernel/entry.S
> @@ -805,9 +805,9 @@ alternative_else_nop_endif
> 2:
> tramp_map_kernel x30
> #ifdef CONFIG_RANDOMIZE_BASE
> - adr x30, tramp_vectors + PAGE_SIZE
> + adrp x30, tramp_vectors + PAGE_SIZE
> alternative_insn isb, nop, ARM64_WORKAROUND_QCOM_FALKOR_E1003
> - ldr x30, [x30]
> + ldr x30, [x30, #:lo12:__entry_tramp_data_start]
> #else
> ldr x30, =vectors
> #endif
> @@ -953,9 +953,8 @@ SYM_CODE_START(__sdei_asm_entry_trampoline)
> 1: str x4, [x1, #(SDEI_EVENT_INTREGS + S_ORIG_ADDR_LIMIT)]
>
> #ifdef CONFIG_RANDOMIZE_BASE
> - adr x4, tramp_vectors + PAGE_SIZE
> - add x4, x4, #:lo12:__sdei_asm_trampoline_next_handler
> - ldr x4, [x4]
> + adrp x4, tramp_vectors + PAGE_SIZE
> + ldr x4, [x4, #:lo12:__sdei_asm_trampoline_next_handler]
> #else
> ldr x4, =__sdei_asm_handler
> #endif

Acked-by: Will Deacon <[email protected]>

Will

2020-03-18 17:58:06

by Catalin Marinas

Subject: Re: [PATCH 1/3] arm64: clean up trampoline vector loads

On Mon, Mar 16, 2020 at 02:40:44PM +0200, Rémi Denis-Courmont wrote:
> From: Rémi Denis-Courmont <[email protected]>
>
> This switches from custom instruction patterns to the regular large
> memory model sequence with ADRP and LDR. In doing so, the ADD
> instruction can be eliminated in the SDEI handler, and the code no
> longer assumes that the trampoline vectors and the vectors address both
> start on a page boundary.
>
> Signed-off-by: Rémi Denis-Courmont <[email protected]>

I queued the 3 trampoline patches for 5.7. Thanks.

--
Catalin

2020-03-18 18:08:34

by Catalin Marinas

Subject: Re: [PATCH 1/3] arm64: clean up trampoline vector loads

On Wed, Mar 18, 2020 at 05:57:09PM +0000, Catalin Marinas wrote:
> On Mon, Mar 16, 2020 at 02:40:44PM +0200, Rémi Denis-Courmont wrote:
> > From: Rémi Denis-Courmont <[email protected]>
> >
> > This switches from custom instruction patterns to the regular large
> > memory model sequence with ADRP and LDR. In doing so, the ADD
> > instruction can be eliminated in the SDEI handler, and the code no
> > longer assumes that the trampoline vectors and the vectors address both
> > start on a page boundary.
> >
> > Signed-off-by: Rémi Denis-Courmont <[email protected]>
>
> I queued the 3 trampoline patches for 5.7. Thanks.

... and removed. I applied them on top of arm64 for-next/asm-annotations
and with defconfig I get:

LD .tmp_vmlinux1
arch/arm64/kernel/entry.o: in function `tramp_vectors':
arch/arm64/kernel/entry.S:838:(.entry.tramp.text+0x43c): relocation truncated to fit: R_AARCH64_LDST64_ABS_LO12_NC against symbol `__entry_tramp_data_start' defined in .rodata section in arch/arm64/kernel/entry.o
ld: arch/arm64/kernel/entry.S:838: warning: one possible cause of this error is that the symbol is being referenced in the indicated code as if it had a larger alignment than was declared where it was defined
arch/arm64/kernel/entry.S:839:(.entry.tramp.text+0x4bc): relocation truncated to fit: R_AARCH64_LDST64_ABS_LO12_NC against symbol `__entry_tramp_data_start' defined in .rodata section in arch/arm64/kernel/entry.o
ld: arch/arm64/kernel/entry.S:839: warning: one possible cause of this error is that the symbol is being referenced in the indicated code as if it had a larger alignment than was declared where it was defined
arch/arm64/kernel/entry.S:840:(.entry.tramp.text+0x53c): relocation truncated to fit: R_AARCH64_LDST64_ABS_LO12_NC against symbol `__entry_tramp_data_start' defined in .rodata section in arch/arm64/kernel/entry.o
ld: arch/arm64/kernel/entry.S:840: warning: one possible cause of this error is that the symbol is being referenced in the indicated code as if it had a larger alignment than was declared where it was defined
arch/arm64/kernel/entry.S:841:(.entry.tramp.text+0x5bc): relocation truncated to fit: R_AARCH64_LDST64_ABS_LO12_NC against symbol `__entry_tramp_data_start' defined in .rodata section in arch/arm64/kernel/entry.o
ld: arch/arm64/kernel/entry.S:841: warning: one possible cause of this error is that the symbol is being referenced in the indicated code as if it had a larger alignment than was declared where it was defined
arch/arm64/kernel/entry.S:843:(.entry.tramp.text+0x638): relocation truncated to fit: R_AARCH64_LDST64_ABS_LO12_NC against symbol `__entry_tramp_data_start' defined in .rodata section in arch/arm64/kernel/entry.o
ld: arch/arm64/kernel/entry.S:843: warning: one possible cause of this error is that the symbol is being referenced in the indicated code as if it had a larger alignment than was declared where it was defined
arch/arm64/kernel/entry.S:844:(.entry.tramp.text+0x6b8): relocation truncated to fit: R_AARCH64_LDST64_ABS_LO12_NC against symbol `__entry_tramp_data_start' defined in .rodata section in arch/arm64/kernel/entry.o
ld: arch/arm64/kernel/entry.S:844: warning: one possible cause of this error is that the symbol is being referenced in the indicated code as if it had a larger alignment than was declared where it was defined
arch/arm64/kernel/entry.S:845:(.entry.tramp.text+0x738): relocation truncated to fit: R_AARCH64_LDST64_ABS_LO12_NC against symbol `__entry_tramp_data_start' defined in .rodata section in arch/arm64/kernel/entry.o
ld: arch/arm64/kernel/entry.S:845: warning: one possible cause of this error is that the symbol is being referenced in the indicated code as if it had a larger alignment than was declared where it was defined
arch/arm64/kernel/entry.S:846:(.entry.tramp.text+0x7b8): relocation truncated to fit: R_AARCH64_LDST64_ABS_LO12_NC against symbol `__entry_tramp_data_start' defined in .rodata section in arch/arm64/kernel/entry.o
ld: arch/arm64/kernel/entry.S:846: warning: one possible cause of this error is that the symbol is being referenced in the indicated code as if it had a larger alignment than was declared where it was defined
make[1]: *** [Makefile:1077: vmlinux] Error 1

I haven't bisected to see which patch caused this issue.

$ gcc --version
gcc (Debian 9.2.1-30) 9.2.1 20200224

$ ld --version
GNU ld (GNU Binutils for Debian) 2.34

--
Catalin

2020-03-18 18:31:45

by Rémi Denis-Courmont

Subject: Re: [PATCH 1/3] arm64: clean up trampoline vector loads

On Wednesday, 18 March 2020 at 20:06:30 EET, Catalin Marinas wrote:
> On Wed, Mar 18, 2020 at 05:57:09PM +0000, Catalin Marinas wrote:
> > On Mon, Mar 16, 2020 at 02:40:44PM +0200, Rémi Denis-Courmont wrote:
> > > From: Rémi Denis-Courmont <[email protected]>
> > >
> > > This switches from custom instruction patterns to the regular large
> > > memory model sequence with ADRP and LDR. In doing so, the ADD
> > > instruction can be eliminated in the SDEI handler, and the code no
> > > longer assumes that the trampoline vectors and the vectors address both
> > > start on a page boundary.
> > >
> > > Signed-off-by: Rémi Denis-Courmont <[email protected]>
> >
> > I queued the 3 trampoline patches for 5.7. Thanks.
>
> ... and removed. I applied them on top of arm64 for-next/asm-annotations
> and with defconfig I get:
>
> LD .tmp_vmlinux1
> arch/arm64/kernel/entry.o: in function `tramp_vectors':
> arch/arm64/kernel/entry.S:838:(.entry.tramp.text+0x43c): relocation
> truncated to fit: R_AARCH64_LDST64_ABS_LO12_NC against symbol
> `__entry_tramp_data_start' defined in .rodata section in
>
> I haven't bisected to see which patch caused this issue.

Uh-oh, right :-( It only builds with SDEI enabled :-$

I'll check further.

--
Rémi Denis-Courmont
http://www.remlab.net/



2020-03-18 19:50:17

by Rémi Denis-Courmont

Subject: Re: [PATCH 1/3] arm64: clean up trampoline vector loads

On 2020-03-18 20:29, Rémi Denis-Courmont wrote:
> On Wednesday, 18 March 2020 at 20:06:30 EET, Catalin Marinas wrote:
>> On Wed, Mar 18, 2020 at 05:57:09PM +0000, Catalin Marinas wrote:
>> > On Mon, Mar 16, 2020 at 02:40:44PM +0200, Rémi Denis-Courmont wrote:
>> > > From: Rémi Denis-Courmont <[email protected]>
>> > >
>> > > This switches from custom instruction patterns to the regular large
>> > > memory model sequence with ADRP and LDR. In doing so, the ADD
>> > > instruction can be eliminated in the SDEI handler, and the code no
>> > > longer assumes that the trampoline vectors and the vectors address both
>> > > start on a page boundary.
>> > >
>> > > Signed-off-by: Rémi Denis-Courmont <[email protected]>
>> >
>> > I queued the 3 trampoline patches for 5.7. Thanks.
>>
>> ... and removed. I applied them on top of arm64
>> for-next/asm-annotations
>> and with defconfig I get:
>>
>> LD .tmp_vmlinux1
>> arch/arm64/kernel/entry.o: in function `tramp_vectors':
>> arch/arm64/kernel/entry.S:838:(.entry.tramp.text+0x43c): relocation
>> truncated to fit: R_AARCH64_LDST64_ABS_LO12_NC against symbol
>> `__entry_tramp_data_start' defined in .rodata section in
>>
>> I haven't bisected to see which patch caused this issue.

It's the third patch.

> Uho, right :-( It only builds with SDEI enabled :-$
>
> I'll check further.

It seems that the SYM_DATA_START macro does not align the data on its
natural boundary. I guess that is all fine on x86, where data need not
be aligned, but it leads to this kind of mischief on arm64. Though even
then, the address is of course actually aligned correctly on an 8-byte
boundary, so I suppose binutils is just being pointlessly pedantic here?

--
Rémi Denis-Courmont