2023-11-22 22:18:54

by Helge Deller

Subject: [PATCH 3/4] vmlinux.lds.h: Fix alignment for __ksymtab*, __kcrctab_* and .pci_fixup sections

From: Helge Deller <[email protected]>

On 64-bit architectures without CONFIG_HAVE_ARCH_PREL32_RELOCATIONS
(e.g. ppc64, ppc64le, parisc, s390x,...) the __KSYM_REF() macro stores
64-bit pointers into the __ksymtab* sections.
Make sure the start of those sections is 64-bit aligned in the vmlinux
executable; otherwise unaligned memory accesses may happen at runtime.

The __kcrctab* sections store 32-bit entities, so make those sections
32-bit aligned.

The PCI fixup routines want to be 64-bit aligned on 64-bit platforms
which don't define CONFIG_HAVE_ARCH_PREL32_RELOCATIONS. An alignment
of 8 bytes is sufficient to guarantee aligned accesses at runtime.

Signed-off-by: Helge Deller <[email protected]>
Cc: <[email protected]> # v6.0+
---
include/asm-generic/vmlinux.lds.h | 5 +++++
1 file changed, 5 insertions(+)

diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h
index bae0fe4d499b..fa4335346e7d 100644
--- a/include/asm-generic/vmlinux.lds.h
+++ b/include/asm-generic/vmlinux.lds.h
@@ -467,6 +467,7 @@
} \
\
/* PCI quirks */ \
+ . = ALIGN(8); \
.pci_fixup : AT(ADDR(.pci_fixup) - LOAD_OFFSET) { \
BOUNDED_SECTION_PRE_LABEL(.pci_fixup_early, _pci_fixups_early, __start, __end) \
BOUNDED_SECTION_PRE_LABEL(.pci_fixup_header, _pci_fixups_header, __start, __end) \
@@ -484,6 +485,7 @@
PRINTK_INDEX \
\
/* Kernel symbol table: Normal symbols */ \
+ . = ALIGN(8); \
__ksymtab : AT(ADDR(__ksymtab) - LOAD_OFFSET) { \
__start___ksymtab = .; \
KEEP(*(SORT(___ksymtab+*))) \
@@ -491,6 +493,7 @@
} \
\
/* Kernel symbol table: GPL-only symbols */ \
+ . = ALIGN(8); \
__ksymtab_gpl : AT(ADDR(__ksymtab_gpl) - LOAD_OFFSET) { \
__start___ksymtab_gpl = .; \
KEEP(*(SORT(___ksymtab_gpl+*))) \
@@ -498,6 +501,7 @@
} \
\
/* Kernel symbol table: Normal symbols */ \
+ . = ALIGN(4); \
__kcrctab : AT(ADDR(__kcrctab) - LOAD_OFFSET) { \
__start___kcrctab = .; \
KEEP(*(SORT(___kcrctab+*))) \
@@ -505,6 +509,7 @@
} \
\
/* Kernel symbol table: GPL-only symbols */ \
+ . = ALIGN(4); \
__kcrctab_gpl : AT(ADDR(__kcrctab_gpl) - LOAD_OFFSET) { \
__start___kcrctab_gpl = .; \
KEEP(*(SORT(___kcrctab_gpl+*))) \
--
2.41.0


2023-12-21 13:08:41

by Masahiro Yamada

Subject: Re: [PATCH 3/4] vmlinux.lds.h: Fix alignment for __ksymtab*, __kcrctab_* and .pci_fixup sections

On Thu, Nov 23, 2023 at 7:18 AM <[email protected]> wrote:
>
> From: Helge Deller <[email protected]>
>
> On 64-bit architectures without CONFIG_HAVE_ARCH_PREL32_RELOCATIONS
> (e.g. ppc64, ppc64le, parisc, s390x,...) the __KSYM_REF() macro stores
> 64-bit pointers into the __ksymtab* sections.
> Make sure that the start of those sections is 64-bit aligned in the vmlinux
> executable, otherwise unaligned memory accesses may happen at runtime.


Are you solving a real problem?


1/4 already ensures the proper alignment of __ksymtab*, doesn't it?



I applied the following hack to attempt to
break the alignment intentionally.


diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h
index bae0fe4d499b..e2b5c9acee97 100644
--- a/include/asm-generic/vmlinux.lds.h
+++ b/include/asm-generic/vmlinux.lds.h
@@ -482,7 +482,7 @@
TRACEDATA \
\
PRINTK_INDEX \
- \
+ . = . + 1; \
/* Kernel symbol table: Normal symbols */ \
__ksymtab : AT(ADDR(__ksymtab) - LOAD_OFFSET) { \
__start___ksymtab = .; \




The __ksymtab section and the __start___ksymtab symbol are still
properly aligned, thanks to the '.balign' directive emitted in
<linux/export-internal.h>.


So, my understanding is this patch is unneeded.


Or, does the behaviour depend on toolchains?

> The __kcrctab* sections store 32-bit entities, so make those sections
> 32-bit aligned.
>
> The pci fixup routines want to be 64-bit aligned on 64-bit platforms
> which don't define CONFIG_HAVE_ARCH_PREL32_RELOCATIONS. An alignment
> of 8 bytes is sufficient to guarantee aligned accesses at runtime.
>
> Signed-off-by: Helge Deller <[email protected]>
> Cc: <[email protected]> # v6.0+
> ---
> include/asm-generic/vmlinux.lds.h | 5 +++++
> 1 file changed, 5 insertions(+)
>
> diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h
> index bae0fe4d499b..fa4335346e7d 100644
> --- a/include/asm-generic/vmlinux.lds.h
> +++ b/include/asm-generic/vmlinux.lds.h
> @@ -467,6 +467,7 @@
> } \
> \
> /* PCI quirks */ \
> + . = ALIGN(8); \
> .pci_fixup : AT(ADDR(.pci_fixup) - LOAD_OFFSET) { \
> BOUNDED_SECTION_PRE_LABEL(.pci_fixup_early, _pci_fixups_early, __start, __end) \
> BOUNDED_SECTION_PRE_LABEL(.pci_fixup_header, _pci_fixups_header, __start, __end) \
> @@ -484,6 +485,7 @@
> PRINTK_INDEX \
> \
> /* Kernel symbol table: Normal symbols */ \
> + . = ALIGN(8); \
> __ksymtab : AT(ADDR(__ksymtab) - LOAD_OFFSET) { \
> __start___ksymtab = .; \
> KEEP(*(SORT(___ksymtab+*))) \
> @@ -491,6 +493,7 @@
> } \
> \
> /* Kernel symbol table: GPL-only symbols */ \
> + . = ALIGN(8); \
> __ksymtab_gpl : AT(ADDR(__ksymtab_gpl) - LOAD_OFFSET) { \
> __start___ksymtab_gpl = .; \
> KEEP(*(SORT(___ksymtab_gpl+*))) \
> @@ -498,6 +501,7 @@
> } \
> \
> /* Kernel symbol table: Normal symbols */ \
> + . = ALIGN(4); \
> __kcrctab : AT(ADDR(__kcrctab) - LOAD_OFFSET) { \
> __start___kcrctab = .; \
> KEEP(*(SORT(___kcrctab+*))) \
> @@ -505,6 +509,7 @@
> } \
> \
> /* Kernel symbol table: GPL-only symbols */ \
> + . = ALIGN(4); \
> __kcrctab_gpl : AT(ADDR(__kcrctab_gpl) - LOAD_OFFSET) { \
> __start___kcrctab_gpl = .; \
> KEEP(*(SORT(___kcrctab_gpl+*))) \
> --
> 2.41.0
>


--
Best Regards
Masahiro Yamada

2023-12-22 09:02:46

by Helge Deller

Subject: Re: [PATCH 3/4] vmlinux.lds.h: Fix alignment for __ksymtab*, __kcrctab_* and .pci_fixup sections

On 12/21/23 14:07, Masahiro Yamada wrote:
> On Thu, Nov 23, 2023 at 7:18 AM <[email protected]> wrote:
>>
>> From: Helge Deller <[email protected]>
>>
>> On 64-bit architectures without CONFIG_HAVE_ARCH_PREL32_RELOCATIONS
>> (e.g. ppc64, ppc64le, parisc, s390x,...) the __KSYM_REF() macro stores
>> 64-bit pointers into the __ksymtab* sections.
>> Make sure that the start of those sections is 64-bit aligned in the vmlinux
>> executable, otherwise unaligned memory accesses may happen at runtime.
>
>
> Are you solving a real problem?

Not any longer.
I faced a problem on parisc when neither #1 nor #3 was applied,
because of a buggy unaligned-access exception handler. But this is
not something I would count as a "real generic problem".

> 1/4 already ensures the proper alignment of __ksymtab*, doesn't it?

Yes, it does.

>...
> So, my understanding is this patch is unneeded.

Yes, it's not required and I'm fine if we drop it.

But regarding __kcrctab:

>> @@ -498,6 +501,7 @@
>> } \
>> \
>> /* Kernel symbol table: Normal symbols */ \
>> + . = ALIGN(4); \
>> __kcrctab : AT(ADDR(__kcrctab) - LOAD_OFFSET) { \
>> __start___kcrctab = .; \
>> KEEP(*(SORT(___kcrctab+*))) \

I think this patch would be beneficial to get proper alignment:

diff --git a/include/linux/export-internal.h b/include/linux/export-internal.h
index cd253eb51d6c..d445705ac13c 100644
--- a/include/linux/export-internal.h
+++ b/include/linux/export-internal.h
@@ -64,6 +64,7 @@

#define SYMBOL_CRC(sym, crc, sec) \
asm(".section \"___kcrctab" sec "+" #sym "\",\"a\"" "\n" \
+ ".balign 4" "\n" \
"__crc_" #sym ":" "\n" \
".long " #crc "\n" \
".previous" "\n")


Helge

2023-12-23 04:11:15

by Masahiro Yamada

Subject: Re: [PATCH 3/4] vmlinux.lds.h: Fix alignment for __ksymtab*, __kcrctab_* and .pci_fixup sections

On Fri, Dec 22, 2023 at 6:02 PM Helge Deller <[email protected]> wrote:
>
> On 12/21/23 14:07, Masahiro Yamada wrote:
> > On Thu, Nov 23, 2023 at 7:18 AM <[email protected]> wrote:
> >>
> >> From: Helge Deller <[email protected]>
> >>
> >> On 64-bit architectures without CONFIG_HAVE_ARCH_PREL32_RELOCATIONS
> >> (e.g. ppc64, ppc64le, parisc, s390x,...) the __KSYM_REF() macro stores
> >> 64-bit pointers into the __ksymtab* sections.
> >> Make sure that the start of those sections is 64-bit aligned in the vmlinux
> >> executable, otherwise unaligned memory accesses may happen at runtime.
> >
> >
> > Are you solving a real problem?
>
> Not any longer.
> I faced a problem on parisc when neither #1 nor #3 was applied,
> because of a buggy unaligned-access exception handler. But this is
> not something I would count as a "real generic problem".
>
> > 1/4 already ensures the proper alignment of __ksymtab*, doesn't it?
>
> Yes, it does.
>
> >...
> > So, my understanding is this patch is unneeded.
>
> Yes, it's not required and I'm fine if we drop it.
>
> But regarding __kcrctab:
>
> >> @@ -498,6 +501,7 @@
> >> } \
> >> \
> >> /* Kernel symbol table: Normal symbols */ \
> >> + . = ALIGN(4); \
> >> __kcrctab : AT(ADDR(__kcrctab) - LOAD_OFFSET) { \
> >> __start___kcrctab = .; \
> >> KEEP(*(SORT(___kcrctab+*))) \
>
> I think this patch would be beneficial to get proper alignment:
>
> diff --git a/include/linux/export-internal.h b/include/linux/export-internal.h
> index cd253eb51d6c..d445705ac13c 100644
> --- a/include/linux/export-internal.h
> +++ b/include/linux/export-internal.h
> @@ -64,6 +64,7 @@
>
> #define SYMBOL_CRC(sym, crc, sec) \
> asm(".section \"___kcrctab" sec "+" #sym "\",\"a\"" "\n" \
> + ".balign 4" "\n" \
> "__crc_" #sym ":" "\n" \
> ".long " #crc "\n" \
> ".previous" "\n")


Yes!


Please send a patch with this:

Fixes: f3304ecd7f06 ("linux/export: use inline assembler to populate symbol CRCs")



--
Best Regards
Masahiro Yamada