2022-04-26 12:59:26

by Feng Tang

Subject: [PATCH v3] x86, vmlinux.lds: Add debug option to force all data sections aligned

0Day has reported many strange performance changes (regressions or
improvements) in which there was no obvious relation between the culprit
commit and the benchmark at first glance, which leads people to suspect
that the test itself is wrong.

Upon further checking, many of these cases turn out to be caused by a
change to the alignment of kernel text or data: since the whole text/data
of the kernel is linked together compactly, a change in one domain can
affect the alignment of other domains linked after it.

To help quickly identify whether a strange performance change is caused
by _data_ alignment, add a debug option that forces the data sections
from all .o files to be aligned on PAGE_SIZE, so that a change in one
domain won't affect other modules' data alignment.
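In linker-script terms this is a SUBALIGN() on the output .data section. A
minimal standalone sketch (the section patterns and the 0x1000 constant are
illustrative of x86 PAGE_SIZE, not the actual vmlinux.lds.S):

```
SECTIONS
{
	/*
	 * Force every input .data section (one per .o file) to start on
	 * a PAGE_SIZE (4 KiB) boundary, so growing or shrinking data in
	 * one object cannot shift the alignment of the objects linked
	 * after it.
	 */
	.data : SUBALIGN(0x1000)
	{
		*(.data .data.*)
	}
}
```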

We have used this option to check some strange kernel performance
changes [1][2][3], and those changes disappeared after enabling it,
which proved they were data-alignment related. Besides these publicly
reported cases, 0Day has recently found other similar cases, and has
been actively using this option to analyze strange performance changes
and to filter some out before reporting them.

With the debug option on, the vmlinux file is around 0.79% larger:

$ ls -l vmlinux*
805891208 Apr 24 19:31 vmlinux.data-4k-align
799599752 Apr 24 19:28 vmlinux.raw

$ size vmlinux*
    text     data     bss      dec     hex filename
17849671 22654886 4702208 45206765 2b1cced vmlinux.data-4k-align
17849671 14784294 6275072 38909037 251b46d vmlinux.raw

Similarly, there is another kernel debug option for checking
text-alignment-related performance changes:
CONFIG_DEBUG_FORCE_FUNCTION_ALIGN_64B, which forces every function's
start address to be 64-byte aligned.

This option depends on CONFIG_DYNAMIC_DEBUG=n, as the '__dyndbg'
subsection of .data has a hard alignment requirement of ALIGN(8), as
shown in 'vmlinux.lds':

"
. = ALIGN(8); __start___dyndbg = .; KEEP(*(__dyndbg)) __stop___dyndbg = .;
"

It contains all the pointers to 'struct _ddebug', and
dynamic_debug_init() does "pointer++" to walk through them, which would
be broken with this option enabled.

[1]. https://lore.kernel.org/lkml/20200205123216.GO12867@shao2-debian/
[2]. https://lore.kernel.org/lkml/20200305062138.GI5972@shao2-debian/
[3]. https://lore.kernel.org/lkml/20201112140625.GA21612@xsang-OptiPlex-9020/

Signed-off-by: Feng Tang <[email protected]>
---
Changelog:

since v2
* correct some typos about THREAD_SIZE/PAGE_SIZE, as pointed out
by Peter Zijlstra

since v1
* reduce the alignment from THREAD_SIZE to PAGE_SIZE
* refine the commit log with size change data, and code comments

since RFC (https://lore.kernel.org/lkml/[email protected]/)
* rebase against 5.17-rc1
* modify the changelog adding more recent info

arch/x86/Kconfig.debug | 13 +++++++++++++
arch/x86/kernel/vmlinux.lds.S | 12 +++++++++++-
2 files changed, 24 insertions(+), 1 deletion(-)

diff --git a/arch/x86/Kconfig.debug b/arch/x86/Kconfig.debug
index d3a6f74a94bd..d8edf546f372 100644
--- a/arch/x86/Kconfig.debug
+++ b/arch/x86/Kconfig.debug
@@ -225,6 +225,19 @@ config PUNIT_ATOM_DEBUG
The current power state can be read from
/sys/kernel/debug/punit_atom/dev_power_state

+config DEBUG_FORCE_DATA_SECTION_ALIGNED
+	bool "Force all data sections to be PAGE_SIZE aligned"
+	depends on EXPERT && !DYNAMIC_DEBUG
+	help
+	  There are cases where a commit from one kernel domain changes
+	  the data section alignment of other domains, as they are all
+	  linked together compactly, causing a magic performance bump
+	  (regression or improvement) that is hard to debug. Enabling
+	  this option helps to verify whether such a bump is caused by
+	  data alignment changes.
+
+	  It is mainly for debugging and performance tuning use.
+
choice
prompt "Choose kernel unwinder"
default UNWINDER_ORC if X86_64
diff --git a/arch/x86/kernel/vmlinux.lds.S b/arch/x86/kernel/vmlinux.lds.S
index 7fda7f27e762..0919872602f1 100644
--- a/arch/x86/kernel/vmlinux.lds.S
+++ b/arch/x86/kernel/vmlinux.lds.S
@@ -155,7 +155,17 @@ SECTIONS
X86_ALIGN_RODATA_END

/* Data */
- .data : AT(ADDR(.data) - LOAD_OFFSET) {
+ .data : AT(ADDR(.data) - LOAD_OFFSET)
+#ifdef CONFIG_DEBUG_FORCE_DATA_SECTION_ALIGNED
+	/*
+	 * In theory, THREAD_SIZE, the biggest alignment of the sections
+	 * below, should be picked. But since the preceding
+	 * 'X86_ALIGN_RODATA_END' already guarantees the alignment of
+	 * 'INIT_TASK_DATA', PAGE_SIZE is picked to shrink the binary.
+	 */
+ SUBALIGN(PAGE_SIZE)
+#endif
+ {
/* Start of data section */
_sdata = .;

--
2.27.0