2020-06-24 20:33:26

by Sami Tolvanen

Subject: [PATCH 00/22] add support for Clang LTO

This patch series adds support for building x86_64 and arm64 kernels
with Clang's Link Time Optimization (LTO).

In addition to performance, the primary motivation for LTO is to allow
Clang's Control-Flow Integrity (CFI) to be used in the kernel. Google's
Pixel devices have shipped with LTO+CFI kernels since 2018.

Most of the patches are build system changes: handling the LLVM
bitcode that Clang produces instead of ELF object files when LTO is
enabled, postponing ELF processing until a later stage, and ensuring
correct initcall ordering.
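
With LTO enabled, the difference is easy to observe in the build
tree. As a quick check (assuming an LTO-enabled build; the exact
file(1) output may vary by version):

  $ file kernel/fork.o
  kernel/fork.o: LLVM IR bitcode

Intermediate objects remain bitcode like this until the final link.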

Note that the first objtool patch in the series is already in
linux-next, but as it's needed with LTO, I'm also including it here
to make testing easier.

Sami Tolvanen (22):
objtool: use sh_info to find the base for .rela sections
kbuild: add support for Clang LTO
kbuild: lto: fix module versioning
kbuild: lto: fix recordmcount
kbuild: lto: postpone objtool
kbuild: lto: limit inlining
kbuild: lto: merge module sections
kbuild: lto: remove duplicate dependencies from .mod files
init: lto: ensure initcall ordering
init: lto: fix PREL32 relocations
pci: lto: fix PREL32 relocations
modpost: lto: strip .lto from module names
scripts/mod: disable LTO for empty.c
efi/libstub: disable LTO
drivers/misc/lkdtm: disable LTO for rodata.o
arm64: export CC_USING_PATCHABLE_FUNCTION_ENTRY
arm64: vdso: disable LTO
arm64: allow LTO_CLANG and THINLTO to be selected
x86, vdso: disable LTO only for vDSO
x86, ftrace: disable recordmcount for ftrace_make_nop
x86, relocs: Ignore L4_PAGE_OFFSET relocations
x86, build: allow LTO_CLANG and THINLTO to be selected

.gitignore | 1 +
Makefile | 27 ++-
arch/Kconfig | 65 +++++++
arch/arm64/Kconfig | 2 +
arch/arm64/Makefile | 1 +
arch/arm64/kernel/vdso/Makefile | 4 +-
arch/x86/Kconfig | 2 +
arch/x86/Makefile | 5 +
arch/x86/entry/vdso/Makefile | 5 +-
arch/x86/kernel/ftrace.c | 1 +
arch/x86/tools/relocs.c | 1 +
drivers/firmware/efi/libstub/Makefile | 2 +
drivers/misc/lkdtm/Makefile | 1 +
include/asm-generic/vmlinux.lds.h | 12 +-
include/linux/compiler-clang.h | 4 +
include/linux/compiler.h | 2 +-
include/linux/compiler_types.h | 4 +
include/linux/init.h | 78 +++++++-
include/linux/pci.h | 15 +-
kernel/trace/ftrace.c | 1 +
lib/Kconfig.debug | 2 +-
scripts/Makefile.build | 55 +++++-
scripts/Makefile.lib | 6 +-
scripts/Makefile.modfinal | 40 +++-
scripts/Makefile.modpost | 26 ++-
scripts/generate_initcall_order.pl | 270 ++++++++++++++++++++++++++
scripts/link-vmlinux.sh | 100 +++++++++-
scripts/mod/Makefile | 1 +
scripts/mod/modpost.c | 16 +-
scripts/mod/modpost.h | 9 +
scripts/mod/sumversion.c | 6 +-
scripts/module-lto.lds | 26 +++
scripts/recordmcount.c | 3 +-
tools/objtool/elf.c | 2 +-
34 files changed, 737 insertions(+), 58 deletions(-)
create mode 100755 scripts/generate_initcall_order.pl
create mode 100644 scripts/module-lto.lds


base-commit: 26e122e97a3d0390ebec389347f64f3730fdf48f
--
2.27.0.212.ge8ba1cc988-goog


2020-06-24 20:33:40

by Sami Tolvanen

Subject: [PATCH 01/22] objtool: use sh_info to find the base for .rela sections

ELF doesn't require .rela section names to match the base section. Use
the section index in sh_info to find the section instead of looking it
up by name.

LLD, for example, generates a .rela section that doesn't match the base
section name when we merge sections in a linker script for a binary
compiled with -ffunction-sections.

Signed-off-by: Sami Tolvanen <[email protected]>
Signed-off-by: Josh Poimboeuf <[email protected]>
Reviewed-by: Kees Cook <[email protected]>
---
tools/objtool/elf.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/objtool/elf.c b/tools/objtool/elf.c
index 84225679f96d..c1ba92abaa03 100644
--- a/tools/objtool/elf.c
+++ b/tools/objtool/elf.c
@@ -502,7 +502,7 @@ static int read_relas(struct elf *elf)
if (sec->sh.sh_type != SHT_RELA)
continue;

- sec->base = find_section_by_name(elf, sec->name + 5);
+ sec->base = find_section_by_index(elf, sec->sh.sh_info);
if (!sec->base) {
WARN("can't find base section for rela section %s",
sec->name);
--
2.27.0.212.ge8ba1cc988-goog

2020-06-24 20:33:44

by Sami Tolvanen

Subject: [PATCH 02/22] kbuild: add support for Clang LTO

This change adds build system support for Clang's Link Time
Optimization (LTO). With -flto, instead of ELF object files, Clang
produces LLVM bitcode, which is compiled into native code at link
time, allowing the final binary to be optimized globally. For more
details, see:

https://llvm.org/docs/LinkTimeOptimization.html

The Kconfig option CONFIG_LTO_CLANG is implemented as a choice,
which defaults to LTO being disabled. To use LTO, the architecture
must select ARCH_SUPPORTS_LTO_CLANG and support:

- compiling with Clang,
- compiling inline assembly with Clang's integrated assembler,
- and linking with LLD.

While full LTO results in the best runtime performance, compilation
does not scale well in either build time or memory consumption.
CONFIG_THINLTO enables ThinLTO instead, which allows parallel
optimization and faster incremental builds. ThinLTO is used by
default if the architecture also selects ARCH_SUPPORTS_THINLTO:

https://clang.llvm.org/docs/ThinLTO.html

To enable LTO, LLVM tools must be used to handle bitcode files. The
easiest way is to pass the LLVM=1 option to make:

$ make LLVM=1 defconfig
$ scripts/config -e LTO_CLANG
$ make LLVM=1

Alternatively, at least the following LLVM tools must be used:

CC=clang LD=ld.lld AR=llvm-ar NM=llvm-nm

To prepare for LTO support with other compilers, common parts are
gated behind the CONFIG_LTO option, and LTO can be disabled for
specific files by filtering out CC_FLAGS_LTO.
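
For example, a Makefile can opt a single object out of LTO with the
usual CFLAGS_REMOVE mechanism (foo.o is a placeholder):

  CFLAGS_REMOVE_foo.o += $(CC_FLAGS_LTO)

Later patches in this series use exactly this pattern for code that
must be compiled to a native object file.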

Note that support for DYNAMIC_FTRACE and MODVERSIONS is added in
follow-up patches.

Signed-off-by: Sami Tolvanen <[email protected]>
---
Makefile | 16 ++++++++
arch/Kconfig | 66 +++++++++++++++++++++++++++++++
include/asm-generic/vmlinux.lds.h | 11 ++++--
scripts/Makefile.build | 9 ++++-
scripts/Makefile.modfinal | 9 ++++-
scripts/Makefile.modpost | 24 ++++++++++-
scripts/link-vmlinux.sh | 32 +++++++++++----
7 files changed, 151 insertions(+), 16 deletions(-)

diff --git a/Makefile b/Makefile
index ac2c61c37a73..0c7fe6fb2143 100644
--- a/Makefile
+++ b/Makefile
@@ -886,6 +886,22 @@ KBUILD_CFLAGS += $(CC_FLAGS_SCS)
export CC_FLAGS_SCS
endif

+ifdef CONFIG_LTO_CLANG
+ifdef CONFIG_THINLTO
+CC_FLAGS_LTO_CLANG := -flto=thin $(call cc-option, -fsplit-lto-unit)
+KBUILD_LDFLAGS += --thinlto-cache-dir=.thinlto-cache
+else
+CC_FLAGS_LTO_CLANG := -flto
+endif
+CC_FLAGS_LTO_CLANG += -fvisibility=default
+endif
+
+ifdef CONFIG_LTO
+CC_FLAGS_LTO := $(CC_FLAGS_LTO_CLANG)
+KBUILD_CFLAGS += $(CC_FLAGS_LTO)
+export CC_FLAGS_LTO
+endif
+
# arch Makefile may override CC so keep this after arch Makefile is included
NOSTDINC_FLAGS += -nostdinc -isystem $(shell $(CC) -print-file-name=include)

diff --git a/arch/Kconfig b/arch/Kconfig
index 8cc35dc556c7..e00b122293f8 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -552,6 +552,72 @@ config SHADOW_CALL_STACK
reading and writing arbitrary memory may be able to locate them
and hijack control flow by modifying the stacks.

+config LTO
+ bool
+
+config ARCH_SUPPORTS_LTO_CLANG
+ bool
+ help
+ An architecture should select this option if it supports:
+ - compiling with Clang,
+ - compiling inline assembly with Clang's integrated assembler,
+ - and linking with LLD.
+
+config ARCH_SUPPORTS_THINLTO
+ bool
+ help
+ An architecture should select this option if it supports Clang's
+ ThinLTO.
+
+config THINLTO
+ bool "Clang ThinLTO"
+ depends on LTO_CLANG && ARCH_SUPPORTS_THINLTO
+ default y
+ help
+ This option enables Clang's ThinLTO, which allows for parallel
+ optimization and faster incremental compiles. More information
+ can be found in Clang's documentation:
+
+ https://clang.llvm.org/docs/ThinLTO.html
+
+choice
+ prompt "Link Time Optimization (LTO)"
+ default LTO_NONE
+ help
+ This option enables Link Time Optimization (LTO), which allows the
+ compiler to optimize binaries globally.
+
+ If unsure, select LTO_NONE.
+
+config LTO_NONE
+ bool "None"
+
+config LTO_CLANG
+ bool "Clang's Link Time Optimization (EXPERIMENTAL)"
+ depends on CC_IS_CLANG && CLANG_VERSION >= 110000 && LD_IS_LLD
+ depends on $(success,$(NM) --help | head -n 1 | grep -qi llvm)
+ depends on $(success,$(AR) --help | head -n 1 | grep -qi llvm)
+ depends on ARCH_SUPPORTS_LTO_CLANG
+ depends on !FTRACE_MCOUNT_RECORD
+ depends on !KASAN
+ depends on !MODVERSIONS
+ select LTO
+ help
+ This option enables Clang's Link Time Optimization (LTO), which
+ allows the compiler to optimize the kernel globally. If you enable
+ this option, the compiler generates LLVM bitcode instead of ELF
+ object files, and the actual compilation from bitcode happens at
+ the LTO link step, which may take several minutes depending on the
+ kernel configuration. More information can be found in LLVM's
+ documentation:
+
+ https://llvm.org/docs/LinkTimeOptimization.html
+
+ To select this option, you also need to use LLVM tools to handle
+ the bitcode by passing LLVM=1 to make.
+
+endchoice
+
config HAVE_ARCH_WITHIN_STACK_FRAMES
bool
help
diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h
index db600ef218d7..78079000c05a 100644
--- a/include/asm-generic/vmlinux.lds.h
+++ b/include/asm-generic/vmlinux.lds.h
@@ -89,15 +89,18 @@
* .data. We don't want to pull in .data..other sections, which Linux
* has defined. Same for text and bss.
*
+ * With LTO_CLANG, the linker also splits sections by default, so we need
+ * these macros to combine the sections during the final link.
+ *
* RODATA_MAIN is not used because existing code already defines .rodata.x
* sections to be brought in with rodata.
*/
-#ifdef CONFIG_LD_DEAD_CODE_DATA_ELIMINATION
+#if defined(CONFIG_LD_DEAD_CODE_DATA_ELIMINATION) || defined(CONFIG_LTO_CLANG)
#define TEXT_MAIN .text .text.[0-9a-zA-Z_]*
-#define DATA_MAIN .data .data.[0-9a-zA-Z_]* .data..LPBX*
+#define DATA_MAIN .data .data.[0-9a-zA-Z_]* .data..L* .data..compoundliteral*
#define SDATA_MAIN .sdata .sdata.[0-9a-zA-Z_]*
-#define RODATA_MAIN .rodata .rodata.[0-9a-zA-Z_]*
-#define BSS_MAIN .bss .bss.[0-9a-zA-Z_]*
+#define RODATA_MAIN .rodata .rodata.[0-9a-zA-Z_]* .rodata..L*
+#define BSS_MAIN .bss .bss.[0-9a-zA-Z_]* .bss..compoundliteral*
#define SBSS_MAIN .sbss .sbss.[0-9a-zA-Z_]*
#else
#define TEXT_MAIN .text
diff --git a/scripts/Makefile.build b/scripts/Makefile.build
index 2e8810b7e5ed..f307e708a1b7 100644
--- a/scripts/Makefile.build
+++ b/scripts/Makefile.build
@@ -108,7 +108,7 @@ endif
# ---------------------------------------------------------------------------

quiet_cmd_cc_s_c = CC $(quiet_modtag) $@
- cmd_cc_s_c = $(CC) $(filter-out $(DEBUG_CFLAGS), $(c_flags)) $(DISABLE_LTO) -fverbose-asm -S -o $@ $<
+ cmd_cc_s_c = $(CC) $(filter-out $(DEBUG_CFLAGS) $(CC_FLAGS_LTO), $(c_flags)) -fverbose-asm -S -o $@ $<

$(obj)/%.s: $(src)/%.c FORCE
$(call if_changed_dep,cc_s_c)
@@ -424,8 +424,15 @@ $(obj)/lib.a: $(lib-y) FORCE
# Do not replace $(filter %.o,^) with $(real-prereqs). When a single object
# module is turned into a multi object module, $^ will contain header file
# dependencies recorded in the .*.cmd file.
+ifdef CONFIG_LTO_CLANG
+quiet_cmd_link_multi-m = AR [M] $@
+cmd_link_multi-m = \
+ rm -f $@; \
+ $(AR) rcsTP$(KBUILD_ARFLAGS) $@ $(filter %.o,$^)
+else
quiet_cmd_link_multi-m = LD [M] $@
cmd_link_multi-m = $(LD) $(ld_flags) -r -o $@ $(filter %.o,$^)
+endif

$(multi-used-m): FORCE
$(call if_changed,link_multi-m)
diff --git a/scripts/Makefile.modfinal b/scripts/Makefile.modfinal
index 411c1e600e7d..1005b147abd0 100644
--- a/scripts/Makefile.modfinal
+++ b/scripts/Makefile.modfinal
@@ -6,6 +6,7 @@
PHONY := __modfinal
__modfinal:

+include $(objtree)/include/config/auto.conf
include $(srctree)/scripts/Kbuild.include

# for c_flags
@@ -29,6 +30,12 @@ quiet_cmd_cc_o_c = CC [M] $@

ARCH_POSTLINK := $(wildcard $(srctree)/arch/$(SRCARCH)/Makefile.postlink)

+ifdef CONFIG_LTO_CLANG
+# With CONFIG_LTO_CLANG, reuse the object file we compiled for modpost to
+# avoid a second slow LTO link
+prelink-ext := .lto
+endif
+
quiet_cmd_ld_ko_o = LD [M] $@
cmd_ld_ko_o = \
$(LD) -r $(KBUILD_LDFLAGS) \
@@ -37,7 +44,7 @@ quiet_cmd_ld_ko_o = LD [M] $@
-o $@ $(filter %.o, $^); \
$(if $(ARCH_POSTLINK), $(MAKE) -f $(ARCH_POSTLINK) $@, true)

-$(modules): %.ko: %.o %.mod.o $(KBUILD_LDS_MODULE) FORCE
+$(modules): %.ko: %$(prelink-ext).o %.mod.o $(KBUILD_LDS_MODULE) FORCE
+$(call if_changed,ld_ko_o)

targets += $(modules) $(modules:.ko=.mod.o)
diff --git a/scripts/Makefile.modpost b/scripts/Makefile.modpost
index 3651cbf6ad49..9ced8aecd579 100644
--- a/scripts/Makefile.modpost
+++ b/scripts/Makefile.modpost
@@ -102,12 +102,32 @@ $(input-symdump):
@echo >&2 'WARNING: Symbol version dump "$@" is missing.'
@echo >&2 ' Modules may not have dependencies or modversions.'

+ifdef CONFIG_LTO_CLANG
+# With CONFIG_LTO_CLANG, .o files might be LLVM bitcode, so we need to run
+# LTO to compile them into native code before running modpost
+prelink-ext = .lto
+
+quiet_cmd_cc_lto_link_modules = LTO [M] $@
+cmd_cc_lto_link_modules = \
+ $(LD) $(ld_flags) -r -o $@ \
+ --whole-archive $(filter-out FORCE,$^)
+
+%.lto.o: %.o FORCE
+ $(call if_changed,cc_lto_link_modules)
+
+PHONY += FORCE
+FORCE:
+
+endif
+
+modules := $(sort $(shell cat $(MODORDER)))
+
# Read out modules.order to pass in modpost.
# Otherwise, allmodconfig would fail with "Argument list too long".
quiet_cmd_modpost = MODPOST $@
- cmd_modpost = sed 's/ko$$/o/' $< | $(MODPOST) -T -
+ cmd_modpost = sed 's/\.ko$$/$(prelink-ext)\.o/' $< | $(MODPOST) -T -

-$(output-symdump): $(MODORDER) $(input-symdump) FORCE
+$(output-symdump): $(MODORDER) $(input-symdump) $(modules:.ko=$(prelink-ext).o) FORCE
$(call if_changed,modpost)

targets += $(output-symdump)
diff --git a/scripts/link-vmlinux.sh b/scripts/link-vmlinux.sh
index 92dd745906f4..a681b3b6722e 100755
--- a/scripts/link-vmlinux.sh
+++ b/scripts/link-vmlinux.sh
@@ -52,6 +52,14 @@ modpost_link()
${KBUILD_VMLINUX_LIBS} \
--end-group"

+ if [ -n "${CONFIG_LTO_CLANG}" ]; then
+ # This might take a while, so indicate that we're doing
+ # an LTO link
+ info LTO ${1}
+ else
+ info LD ${1}
+ fi
+
${LD} ${KBUILD_LDFLAGS} -r -o ${1} ${objects}
}

@@ -99,13 +107,22 @@ vmlinux_link()
fi

if [ "${SRCARCH}" != "um" ]; then
- objects="--whole-archive \
- ${KBUILD_VMLINUX_OBJS} \
- --no-whole-archive \
- --start-group \
- ${KBUILD_VMLINUX_LIBS} \
- --end-group \
- ${@}"
+ if [ -n "${CONFIG_LTO_CLANG}" ]; then
+ # Use vmlinux.o instead of performing the slow LTO
+ # link again.
+ objects="--whole-archive \
+ vmlinux.o \
+ --no-whole-archive \
+ ${@}"
+ else
+ objects="--whole-archive \
+ ${KBUILD_VMLINUX_OBJS} \
+ --no-whole-archive \
+ --start-group \
+ ${KBUILD_VMLINUX_LIBS} \
+ --end-group \
+ ${@}"
+ fi

${LD} ${KBUILD_LDFLAGS} ${LDFLAGS_vmlinux} \
${strip_debug#-Wl,} \
@@ -270,7 +287,6 @@ fi;
${MAKE} -f "${srctree}/scripts/Makefile.build" obj=init need-builtin=1

#link vmlinux.o
-info LD vmlinux.o
modpost_link vmlinux.o
objtool_link vmlinux.o

--
2.27.0.212.ge8ba1cc988-goog

2020-06-24 20:33:58

by Sami Tolvanen

Subject: [PATCH 04/22] kbuild: lto: fix recordmcount

With LTO, LLVM bitcode won't be compiled into native code until
modpost_link. This change postpones calls to recordmcount until after
this step.

In order to exclude specific functions from inspection, we add a new
code section .text..nomcount, which we tell recordmcount to ignore, and
a __nomcount attribute for moving functions to this section.
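
As a sketch (the function name is illustrative), excluding a function
then looks like this:

  /* placed in .text..nomcount, which recordmcount ignores */
  __nomcount
  static int process_mcount_locs(struct module *mod)
  {
          return 0;
  }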

Signed-off-by: Sami Tolvanen <[email protected]>
---
Makefile | 2 +-
arch/Kconfig | 2 +-
include/asm-generic/vmlinux.lds.h | 1 +
include/linux/compiler-clang.h | 4 ++++
include/linux/compiler_types.h | 4 ++++
kernel/trace/ftrace.c | 1 +
scripts/Makefile.build | 9 +++++++++
scripts/Makefile.modfinal | 18 ++++++++++++++++--
scripts/link-vmlinux.sh | 29 +++++++++++++++++++++++++++++
scripts/recordmcount.c | 3 ++-
10 files changed, 68 insertions(+), 5 deletions(-)

diff --git a/Makefile b/Makefile
index 161ad0d1f77f..3a7e5e5c17b9 100644
--- a/Makefile
+++ b/Makefile
@@ -861,7 +861,7 @@ KBUILD_AFLAGS += $(CC_FLAGS_USING)
ifdef CONFIG_DYNAMIC_FTRACE
ifdef CONFIG_HAVE_C_RECORDMCOUNT
BUILD_C_RECORDMCOUNT := y
- export BUILD_C_RECORDMCOUNT
+ export BUILD_C_RECORDMCOUNT RECORDMCOUNT_WARN
endif
endif
endif
diff --git a/arch/Kconfig b/arch/Kconfig
index 87488fe1e6b8..85b2044b927d 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -598,7 +598,7 @@ config LTO_CLANG
depends on $(success,$(NM) --help | head -n 1 | grep -qi llvm)
depends on $(success,$(AR) --help | head -n 1 | grep -qi llvm)
depends on ARCH_SUPPORTS_LTO_CLANG
- depends on !FTRACE_MCOUNT_RECORD
+ depends on !FTRACE_MCOUNT_RECORD || HAVE_C_RECORDMCOUNT
depends on !KASAN
select LTO
help
diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h
index 78079000c05a..a1c902b808d0 100644
--- a/include/asm-generic/vmlinux.lds.h
+++ b/include/asm-generic/vmlinux.lds.h
@@ -565,6 +565,7 @@
*(.text.hot TEXT_MAIN .text.fixup .text.unlikely) \
NOINSTR_TEXT \
*(.text..refcount) \
+ *(.text..nomcount) \
*(.ref.text) \
MEM_KEEP(init.text*) \
MEM_KEEP(exit.text*) \
diff --git a/include/linux/compiler-clang.h b/include/linux/compiler-clang.h
index ee37256ec8bd..fd78475c0642 100644
--- a/include/linux/compiler-clang.h
+++ b/include/linux/compiler-clang.h
@@ -55,3 +55,7 @@
#if __has_feature(shadow_call_stack)
# define __noscs __attribute__((__no_sanitize__("shadow-call-stack")))
#endif
+
+#if defined(CONFIG_LTO_CLANG) && defined(CONFIG_FTRACE_MCOUNT_RECORD)
+#define __nomcount __attribute__((__section__(".text..nomcount")))
+#endif
diff --git a/include/linux/compiler_types.h b/include/linux/compiler_types.h
index e368384445b6..1470c9703a25 100644
--- a/include/linux/compiler_types.h
+++ b/include/linux/compiler_types.h
@@ -233,6 +233,10 @@ struct ftrace_likely_data {
# define __noscs
#endif

+#ifndef __nomcount
+# define __nomcount
+#endif
+
#ifndef asm_volatile_goto
#define asm_volatile_goto(x...) asm goto(x)
#endif
diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index 1903b80db6eb..8e3ddb8123d9 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -6062,6 +6062,7 @@ static int ftrace_cmp_ips(const void *a, const void *b)
return 0;
}

+__nomcount
static int ftrace_process_locs(struct module *mod,
unsigned long *start,
unsigned long *end)
diff --git a/scripts/Makefile.build b/scripts/Makefile.build
index 5c0bbb6ddfcf..64e99f4baa5b 100644
--- a/scripts/Makefile.build
+++ b/scripts/Makefile.build
@@ -187,6 +187,9 @@ endif

ifdef CONFIG_FTRACE_MCOUNT_RECORD
ifndef CC_USING_RECORD_MCOUNT
+ifndef CC_USING_PATCHABLE_FUNCTION_ENTRY
+# With LTO, we postpone recordmcount until we compile a native binary
+ifndef CONFIG_LTO_CLANG
# compiler will not generate __mcount_loc use recordmcount or recordmcount.pl
ifdef BUILD_C_RECORDMCOUNT
ifeq ("$(origin RECORDMCOUNT_WARN)", "command line")
@@ -200,6 +203,8 @@ sub_cmd_record_mcount = \
if [ $(@) != "scripts/mod/empty.o" ]; then \
$(objtree)/scripts/recordmcount $(RECORDMCOUNT_FLAGS) "$(@)"; \
fi;
+endif # CONFIG_LTO_CLANG
+
recordmcount_source := $(srctree)/scripts/recordmcount.c \
$(srctree)/scripts/recordmcount.h
else
@@ -209,11 +214,15 @@ sub_cmd_record_mcount = perl $(srctree)/scripts/recordmcount.pl "$(ARCH)" \
"$(OBJDUMP)" "$(OBJCOPY)" "$(CC) $(KBUILD_CPPFLAGS) $(KBUILD_CFLAGS)" \
"$(LD) $(KBUILD_LDFLAGS)" "$(NM)" "$(RM)" "$(MV)" \
"$(if $(part-of-module),1,0)" "$(@)";
+
recordmcount_source := $(srctree)/scripts/recordmcount.pl
endif # BUILD_C_RECORDMCOUNT
+ifndef CONFIG_LTO_CLANG
cmd_record_mcount = $(if $(findstring $(strip $(CC_FLAGS_FTRACE)),$(_c_flags)), \
$(sub_cmd_record_mcount))
+endif # CONFIG_LTO_CLANG
endif # CC_USING_RECORD_MCOUNT
+endif # CC_USING_PATCHABLE_FUNCTION_ENTRY
endif # CONFIG_FTRACE_MCOUNT_RECORD

ifdef CONFIG_STACK_VALIDATION
diff --git a/scripts/Makefile.modfinal b/scripts/Makefile.modfinal
index 1005b147abd0..d168f0cfe67c 100644
--- a/scripts/Makefile.modfinal
+++ b/scripts/Makefile.modfinal
@@ -34,10 +34,24 @@ ifdef CONFIG_LTO_CLANG
# With CONFIG_LTO_CLANG, reuse the object file we compiled for modpost to
# avoid a second slow LTO link
prelink-ext := .lto
-endif
+
+# ELF processing was skipped earlier because we didn't have native code,
+# so let's now process the prelinked binary before we link the module.
+
+ifdef CONFIG_FTRACE_MCOUNT_RECORD
+ifndef CC_USING_RECORD_MCOUNT
+ifndef CC_USING_PATCHABLE_FUNCTION_ENTRY
+cmd_ld_ko_o += $(objtree)/scripts/recordmcount $(RECORDMCOUNT_FLAGS) \
+ $(@:.ko=$(prelink-ext).o);
+
+endif # CC_USING_PATCHABLE_FUNCTION_ENTRY
+endif # CC_USING_RECORD_MCOUNT
+endif # CONFIG_FTRACE_MCOUNT_RECORD
+
+endif # CONFIG_LTO_CLANG

quiet_cmd_ld_ko_o = LD [M] $@
- cmd_ld_ko_o = \
+ cmd_ld_ko_o += \
$(LD) -r $(KBUILD_LDFLAGS) \
$(KBUILD_LDFLAGS_MODULE) $(LDFLAGS_MODULE) \
$(addprefix -T , $(KBUILD_LDS_MODULE)) \
diff --git a/scripts/link-vmlinux.sh b/scripts/link-vmlinux.sh
index 69a6d7254e28..c72f5d0238f1 100755
--- a/scripts/link-vmlinux.sh
+++ b/scripts/link-vmlinux.sh
@@ -108,6 +108,29 @@ objtool_link()
fi
}

+# If CONFIG_LTO_CLANG is selected, we postpone running recordmcount until
+# we have compiled LLVM IR to an object file.
+recordmcount()
+{
+ if [ "${CONFIG_LTO_CLANG} ${CONFIG_FTRACE_MCOUNT_RECORD}" != "y y" ]; then
+ return
+ fi
+
+ if [ -n "${CC_USING_RECORD_MCOUNT}" ]; then
+ return
+ fi
+ if [ -n "${CC_USING_PATCHABLE_FUNCTION_ENTRY}" ]; then
+ return
+ fi
+
+ local flags=""
+
+ [ -n "${RECORDMCOUNT_WARN}" ] && flags="-w"
+
+ info MCOUNT $*
+ ${objtree}/scripts/recordmcount ${flags} $*
+}
+
# Link of vmlinux
# ${1} - output file
# ${2}, ${3}, ... - optional extra .o files
@@ -316,6 +339,12 @@ objtool_link vmlinux.o
# modpost vmlinux.o to check for section mismatches
${MAKE} -f "${srctree}/scripts/Makefile.modpost" MODPOST_VMLINUX=1

+if [ -n "${CONFIG_LTO_CLANG}" ]; then
+ # If we postponed ELF processing steps due to LTO, process
+ # vmlinux.o instead.
+ recordmcount vmlinux.o
+fi
+
info MODINFO modules.builtin.modinfo
${OBJCOPY} -j .modinfo -O binary vmlinux.o modules.builtin.modinfo
info GEN modules.builtin
diff --git a/scripts/recordmcount.c b/scripts/recordmcount.c
index 7225107a9aaf..9e9f10b4d649 100644
--- a/scripts/recordmcount.c
+++ b/scripts/recordmcount.c
@@ -404,7 +404,8 @@ static uint32_t (*w2)(uint16_t);
/* Names of the sections that could contain calls to mcount. */
static int is_mcounted_section_name(char const *const txtname)
{
- return strncmp(".text", txtname, 5) == 0 ||
+ return (strncmp(".text", txtname, 5) == 0 &&
+ strcmp(".text..nomcount", txtname) != 0) ||
strcmp(".init.text", txtname) == 0 ||
strcmp(".ref.text", txtname) == 0 ||
strcmp(".sched.text", txtname) == 0 ||
--
2.27.0.212.ge8ba1cc988-goog

2020-06-24 20:34:09

by Sami Tolvanen

Subject: [PATCH 06/22] kbuild: lto: limit inlining

This change limits function inlining across translation unit
boundaries in order to reduce the binary size with LTO.

The -import-instr-limit flag defines a size limit, in number of LLVM
IR instructions, for importing functions from other translation
units. The default value is 100; decreasing it to 5 reduces the size
of a stripped arm64 defconfig vmlinux by 11%.

Suggested-by: George Burgess IV <[email protected]>
Signed-off-by: Sami Tolvanen <[email protected]>
---
Makefile | 4 ++++
1 file changed, 4 insertions(+)

diff --git a/Makefile b/Makefile
index 3a7e5e5c17b9..ee66513a5b66 100644
--- a/Makefile
+++ b/Makefile
@@ -894,6 +894,10 @@ else
CC_FLAGS_LTO_CLANG := -flto
endif
CC_FLAGS_LTO_CLANG += -fvisibility=default
+
+# Limit inlining across translation units to reduce binary size
+LD_FLAGS_LTO_CLANG := -mllvm -import-instr-limit=5
+KBUILD_LDFLAGS += $(LD_FLAGS_LTO_CLANG)
endif

ifdef CONFIG_LTO
--
2.27.0.212.ge8ba1cc988-goog

2020-06-24 20:34:19

by Sami Tolvanen

Subject: [PATCH 13/22] scripts/mod: disable LTO for empty.c

With CONFIG_LTO_CLANG, clang generates LLVM IR instead of ELF object
files. As empty.o is used for probing target properties, disable LTO
for it so that an ELF object file is produced instead.

Signed-off-by: Sami Tolvanen <[email protected]>
---
scripts/mod/Makefile | 1 +
1 file changed, 1 insertion(+)

diff --git a/scripts/mod/Makefile b/scripts/mod/Makefile
index 296b6a3878b2..b6e3b40c6eeb 100644
--- a/scripts/mod/Makefile
+++ b/scripts/mod/Makefile
@@ -1,5 +1,6 @@
# SPDX-License-Identifier: GPL-2.0
OBJECT_FILES_NON_STANDARD := y
+CFLAGS_REMOVE_empty.o += $(CC_FLAGS_LTO)

hostprogs := modpost mk_elfconfig
always-y := $(hostprogs) empty.o
--
2.27.0.212.ge8ba1cc988-goog

2020-06-24 20:34:23

by Sami Tolvanen

Subject: [PATCH 14/22] efi/libstub: disable LTO

With CONFIG_LTO_CLANG, we produce LLVM bitcode instead of ELF object
files. Since LTO is not really needed here and the Makefile assumes we
produce an object file, disable LTO for libstub.

Signed-off-by: Sami Tolvanen <[email protected]>
---
drivers/firmware/efi/libstub/Makefile | 2 ++
1 file changed, 2 insertions(+)

diff --git a/drivers/firmware/efi/libstub/Makefile b/drivers/firmware/efi/libstub/Makefile
index 75daaf20374e..95e12002cc7c 100644
--- a/drivers/firmware/efi/libstub/Makefile
+++ b/drivers/firmware/efi/libstub/Makefile
@@ -35,6 +35,8 @@ KBUILD_CFLAGS := $(cflags-y) -Os -DDISABLE_BRANCH_PROFILING \

# remove SCS flags from all objects in this directory
KBUILD_CFLAGS := $(filter-out $(CC_FLAGS_SCS), $(KBUILD_CFLAGS))
+# disable LTO
+KBUILD_CFLAGS := $(filter-out $(CC_FLAGS_LTO), $(KBUILD_CFLAGS))

GCOV_PROFILE := n
# Sanitizer runtimes are unavailable and cannot be linked here.
--
2.27.0.212.ge8ba1cc988-goog

2020-06-24 20:34:27

by Sami Tolvanen

Subject: [PATCH 17/22] arm64: vdso: disable LTO

Filter out CC_FLAGS_LTO for the vDSO.

Signed-off-by: Sami Tolvanen <[email protected]>
---
arch/arm64/kernel/vdso/Makefile | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/kernel/vdso/Makefile b/arch/arm64/kernel/vdso/Makefile
index 556d424c6f52..cfad4c296ca1 100644
--- a/arch/arm64/kernel/vdso/Makefile
+++ b/arch/arm64/kernel/vdso/Makefile
@@ -29,8 +29,8 @@ ldflags-y := -shared -nostdlib -soname=linux-vdso.so.1 --hash-style=sysv \
ccflags-y := -fno-common -fno-builtin -fno-stack-protector -ffixed-x18
ccflags-y += -DDISABLE_BRANCH_PROFILING

-CFLAGS_REMOVE_vgettimeofday.o = $(CC_FLAGS_FTRACE) -Os $(CC_FLAGS_SCS)
-KBUILD_CFLAGS += $(DISABLE_LTO)
+CFLAGS_REMOVE_vgettimeofday.o = $(CC_FLAGS_FTRACE) -Os $(CC_FLAGS_SCS) \
+ $(CC_FLAGS_LTO)
KASAN_SANITIZE := n
UBSAN_SANITIZE := n
OBJECT_FILES_NON_STANDARD := y
--
2.27.0.212.ge8ba1cc988-goog

2020-06-24 20:34:28

by Sami Tolvanen

Subject: [PATCH 18/22] arm64: allow LTO_CLANG and THINLTO to be selected

Allow CONFIG_LTO_CLANG and CONFIG_THINLTO to be enabled.

Signed-off-by: Sami Tolvanen <[email protected]>
---
arch/arm64/Kconfig | 2 ++
1 file changed, 2 insertions(+)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index a4a094bedcb2..e1961653964d 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -72,6 +72,8 @@ config ARM64
select ARCH_USE_SYM_ANNOTATIONS
select ARCH_SUPPORTS_MEMORY_FAILURE
select ARCH_SUPPORTS_SHADOW_CALL_STACK if CC_HAVE_SHADOW_CALL_STACK
+ select ARCH_SUPPORTS_LTO_CLANG
+ select ARCH_SUPPORTS_THINLTO
select ARCH_SUPPORTS_ATOMIC_RMW
select ARCH_SUPPORTS_INT128 if CC_HAS_INT128 && (GCC_VERSION >= 50000 || CC_IS_CLANG)
select ARCH_SUPPORTS_NUMA_BALANCING
--
2.27.0.212.ge8ba1cc988-goog

2020-06-24 20:34:30

by Sami Tolvanen

Subject: [PATCH 20/22] x86, ftrace: disable recordmcount for ftrace_make_nop

Mark ftrace_make_nop with __nomcount so recordmcount ignores the
mcount relocations in this function.

Signed-off-by: Sami Tolvanen <[email protected]>
---
arch/x86/kernel/ftrace.c | 1 +
1 file changed, 1 insertion(+)

diff --git a/arch/x86/kernel/ftrace.c b/arch/x86/kernel/ftrace.c
index 51504566b3a6..c3b28b81277b 100644
--- a/arch/x86/kernel/ftrace.c
+++ b/arch/x86/kernel/ftrace.c
@@ -121,6 +121,7 @@ ftrace_modify_code_direct(unsigned long ip, const char *old_code,
return 0;
}

+__nomcount
int ftrace_make_nop(struct module *mod, struct dyn_ftrace *rec, unsigned long addr)
{
unsigned long ip = rec->ip;
--
2.27.0.212.ge8ba1cc988-goog

2020-06-24 20:34:40

by Sami Tolvanen

Subject: [PATCH 22/22] x86, build: allow LTO_CLANG and THINLTO to be selected

Allow CONFIG_LTO_CLANG and CONFIG_THINLTO to be enabled.

Signed-off-by: Sami Tolvanen <[email protected]>
---
arch/x86/Kconfig | 2 ++
arch/x86/Makefile | 5 +++++
2 files changed, 7 insertions(+)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 6a0cc524882d..df335b1f9c31 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -92,6 +92,8 @@ config X86
select ARCH_SUPPORTS_ACPI
select ARCH_SUPPORTS_ATOMIC_RMW
select ARCH_SUPPORTS_NUMA_BALANCING if X86_64
+ select ARCH_SUPPORTS_LTO_CLANG if X86_64
+ select ARCH_SUPPORTS_THINLTO if X86_64
select ARCH_USE_BUILTIN_BSWAP
select ARCH_USE_QUEUED_RWLOCKS
select ARCH_USE_QUEUED_SPINLOCKS
diff --git a/arch/x86/Makefile b/arch/x86/Makefile
index 00e378de8bc0..a1abc1e081ad 100644
--- a/arch/x86/Makefile
+++ b/arch/x86/Makefile
@@ -188,6 +188,11 @@ ifdef CONFIG_X86_64
KBUILD_LDFLAGS += $(call ld-option, -z max-page-size=0x200000)
endif

+ifdef CONFIG_LTO_CLANG
+KBUILD_LDFLAGS += -plugin-opt=-code-model=kernel \
+ -plugin-opt=-stack-alignment=$(if $(CONFIG_X86_32),4,8)
+endif
+
# Workaround for a gcc prelease that unfortunately was shipped in a suse release
KBUILD_CFLAGS += -Wno-sign-compare
#
--
2.27.0.212.ge8ba1cc988-goog

2020-06-24 20:35:00

by Sami Tolvanen

Subject: [PATCH 19/22] x86, vdso: disable LTO only for vDSO

Remove the undefined DISABLE_LTO flag from the vDSO build, and
instead filter out CC_FLAGS_LTO where needed.

Signed-off-by: Sami Tolvanen <[email protected]>
---
arch/x86/entry/vdso/Makefile | 5 ++---
1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/arch/x86/entry/vdso/Makefile b/arch/x86/entry/vdso/Makefile
index 04e65f0698f6..67f60662830a 100644
--- a/arch/x86/entry/vdso/Makefile
+++ b/arch/x86/entry/vdso/Makefile
@@ -9,8 +9,6 @@ ARCH_REL_TYPE_ABS := R_X86_64_JUMP_SLOT|R_X86_64_GLOB_DAT|R_X86_64_RELATIVE|
ARCH_REL_TYPE_ABS += R_386_GLOB_DAT|R_386_JMP_SLOT|R_386_RELATIVE
include $(srctree)/lib/vdso/Makefile

-KBUILD_CFLAGS += $(DISABLE_LTO)
-
# Sanitizer runtimes are unavailable and cannot be linked here.
KASAN_SANITIZE := n
UBSAN_SANITIZE := n
@@ -92,7 +90,7 @@ ifneq ($(RETPOLINE_VDSO_CFLAGS),)
endif
endif

-$(vobjs): KBUILD_CFLAGS := $(filter-out $(GCC_PLUGINS_CFLAGS) $(RETPOLINE_CFLAGS),$(KBUILD_CFLAGS)) $(CFL)
+$(vobjs): KBUILD_CFLAGS := $(filter-out $(CC_FLAGS_LTO) $(GCC_PLUGINS_CFLAGS) $(RETPOLINE_CFLAGS),$(KBUILD_CFLAGS)) $(CFL)

#
# vDSO code runs in userspace and -pg doesn't help with profiling anyway.
@@ -150,6 +148,7 @@ KBUILD_CFLAGS_32 := $(filter-out -fno-pic,$(KBUILD_CFLAGS_32))
KBUILD_CFLAGS_32 := $(filter-out -mfentry,$(KBUILD_CFLAGS_32))
KBUILD_CFLAGS_32 := $(filter-out $(GCC_PLUGINS_CFLAGS),$(KBUILD_CFLAGS_32))
KBUILD_CFLAGS_32 := $(filter-out $(RETPOLINE_CFLAGS),$(KBUILD_CFLAGS_32))
+KBUILD_CFLAGS_32 := $(filter-out $(CC_FLAGS_LTO),$(KBUILD_CFLAGS_32))
KBUILD_CFLAGS_32 += -m32 -msoft-float -mregparm=0 -fpic
KBUILD_CFLAGS_32 += $(call cc-option, -fno-stack-protector)
KBUILD_CFLAGS_32 += $(call cc-option, -foptimize-sibling-calls)
--
2.27.0.212.ge8ba1cc988-goog

2020-06-24 20:35:15

by Sami Tolvanen

Subject: [PATCH 16/22] arm64: export CC_USING_PATCHABLE_FUNCTION_ENTRY

Since arm64 does not use -pg in CC_FLAGS_FTRACE with
DYNAMIC_FTRACE_WITH_REGS, skip running recordmcount by
exporting CC_USING_PATCHABLE_FUNCTION_ENTRY.

Signed-off-by: Sami Tolvanen <[email protected]>
---
arch/arm64/Makefile | 1 +
1 file changed, 1 insertion(+)

diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile
index a0d94d063fa8..fc6c20a10291 100644
--- a/arch/arm64/Makefile
+++ b/arch/arm64/Makefile
@@ -115,6 +115,7 @@ endif
ifeq ($(CONFIG_DYNAMIC_FTRACE_WITH_REGS),y)
KBUILD_CPPFLAGS += -DCC_USING_PATCHABLE_FUNCTION_ENTRY
CC_FLAGS_FTRACE := -fpatchable-function-entry=2
+ export CC_USING_PATCHABLE_FUNCTION_ENTRY := 1
endif

# Default value
--
2.27.0.212.ge8ba1cc988-goog

2020-06-24 20:35:21

by Sami Tolvanen

Subject: [PATCH 10/22] init: lto: fix PREL32 relocations

With LTO, the compiler can rename static functions to avoid global
naming collisions. As initcall functions are typically static,
renaming can break references to them in inline assembly. This
change adds a global stub with a stable name for each initcall to
fix the issue when PREL32 relocations are used.
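
As a sketch (symbol names are illustrative), for a device initcall
foo_init the generated stub essentially expands to:

  int __init __initstub__foo__42_315_foo_init4(void)
  {
          return foo_init();
  }

so the PREL32 relocation references the stub's stable name even if
the compiler renames foo_init itself.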

Signed-off-by: Sami Tolvanen <[email protected]>
---
include/linux/init.h | 30 ++++++++++++++++++++++++++----
1 file changed, 26 insertions(+), 4 deletions(-)

diff --git a/include/linux/init.h b/include/linux/init.h
index af638cd6dd52..5b4bdc5a8399 100644
--- a/include/linux/init.h
+++ b/include/linux/init.h
@@ -209,26 +209,48 @@ extern bool initcall_debug;
*/
#define __initcall_section(__sec, __iid) \
#__sec ".init.." #__iid
+
+/*
+ * With LTO, the compiler can rename static functions to avoid
+ * global naming collisions. We use a global stub function for
+ * initcalls to create a stable symbol name whose address can be
+ * taken in inline assembly when PREL32 relocations are used.
+ */
+#define __initcall_stub(fn, __iid, id) \
+ __initcall_name(initstub, __iid, id)
+
+#define __define_initcall_stub(__stub, fn) \
+ int __init __stub(void) \
+ { \
+ return fn(); \
+ } \
+ __ADDRESSABLE(__stub)
#else
#define __initcall_section(__sec, __iid) \
#__sec ".init"
+
+#define __initcall_stub(fn, __iid, id) fn
+
+#define __define_initcall_stub(__stub, fn) \
+ __ADDRESSABLE(fn)
#endif

#ifdef CONFIG_HAVE_ARCH_PREL32_RELOCATIONS
-#define ____define_initcall(fn, __name, __sec) \
- __ADDRESSABLE(fn) \
+#define ____define_initcall(fn, __stub, __name, __sec) \
+ __define_initcall_stub(__stub, fn) \
asm(".section \"" __sec "\", \"a\" \n" \
__stringify(__name) ": \n" \
- ".long " #fn " - . \n" \
+ ".long " __stringify(__stub) " - . \n" \
".previous \n");
#else
-#define ____define_initcall(fn, __name, __sec) \
+#define ____define_initcall(fn, __unused, __name, __sec) \
static initcall_t __name __used \
__attribute__((__section__(__sec))) = fn;
#endif

#define __unique_initcall(fn, id, __sec, __iid) \
____define_initcall(fn, \
+ __initcall_stub(fn, __iid, id), \
__initcall_name(initcall, __iid, id), \
__initcall_section(__sec, __iid))

--
2.27.0.212.ge8ba1cc988-goog

2020-06-24 20:35:23

by Sami Tolvanen

Subject: [PATCH 15/22] drivers/misc/lkdtm: disable LTO for rodata.o

Disable LTO for rodata.o to allow objcopy to be used to
manipulate sections.

Signed-off-by: Sami Tolvanen <[email protected]>
Acked-by: Kees Cook <[email protected]>
---
drivers/misc/lkdtm/Makefile | 1 +
1 file changed, 1 insertion(+)

diff --git a/drivers/misc/lkdtm/Makefile b/drivers/misc/lkdtm/Makefile
index c70b3822013f..dd4c936d4d73 100644
--- a/drivers/misc/lkdtm/Makefile
+++ b/drivers/misc/lkdtm/Makefile
@@ -13,6 +13,7 @@ lkdtm-$(CONFIG_LKDTM) += cfi.o

KASAN_SANITIZE_stackleak.o := n
KCOV_INSTRUMENT_rodata.o := n
+CFLAGS_REMOVE_rodata.o += $(CC_FLAGS_LTO)

OBJCOPYFLAGS :=
OBJCOPYFLAGS_rodata_objcopy.o := \
--
2.27.0.212.ge8ba1cc988-goog

2020-06-24 20:35:40

by Sami Tolvanen

Subject: [PATCH 11/22] pci: lto: fix PREL32 relocations

With LTO, the compiler can rename static functions to avoid global
naming collisions. As PCI fixup functions are typically static,
renaming can break references to them in inline assembly. This
change adds a global stub to DECLARE_PCI_FIXUP_SECTION to fix the
issue when PREL32 relocations are used.
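
As a sketch (the stub name comes from __UNIQUE_ID, so the counter
suffix is illustrative), for a fixup function quirk_foo the macro now
emits roughly:

  void __UNIQUE_ID_quirk_foo7(struct pci_dev *dev)
  {
          quirk_foo(dev);
  }

and the PREL32 relocation in the fixup section points at this global
stub instead of the possibly renamed static function.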

Signed-off-by: Sami Tolvanen <[email protected]>
---
include/linux/pci.h | 15 ++++++++++-----
1 file changed, 10 insertions(+), 5 deletions(-)

diff --git a/include/linux/pci.h b/include/linux/pci.h
index c79d83304e52..1e65e16f165a 100644
--- a/include/linux/pci.h
+++ b/include/linux/pci.h
@@ -1909,19 +1909,24 @@ enum pci_fixup_pass {
};

#ifdef CONFIG_HAVE_ARCH_PREL32_RELOCATIONS
-#define __DECLARE_PCI_FIXUP_SECTION(sec, name, vendor, device, class, \
- class_shift, hook) \
- __ADDRESSABLE(hook) \
+#define ___DECLARE_PCI_FIXUP_SECTION(sec, name, vendor, device, class, \
+ class_shift, hook, stub) \
+ void stub(struct pci_dev *dev) { hook(dev); } \
asm(".section " #sec ", \"a\" \n" \
".balign 16 \n" \
".short " #vendor ", " #device " \n" \
".long " #class ", " #class_shift " \n" \
- ".long " #hook " - . \n" \
+ ".long " #stub " - . \n" \
".previous \n");
+
+#define __DECLARE_PCI_FIXUP_SECTION(sec, name, vendor, device, class, \
+ class_shift, hook, stub) \
+ ___DECLARE_PCI_FIXUP_SECTION(sec, name, vendor, device, class, \
+ class_shift, hook, stub)
#define DECLARE_PCI_FIXUP_SECTION(sec, name, vendor, device, class, \
class_shift, hook) \
__DECLARE_PCI_FIXUP_SECTION(sec, name, vendor, device, class, \
- class_shift, hook)
+ class_shift, hook, __UNIQUE_ID(hook))
#else
/* Anonymous variables would be nice... */
#define DECLARE_PCI_FIXUP_SECTION(section, name, vendor, device, class, \
--
2.27.0.212.ge8ba1cc988-goog

2020-06-24 20:36:24

by Sami Tolvanen

Subject: [PATCH 07/22] kbuild: lto: merge module sections

LLD always splits sections with LTO, which increases module sizes. This
change adds a linker script that merges the split sections in the final
module and discards the .eh_frame section that LLD may generate.

Suggested-by: Nick Desaulniers <[email protected]>
Signed-off-by: Sami Tolvanen <[email protected]>
---
Makefile | 2 ++
scripts/module-lto.lds | 26 ++++++++++++++++++++++++++
2 files changed, 28 insertions(+)
create mode 100644 scripts/module-lto.lds

diff --git a/Makefile b/Makefile
index ee66513a5b66..9ffec5fe1737 100644
--- a/Makefile
+++ b/Makefile
@@ -898,6 +898,8 @@ CC_FLAGS_LTO_CLANG += -fvisibility=default
# Limit inlining across translation units to reduce binary size
LD_FLAGS_LTO_CLANG := -mllvm -import-instr-limit=5
KBUILD_LDFLAGS += $(LD_FLAGS_LTO_CLANG)
+
+KBUILD_LDS_MODULE += $(srctree)/scripts/module-lto.lds
endif

ifdef CONFIG_LTO
diff --git a/scripts/module-lto.lds b/scripts/module-lto.lds
new file mode 100644
index 000000000000..65884c652bf2
--- /dev/null
+++ b/scripts/module-lto.lds
@@ -0,0 +1,26 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * With CONFIG_LTO_CLANG, LLD always enables -fdata-sections and
+ * -ffunction-sections, which increases the size of the final module.
+ * Merge the split sections in the final binary.
+ */
+SECTIONS {
+ __patchable_function_entries : { *(__patchable_function_entries) }
+
+ .bss : {
+ *(.bss .bss.[0-9a-zA-Z_]*)
+ *(.bss..L* .bss..compoundliteral*)
+ }
+
+ .data : {
+ *(.data .data.[0-9a-zA-Z_]*)
+ *(.data..L* .data..compoundliteral*)
+ }
+
+ .rodata : {
+ *(.rodata .rodata.[0-9a-zA-Z_]*)
+ *(.rodata..L* .rodata..compoundliteral*)
+ }
+
+ .text : { *(.text .text.[0-9a-zA-Z_]*) }
+}
--
2.27.0.212.ge8ba1cc988-goog

2020-06-24 20:36:31

by Sami Tolvanen

Subject: [PATCH 05/22] kbuild: lto: postpone objtool

With LTO, LLVM bitcode won't be compiled into native code until
modpost_link, or modfinal for modules. This change postpones calls
to objtool until after these steps.
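
As a sketch (path and flags abridged; the exact options depend on the
kernel configuration), the postponed objtool run on a module amounts
to something like:

  $ tools/objtool/objtool orc generate --module foo.lto.o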

Signed-off-by: Sami Tolvanen <[email protected]>
---
include/linux/compiler.h | 2 +-
lib/Kconfig.debug | 2 +-
scripts/Makefile.build | 2 ++
scripts/Makefile.modfinal | 15 +++++++++++++++
4 files changed, 19 insertions(+), 2 deletions(-)

diff --git a/include/linux/compiler.h b/include/linux/compiler.h
index 30827f82ad62..12b115152532 100644
--- a/include/linux/compiler.h
+++ b/include/linux/compiler.h
@@ -120,7 +120,7 @@ void ftrace_likely_update(struct ftrace_likely_data *f, int val,
/* Annotate a C jump table to allow objtool to follow the code flow */
#define __annotate_jump_table __section(.rodata..c_jump_table)

-#ifdef CONFIG_DEBUG_ENTRY
+#if defined(CONFIG_DEBUG_ENTRY) || defined(CONFIG_LTO_CLANG)
/* Begin/end of an instrumentation safe region */
#define instrumentation_begin() ({ \
asm volatile("%c0:\n\t" \
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index 9ad9210d70a1..9fdba71c135a 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -399,7 +399,7 @@ config STACK_VALIDATION

config VMLINUX_VALIDATION
bool
- depends on STACK_VALIDATION && DEBUG_ENTRY && !PARAVIRT
+ depends on STACK_VALIDATION && (DEBUG_ENTRY || LTO_CLANG) && !PARAVIRT
default y

config DEBUG_FORCE_WEAK_PER_CPU
diff --git a/scripts/Makefile.build b/scripts/Makefile.build
index 64e99f4baa5b..82977350f5a6 100644
--- a/scripts/Makefile.build
+++ b/scripts/Makefile.build
@@ -226,6 +226,7 @@ endif # CC_USING_PATCHABLE_FUNCTION_ENTRY
endif # CONFIG_FTRACE_MCOUNT_RECORD

ifdef CONFIG_STACK_VALIDATION
+ifndef CONFIG_LTO_CLANG
ifneq ($(SKIP_STACK_VALIDATION),1)

__objtool_obj := $(objtree)/tools/objtool/objtool
@@ -258,6 +259,7 @@ objtool_obj = $(if $(patsubst y%,, \
$(__objtool_obj))

endif # SKIP_STACK_VALIDATION
+endif # CONFIG_LTO_CLANG
endif # CONFIG_STACK_VALIDATION

# Rebuild all objects when objtool changes, or is enabled/disabled.
diff --git a/scripts/Makefile.modfinal b/scripts/Makefile.modfinal
index d168f0cfe67c..9f1df2f1fab5 100644
--- a/scripts/Makefile.modfinal
+++ b/scripts/Makefile.modfinal
@@ -48,6 +48,21 @@ endif # CC_USING_PATCHABLE_FUNCTION_ENTRY
endif # CC_USING_RECORD_MCOUNT
endif # CONFIG_FTRACE_MCOUNT_RECORD

+ifdef CONFIG_STACK_VALIDATION
+ifneq ($(SKIP_STACK_VALIDATION),1)
+cmd_ld_ko_o += \
+ $(objtree)/tools/objtool/objtool \
+ $(if $(CONFIG_UNWINDER_ORC),orc generate,check) \
+ --module \
+ $(if $(CONFIG_FRAME_POINTER),,--no-fp) \
+ $(if $(CONFIG_GCOV_KERNEL),--no-unreachable,) \
+ $(if $(CONFIG_RETPOLINE),--retpoline,) \
+ $(if $(CONFIG_X86_SMAP),--uaccess,) \
+ $(@:.ko=$(prelink-ext).o);
+
+endif # SKIP_STACK_VALIDATION
+endif # CONFIG_STACK_VALIDATION
+
endif # CONFIG_LTO_CLANG

quiet_cmd_ld_ko_o = LD [M] $@
--
2.27.0.212.ge8ba1cc988-goog

2020-06-24 20:36:43

by Sami Tolvanen

Subject: [PATCH 21/22] x86, relocs: Ignore L4_PAGE_OFFSET relocations

L4_PAGE_OFFSET is a constant value, so don't warn about absolute
relocations.

Signed-off-by: Sami Tolvanen <[email protected]>
---
arch/x86/tools/relocs.c | 1 +
1 file changed, 1 insertion(+)

diff --git a/arch/x86/tools/relocs.c b/arch/x86/tools/relocs.c
index ce7188cbdae5..8f3bf34840ce 100644
--- a/arch/x86/tools/relocs.c
+++ b/arch/x86/tools/relocs.c
@@ -47,6 +47,7 @@ static const char * const sym_regex_kernel[S_NSYMTYPES] = {
[S_ABS] =
"^(xen_irq_disable_direct_reloc$|"
"xen_save_fl_direct_reloc$|"
+ "L4_PAGE_OFFSET|"
"VDSO|"
"__crc_)",

--
2.27.0.212.ge8ba1cc988-goog

2020-06-24 20:37:03

by Sami Tolvanen

Subject: [PATCH 08/22] kbuild: lto: remove duplicate dependencies from .mod files

With LTO, llvm-nm prints out symbols for each archive member
separately, which results in a lot of duplicate dependencies in the
.mod file when CONFIG_TRIM_UNUSED_KSYMS is enabled. When a module
consists of several compilation units, the output can exceed the
default xargs command size limit and split the dependency list into
multiple lines, which results in used symbols getting trimmed.

This change removes duplicate dependencies, which will reduce the
probability of this happening and makes .mod files smaller and
easier to read.
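
For instance (the module path is hypothetical), with LTO a
multi-object module's .o file is an archive, and the dependency list
is now generated with:

  $ llvm-nm drivers/foo/foo.o | sed -n 's/^ *U //p' | sort -u

Without the sort -u, undefined symbols such as printk may appear once
per archive member in the output.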

Signed-off-by: Sami Tolvanen <[email protected]>
---
scripts/Makefile.build | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/scripts/Makefile.build b/scripts/Makefile.build
index 82977350f5a6..82b465ce3ca0 100644
--- a/scripts/Makefile.build
+++ b/scripts/Makefile.build
@@ -291,7 +291,7 @@ endef

# List module undefined symbols (or empty line if not enabled)
ifdef CONFIG_TRIM_UNUSED_KSYMS
-cmd_undef_syms = $(NM) $< | sed -n 's/^ *U //p' | xargs echo
+cmd_undef_syms = $(NM) $< | sed -n 's/^ *U //p' | sort -u | xargs echo
else
cmd_undef_syms = echo
endif
--
2.27.0.212.ge8ba1cc988-goog

2020-06-24 20:37:25

by Sami Tolvanen

Subject: [PATCH 12/22] modpost: lto: strip .lto from module names

With LTO, everything is compiled into LLVM bitcode, so we have to link
each module into native code before modpost. Kbuild uses the .lto.o
suffix for these files, which also ends up in module information. This
change strips the unnecessary .lto suffix from the module name.
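
For example, a module prelinked as bar.lto.o must still be named
"bar", and its source version must be computed from bar.mod rather
than a nonexistent bar.lto.mod; the modpost.c and sumversion.c
changes below handle these two cases.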

Suggested-by: Bill Wendling <[email protected]>
Signed-off-by: Sami Tolvanen <[email protected]>
---
scripts/mod/modpost.c | 16 +++++++---------
scripts/mod/modpost.h | 9 +++++++++
scripts/mod/sumversion.c | 6 +++++-
3 files changed, 21 insertions(+), 10 deletions(-)

diff --git a/scripts/mod/modpost.c b/scripts/mod/modpost.c
index 6aea65c65745..8352f8a1a138 100644
--- a/scripts/mod/modpost.c
+++ b/scripts/mod/modpost.c
@@ -17,7 +17,6 @@
#include <ctype.h>
#include <string.h>
#include <limits.h>
-#include <stdbool.h>
#include <errno.h>
#include "modpost.h"
#include "../../include/linux/license.h"
@@ -80,14 +79,6 @@ modpost_log(enum loglevel loglevel, const char *fmt, ...)
exit(1);
}

-static inline bool strends(const char *str, const char *postfix)
-{
- if (strlen(str) < strlen(postfix))
- return false;
-
- return strcmp(str + strlen(str) - strlen(postfix), postfix) == 0;
-}
-
void *do_nofail(void *ptr, const char *expr)
{
if (!ptr)
@@ -1975,6 +1966,10 @@ static char *remove_dot(char *s)
size_t m = strspn(s + n + 1, "0123456789");
if (m && (s[n + m] == '.' || s[n + m] == 0))
s[n] = 0;
+
+ /* strip trailing .lto */
+ if (strends(s, ".lto"))
+ s[strlen(s) - 4] = '\0';
}
return s;
}
@@ -1998,6 +1993,9 @@ static void read_symbols(const char *modname)
/* strip trailing .o */
tmp = NOFAIL(strdup(modname));
tmp[strlen(tmp) - 2] = '\0';
+ /* strip trailing .lto */
+ if (strends(tmp, ".lto"))
+ tmp[strlen(tmp) - 4] = '\0';
mod = new_module(tmp);
free(tmp);
}
diff --git a/scripts/mod/modpost.h b/scripts/mod/modpost.h
index 3aa052722233..fab30d201f9e 100644
--- a/scripts/mod/modpost.h
+++ b/scripts/mod/modpost.h
@@ -2,6 +2,7 @@
#include <stdio.h>
#include <stdlib.h>
#include <stdarg.h>
+#include <stdbool.h>
#include <string.h>
#include <sys/types.h>
#include <sys/stat.h>
@@ -180,6 +181,14 @@ static inline unsigned int get_secindex(const struct elf_info *info,
return info->symtab_shndx_start[sym - info->symtab_start];
}

+static inline bool strends(const char *str, const char *postfix)
+{
+ if (strlen(str) < strlen(postfix))
+ return false;
+
+ return strcmp(str + strlen(str) - strlen(postfix), postfix) == 0;
+}
+
/* file2alias.c */
extern unsigned int cross_build;
void handle_moddevtable(struct module *mod, struct elf_info *info,
diff --git a/scripts/mod/sumversion.c b/scripts/mod/sumversion.c
index d587f40f1117..760e6baa7eda 100644
--- a/scripts/mod/sumversion.c
+++ b/scripts/mod/sumversion.c
@@ -391,10 +391,14 @@ void get_src_version(const char *modname, char sum[], unsigned sumlen)
struct md4_ctx md;
char *fname;
char filelist[PATH_MAX + 1];
+ int postfix_len = 1;
+
+ if (strends(modname, ".lto.o"))
+ postfix_len = 5;

/* objects for a module are listed in the first line of *.mod file. */
snprintf(filelist, sizeof(filelist), "%.*smod",
- (int)strlen(modname) - 1, modname);
+ (int)strlen(modname) - postfix_len, modname);

buf = read_text_file(filelist);

--
2.27.0.212.ge8ba1cc988-goog

2020-06-24 20:38:06

by Sami Tolvanen

Subject: [PATCH 09/22] init: lto: ensure initcall ordering

With LTO, the compiler doesn't necessarily obey the link order for
initcalls, and initcall variables need globally unique names to avoid
collisions at link time.

This change exports __KBUILD_MODNAME and adds the __initcall_id()
macro, which uses it together with __COUNTER__ and __LINE__ to give
each initcall variable a unique name. When LTO is enabled, each
variable is also placed in its own section, so the correct order can
be specified using a linker script.

The generate_initcall_order.pl script uses nm to find initcalls in
the object files passed to the linker, and generates a linker script
that specifies the intended order. With LTO, the script is called in
link-vmlinux.sh.
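
For example (names illustrative), a device initcall foo_init in
module foo produces a symbol such as

  __initcall__foo__42_315_foo_init4

in section .initcall4.init..foo__42_315_foo_init, and the generated
linker script orders the sections explicitly:

  .initcall4.init : {
          *(.initcall4.init..foo__42_315_foo_init) ;
  }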

Signed-off-by: Sami Tolvanen <[email protected]>
---
include/linux/init.h | 52 +++++-
scripts/Makefile.lib | 6 +-
scripts/generate_initcall_order.pl | 270 +++++++++++++++++++++++++++++
scripts/link-vmlinux.sh | 14 ++
4 files changed, 333 insertions(+), 9 deletions(-)
create mode 100755 scripts/generate_initcall_order.pl

diff --git a/include/linux/init.h b/include/linux/init.h
index 212fc9e2f691..af638cd6dd52 100644
--- a/include/linux/init.h
+++ b/include/linux/init.h
@@ -184,19 +184,57 @@ extern bool initcall_debug;
* as KEEP() in the linker script.
*/

+/* Format: <modname>__<counter>_<line>_<fn> */
+#define __initcall_id(fn) \
+ __PASTE(__KBUILD_MODNAME, \
+ __PASTE(__, \
+ __PASTE(__COUNTER__, \
+ __PASTE(_, \
+ __PASTE(__LINE__, \
+ __PASTE(_, fn))))))
+
+/* Format: __<prefix>__<iid><id> */
+#define __initcall_name(prefix, __iid, id) \
+ __PASTE(__, \
+ __PASTE(prefix, \
+ __PASTE(__, \
+ __PASTE(__iid, id))))
+
+#ifdef CONFIG_LTO_CLANG
+/*
+ * With LTO, the compiler doesn't necessarily obey link order for
+ * initcalls. In order to preserve the correct order, we add each
+ * variable into its own section and generate a linker script (in
+ * scripts/link-vmlinux.sh) to specify the order of the sections.
+ */
+#define __initcall_section(__sec, __iid) \
+ #__sec ".init.." #__iid
+#else
+#define __initcall_section(__sec, __iid) \
+ #__sec ".init"
+#endif
+
#ifdef CONFIG_HAVE_ARCH_PREL32_RELOCATIONS
-#define ___define_initcall(fn, id, __sec) \
+#define ____define_initcall(fn, __name, __sec) \
__ADDRESSABLE(fn) \
- asm(".section \"" #__sec ".init\", \"a\" \n" \
- "__initcall_" #fn #id ": \n" \
+ asm(".section \"" __sec "\", \"a\" \n" \
+ __stringify(__name) ": \n" \
".long " #fn " - . \n" \
".previous \n");
#else
-#define ___define_initcall(fn, id, __sec) \
- static initcall_t __initcall_##fn##id __used \
- __attribute__((__section__(#__sec ".init"))) = fn;
+#define ____define_initcall(fn, __name, __sec) \
+ static initcall_t __name __used \
+ __attribute__((__section__(__sec))) = fn;
#endif

+#define __unique_initcall(fn, id, __sec, __iid) \
+ ____define_initcall(fn, \
+ __initcall_name(initcall, __iid, id), \
+ __initcall_section(__sec, __iid))
+
+#define ___define_initcall(fn, id, __sec) \
+ __unique_initcall(fn, id, __sec, __initcall_id(fn))
+
#define __define_initcall(fn, id) ___define_initcall(fn, id, .initcall##id)

/*
@@ -236,7 +274,7 @@ extern bool initcall_debug;
#define __exitcall(fn) \
static exitcall_t __exitcall_##fn __exit_call = fn

-#define console_initcall(fn) ___define_initcall(fn,, .con_initcall)
+#define console_initcall(fn) ___define_initcall(fn, con, .con_initcall)

struct obs_kernel_param {
const char *str;
diff --git a/scripts/Makefile.lib b/scripts/Makefile.lib
index 99ac59c59826..17447354b543 100644
--- a/scripts/Makefile.lib
+++ b/scripts/Makefile.lib
@@ -106,9 +106,11 @@ target-stem = $(basename $(patsubst $(obj)/%,%,$@))
# These flags are needed for modversions and compiling, so we define them here
# $(modname_flags) defines KBUILD_MODNAME as the name of the module it will
# end up in (or would, if it gets compiled in)
-name-fix = $(call stringify,$(subst $(comma),_,$(subst -,_,$1)))
+name-fix-token = $(subst $(comma),_,$(subst -,_,$1))
+name-fix = $(call stringify,$(call name-fix-token,$1))
basename_flags = -DKBUILD_BASENAME=$(call name-fix,$(basetarget))
-modname_flags = -DKBUILD_MODNAME=$(call name-fix,$(modname))
+modname_flags = -DKBUILD_MODNAME=$(call name-fix,$(modname)) \
+ -D__KBUILD_MODNAME=$(call name-fix-token,$(modname))
modfile_flags = -DKBUILD_MODFILE=$(call stringify,$(modfile))

orig_c_flags = $(KBUILD_CPPFLAGS) $(KBUILD_CFLAGS) \
diff --git a/scripts/generate_initcall_order.pl b/scripts/generate_initcall_order.pl
new file mode 100755
index 000000000000..fe83aec2b51e
--- /dev/null
+++ b/scripts/generate_initcall_order.pl
@@ -0,0 +1,270 @@
+#!/usr/bin/env perl
+# SPDX-License-Identifier: GPL-2.0
+#
+# Generates a linker script that specifies the correct initcall order.
+#
+# Copyright (C) 2019 Google LLC
+
+use strict;
+use warnings;
+use IO::Handle;
+use IO::Select;
+use POSIX ":sys_wait_h";
+
+my $nm = $ENV{'NM'} || die "$0: ERROR: NM not set?";
+my $objtree = $ENV{'objtree'} || '.';
+
+## currently active child processes
+my $jobs = {}; # child process pid -> file handle
+## results from child processes
+my $results = {}; # object index -> [ { level, secname }, ... ]
+
+## reads _NPROCESSORS_ONLN to determine the maximum number of processes to
+## start
+sub get_online_processors {
+ open(my $fh, "getconf _NPROCESSORS_ONLN 2>/dev/null |")
+ or die "$0: ERROR: failed to execute getconf: $!";
+ my $procs = <$fh>;
+ close($fh);
+
+ if (!($procs =~ /^\d+$/)) {
+ return 1;
+ }
+
+ return int($procs);
+}
+
+## writes results to the parent process
+## format: <file index> <initcall level> <base initcall section name>
+sub write_results {
+ my ($index, $initcalls) = @_;
+
+ # sort by the counter value to ensure the order of initcalls within
+ # each object file is correct
+ foreach my $counter (sort { $a <=> $b } keys(%{$initcalls})) {
+ my $level = $initcalls->{$counter}->{'level'};
+
+ # section name for the initcall function
+ my $secname = $initcalls->{$counter}->{'module'} . '__' .
+ $counter . '_' .
+ $initcalls->{$counter}->{'line'} . '_' .
+ $initcalls->{$counter}->{'function'};
+
+ print "$index $level $secname\n";
+ }
+}
+
+## reads a result line from a child process and adds it to the $results array
+sub read_results {
+ my ($fh) = @_;
+
+ # each child prints out a full line w/ autoflush and exits after the
+ # last line, so even if buffered I/O blocks here, it shouldn't block
+ # very long
+ my $data = <$fh>;
+
+ if (!defined($data)) {
+ return 0;
+ }
+
+ chomp($data);
+
+ my ($index, $level, $secname) = $data =~
+ /^(\d+)\ ([^\ ]+)\ (.*)$/;
+
+ if (!defined($index) ||
+ !defined($level) ||
+ !defined($secname)) {
+ die "$0: ERROR: child process returned invalid data: $data\n";
+ }
+
+ $index = int($index);
+
+ if (!exists($results->{$index})) {
+ $results->{$index} = [];
+ }
+
+ push (@{$results->{$index}}, {
+ 'level' => $level,
+ 'secname' => $secname
+ });
+
+ return 1;
+}
+
+## finds initcalls from an object file or all object files in an archive, and
+## writes results back to the parent process
+sub find_initcalls {
+ my ($index, $file) = @_;
+
+ die "$0: ERROR: file $file doesn't exist?" if (! -f $file);
+
+ open(my $fh, "\"$nm\" --defined-only \"$file\" 2>/dev/null |")
+ or die "$0: ERROR: failed to execute \"$nm\": $!";
+
+ my $initcalls = {};
+
+ while (<$fh>) {
+ chomp;
+
+ # check for the start of a new object file (if processing an
+ # archive)
+ my ($path)= $_ =~ /^(.+)\:$/;
+
+ if (defined($path)) {
+ write_results($index, $initcalls);
+ $initcalls = {};
+ next;
+ }
+
+ # look for an initcall
+ my ($module, $counter, $line, $symbol) = $_ =~
+ /[a-z]\s+__initcall__(\S*)__(\d+)_(\d+)_(.*)$/;
+
+ if (!defined($module)) {
+ $module = ''
+ }
+
+ if (!defined($counter) ||
+ !defined($line) ||
+ !defined($symbol)) {
+ next;
+ }
+
+ # parse initcall level
+ my ($function, $level) = $symbol =~
+ /^(.*)((early|rootfs|con|[0-9])s?)$/;
+
+ die "$0: ERROR: invalid initcall name $symbol in $file($path)"
+ if (!defined($function) || !defined($level));
+
+ $initcalls->{$counter} = {
+ 'module' => $module,
+ 'line' => $line,
+ 'function' => $function,
+ 'level' => $level,
+ };
+ }
+
+ close($fh);
+ write_results($index, $initcalls);
+}
+
+## waits for any child process to complete, reads the results, and adds them to
+## the $results array for later processing
+sub wait_for_results {
+ my ($select) = @_;
+
+ my $pid = 0;
+ do {
+ # unblock children that may have a full write buffer
+ foreach my $fh ($select->can_read(0)) {
+ read_results($fh);
+ }
+
+ # check for children that have exited, read the remaining data
+ # from them, and clean up
+ $pid = waitpid(-1, WNOHANG);
+ if ($pid > 0) {
+ if (!exists($jobs->{$pid})) {
+ next;
+ }
+
+ my $fh = $jobs->{$pid};
+ $select->remove($fh);
+
+ while (read_results($fh)) {
+ # until eof
+ }
+
+ close($fh);
+ delete($jobs->{$pid});
+ }
+ } while ($pid > 0);
+}
+
+## forks a child to process each file passed in the command line and collects
+## the results
+sub process_files {
+ my $index = 0;
+ my $njobs = get_online_processors();
+ my $select = IO::Select->new();
+
+ while (my $file = shift(@ARGV)) {
+ # fork a child process and read its stdout
+ my $pid = open(my $fh, '-|');
+
+ if (!defined($pid)) {
+ die "$0: ERROR: failed to fork: $!";
+ } elsif ($pid) {
+ # save the child process pid and the file handle
+ $select->add($fh);
+ $jobs->{$pid} = $fh;
+ } else {
+ # in the child process
+ STDOUT->autoflush(1);
+ find_initcalls($index, "$objtree/$file");
+ exit;
+ }
+
+ $index++;
+
+ # limit the number of children to $njobs
+ if (scalar(keys(%{$jobs})) >= $njobs) {
+ wait_for_results($select);
+ }
+ }
+
+ # wait for the remaining children to complete
+ while (scalar(keys(%{$jobs})) > 0) {
+ wait_for_results($select);
+ }
+}
+
+sub generate_initcall_lds() {
+ process_files();
+
+ my $sections = {}; # level -> [ secname, ...]
+
+ # sort results to retain link order and split to sections per
+ # initcall level
+ foreach my $index (sort { $a <=> $b } keys(%{$results})) {
+ foreach my $result (@{$results->{$index}}) {
+ my $level = $result->{'level'};
+
+ if (!exists($sections->{$level})) {
+ $sections->{$level} = [];
+ }
+
+ push(@{$sections->{$level}}, $result->{'secname'});
+ }
+ }
+
+ die "$0: ERROR: no initcalls?" if (!keys(%{$sections}));
+
+ # print out a linker script that defines the order of initcalls for
+ # each level
+ print "SECTIONS {\n";
+
+ foreach my $level (sort(keys(%{$sections}))) {
+ my $section;
+
+ if ($level eq 'con') {
+ $section = '.con_initcall.init';
+ } else {
+ $section = ".initcall${level}.init";
+ }
+
+ print "\t${section} : {\n";
+
+ foreach my $secname (@{$sections->{$level}}) {
+ print "\t\t*(${section}..${secname}) ;\n";
+ }
+
+ print "\t}\n";
+ }
+
+ print "}\n";
+}
+
+generate_initcall_lds();
diff --git a/scripts/link-vmlinux.sh b/scripts/link-vmlinux.sh
index c72f5d0238f1..42c73e24e820 100755
--- a/scripts/link-vmlinux.sh
+++ b/scripts/link-vmlinux.sh
@@ -39,6 +39,16 @@ info()
fi
}

+# Generate a linker script to ensure correct ordering of initcalls.
+gen_initcalls()
+{
+ info GEN .tmp_initcalls.lds
+
+ ${srctree}/scripts/generate_initcall_order.pl \
+ ${KBUILD_VMLINUX_OBJS} ${KBUILD_VMLINUX_LIBS} \
+ > .tmp_initcalls.lds
+}
+
# If CONFIG_LTO_CLANG is selected, collect generated symbol versions into
# .tmp_symversions.lds
gen_symversions()
@@ -70,6 +80,9 @@ modpost_link()
--end-group"

if [ -n "${CONFIG_LTO_CLANG}" ]; then
+ gen_initcalls
+ lds="-T .tmp_initcalls.lds"
+
if [ -n "${CONFIG_MODVERSIONS}" ]; then
gen_symversions
lds="${lds} -T .tmp_symversions.lds"
@@ -283,6 +296,7 @@ cleanup()
{
rm -f .btf.*
rm -f .tmp_System.map
+ rm -f .tmp_initcalls.lds
rm -f .tmp_symversions.lds
rm -f .tmp_vmlinux*
rm -f System.map
--
2.27.0.212.ge8ba1cc988-goog

2020-06-24 20:38:29

by Sami Tolvanen

[permalink] [raw]
Subject: [PATCH 03/22] kbuild: lto: fix module versioning

With CONFIG_MODVERSIONS, version information is linked into each
compilation unit that exports symbols. With LTO, we cannot use this
method as all C code is compiled into LLVM bitcode instead. This
change collects symbol versions into .symversions files and merges
them in link-vmlinux.sh where they are all linked into vmlinux.o at
the same time.
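
As a rough illustration (a sketch; the object name, symbols, and CRC
values below are made up), each .symversions file is a small linker
script fragment with one CRC assignment per exported symbol, which is
why the merged file can simply be passed back to the linker with -T:

$ cat drivers/char/foo.o.symversions
__crc_foo_read = 0x1e4c7a2b;
__crc_foo_write = 0x8d03f961;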

Signed-off-by: Sami Tolvanen <[email protected]>
---
.gitignore | 1 +
Makefile | 3 ++-
arch/Kconfig | 1 -
scripts/Makefile.build | 33 +++++++++++++++++++++++++++++++--
scripts/Makefile.modpost | 2 ++
scripts/link-vmlinux.sh | 25 ++++++++++++++++++++++++-
6 files changed, 60 insertions(+), 5 deletions(-)

diff --git a/.gitignore b/.gitignore
index 87b9dd8a163b..51b02c2f2826 100644
--- a/.gitignore
+++ b/.gitignore
@@ -41,6 +41,7 @@
*.so.dbg
*.su
*.symtypes
+*.symversions
*.tab.[ch]
*.tar
*.xz
diff --git a/Makefile b/Makefile
index 0c7fe6fb2143..161ad0d1f77f 100644
--- a/Makefile
+++ b/Makefile
@@ -1793,7 +1793,8 @@ clean: $(clean-dirs)
-o -name '.tmp_*.o.*' \
-o -name '*.c.[012]*.*' \
-o -name '*.ll' \
- -o -name '*.gcno' \) -type f -print | xargs rm -f
+ -o -name '*.gcno' \
+ -o -name '*.*.symversions' \) -type f -print | xargs rm -f

# Generate tags for editors
# ---------------------------------------------------------------------------
diff --git a/arch/Kconfig b/arch/Kconfig
index e00b122293f8..87488fe1e6b8 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -600,7 +600,6 @@ config LTO_CLANG
depends on ARCH_SUPPORTS_LTO_CLANG
depends on !FTRACE_MCOUNT_RECORD
depends on !KASAN
- depends on !MODVERSIONS
select LTO
help
This option enables Clang's Link Time Optimization (LTO), which
diff --git a/scripts/Makefile.build b/scripts/Makefile.build
index f307e708a1b7..5c0bbb6ddfcf 100644
--- a/scripts/Makefile.build
+++ b/scripts/Makefile.build
@@ -163,6 +163,15 @@ ifdef CONFIG_MODVERSIONS
# the actual value of the checksum generated by genksyms
# o remove .tmp_<file>.o to <file>.o

+ifdef CONFIG_LTO_CLANG
+# Generate .o.symversions files for each .o with exported symbols, and link these
+# to the kernel and/or modules at the end.
+cmd_modversions_c = \
+ if $(NM) $@ 2>/dev/null | grep -q __ksymtab; then \
+ $(call cmd_gensymtypes_c,$(KBUILD_SYMTYPES),$(@:.o=.symtypes)) \
+ > [email protected]; \
+ fi;
+else
cmd_modversions_c = \
if $(OBJDUMP) -h $@ | grep -q __ksymtab; then \
$(call cmd_gensymtypes_c,$(KBUILD_SYMTYPES),$(@:.o=.symtypes)) \
@@ -174,6 +183,7 @@ cmd_modversions_c = \
rm -f $(@D)/.tmp_$(@F:.o=.ver); \
fi
endif
+endif

ifdef CONFIG_FTRACE_MCOUNT_RECORD
ifndef CC_USING_RECORD_MCOUNT
@@ -389,6 +399,18 @@ $(obj)/%.asn1.c $(obj)/%.asn1.h: $(src)/%.asn1 $(objtree)/scripts/asn1_compiler
$(subdir-builtin): $(obj)/%/built-in.a: $(obj)/% ;
$(subdir-modorder): $(obj)/%/modules.order: $(obj)/% ;

+# combine symversions for later processing
+quiet_cmd_update_lto_symversions = SYMVER $@
+ifeq ($(CONFIG_LTO_CLANG) $(CONFIG_MODVERSIONS),y y)
+ cmd_update_lto_symversions = \
+ rm -f [email protected] \
+ $(foreach n, $(filter-out FORCE,$^), \
+ $(if $(wildcard $(n).symversions), \
+ ; cat $(n).symversions >> [email protected]))
+else
+ cmd_update_lto_symversions = echo >/dev/null
+endif
+
#
# Rule to compile a set of .o files into one .a file (without symbol table)
#
@@ -396,8 +418,11 @@ $(subdir-modorder): $(obj)/%/modules.order: $(obj)/% ;
quiet_cmd_ar_builtin = AR $@
cmd_ar_builtin = rm -f $@; $(AR) cDPrST $@ $(real-prereqs)

+quiet_cmd_ar_and_symver = AR $@
+ cmd_ar_and_symver = $(cmd_update_lto_symversions); $(cmd_ar_builtin)
+
$(obj)/built-in.a: $(real-obj-y) FORCE
- $(call if_changed,ar_builtin)
+ $(call if_changed,ar_and_symver)

#
# Rule to create modules.order file
@@ -417,8 +442,11 @@ $(obj)/modules.order: $(obj-m) FORCE
#
# Rule to compile a set of .o files into one .a file (with symbol table)
#
+quiet_cmd_ar_lib = AR $@
+ cmd_ar_lib = $(cmd_update_lto_symversions); $(cmd_ar)
+
$(obj)/lib.a: $(lib-y) FORCE
- $(call if_changed,ar)
+ $(call if_changed,ar_lib)

# NOTE:
# Do not replace $(filter %.o,^) with $(real-prereqs). When a single object
@@ -427,6 +455,7 @@ $(obj)/lib.a: $(lib-y) FORCE
ifdef CONFIG_LTO_CLANG
quiet_cmd_link_multi-m = AR [M] $@
cmd_link_multi-m = \
+ $(cmd_update_lto_symversions); \
rm -f $@; \
$(AR) rcsTP$(KBUILD_ARFLAGS) $@ $(filter %.o,$^)
else
diff --git a/scripts/Makefile.modpost b/scripts/Makefile.modpost
index 9ced8aecd579..42dbdc2bbf73 100644
--- a/scripts/Makefile.modpost
+++ b/scripts/Makefile.modpost
@@ -110,6 +110,8 @@ prelink-ext = .lto
quiet_cmd_cc_lto_link_modules = LTO [M] $@
cmd_cc_lto_link_modules = \
$(LD) $(ld_flags) -r -o $@ \
+ $(shell [ -s $(@:.lto.o=.o.symversions) ] && \
+ echo -T $(@:.lto.o=.o.symversions)) \
--whole-archive $(filter-out FORCE,$^)

%.lto.o: %.o FORCE
diff --git a/scripts/link-vmlinux.sh b/scripts/link-vmlinux.sh
index a681b3b6722e..69a6d7254e28 100755
--- a/scripts/link-vmlinux.sh
+++ b/scripts/link-vmlinux.sh
@@ -39,11 +39,28 @@ info()
fi
}

+# If CONFIG_LTO_CLANG is selected, collect generated symbol versions into
+# .tmp_symversions.lds
+gen_symversions()
+{
+ info GEN .tmp_symversions.lds
+ rm -f .tmp_symversions.lds
+
+ for a in ${KBUILD_VMLINUX_OBJS} ${KBUILD_VMLINUX_LIBS}; do
+ for o in $(${AR} t $a 2>/dev/null); do
+ if [ -f ${o}.symversions ]; then
+ cat ${o}.symversions >> .tmp_symversions.lds
+ fi
+ done
+ done
+}
+
# Link of vmlinux.o used for section mismatch analysis
# ${1} output file
modpost_link()
{
local objects
+ local lds=""

objects="--whole-archive \
${KBUILD_VMLINUX_OBJS} \
@@ -53,6 +70,11 @@ modpost_link()
--end-group"

if [ -n "${CONFIG_LTO_CLANG}" ]; then
+ if [ -n "${CONFIG_MODVERSIONS}" ]; then
+ gen_symversions
+ lds="${lds} -T .tmp_symversions.lds"
+ fi
+
# This might take a while, so indicate that we're doing
# an LTO link
info LTO ${1}
@@ -60,7 +82,7 @@ modpost_link()
info LD ${1}
fi

- ${LD} ${KBUILD_LDFLAGS} -r -o ${1} ${objects}
+ ${LD} ${KBUILD_LDFLAGS} -r -o ${1} ${lds} ${objects}
}

objtool_link()
@@ -238,6 +260,7 @@ cleanup()
{
rm -f .btf.*
rm -f .tmp_System.map
+ rm -f .tmp_symversions.lds
rm -f .tmp_vmlinux*
rm -f System.map
rm -f vmlinux
--
2.27.0.212.ge8ba1cc988-goog

2020-06-24 21:56:45

by Nick Desaulniers

[permalink] [raw]
Subject: Re: [PATCH 13/22] scripts/mod: disable LTO for empty.c

On Wed, Jun 24, 2020 at 1:33 PM Sami Tolvanen <[email protected]> wrote:
>
> With CONFIG_LTO_CLANG, clang generates LLVM IR instead of ELF object
> files. As empty.o is used for probing target properties, disable LTO
> for it to produce an object file instead.
>
> Signed-off-by: Sami Tolvanen <[email protected]>

Reviewed-by: Nick Desaulniers <[email protected]>

> ---
> scripts/mod/Makefile | 1 +
> 1 file changed, 1 insertion(+)
>
> diff --git a/scripts/mod/Makefile b/scripts/mod/Makefile
> index 296b6a3878b2..b6e3b40c6eeb 100644
> --- a/scripts/mod/Makefile
> +++ b/scripts/mod/Makefile
> @@ -1,5 +1,6 @@
> # SPDX-License-Identifier: GPL-2.0
> OBJECT_FILES_NON_STANDARD := y
> +CFLAGS_REMOVE_empty.o += $(CC_FLAGS_LTO)
>
> hostprogs := modpost mk_elfconfig
> always-y := $(hostprogs) empty.o
> --
> 2.27.0.212.ge8ba1cc988-goog
>


--
Thanks,
~Nick Desaulniers

2020-06-24 21:56:46

by Nick Desaulniers

[permalink] [raw]
Subject: Re: [PATCH 02/22] kbuild: add support for Clang LTO

On Wed, Jun 24, 2020 at 1:32 PM Sami Tolvanen <[email protected]> wrote:
>
> This change adds build system support for Clang's Link Time
> Optimization (LTO). With -flto, instead of ELF object files, Clang
> produces LLVM bitcode, which is compiled into native code at link
> time, allowing the final binary to be optimized globally. For more
> details, see:
>
> https://llvm.org/docs/LinkTimeOptimization.html
>
> The Kconfig option CONFIG_LTO_CLANG is implemented as a choice,
> which defaults to LTO being disabled. To use LTO, the architecture
> must select ARCH_SUPPORTS_LTO_CLANG and support:
>
> - compiling with Clang,
> - compiling inline assembly with Clang's integrated assembler,
> - and linking with LLD.
>
> While using full LTO results in the best runtime performance, the
> compilation is not scalable in time or memory. CONFIG_THINLTO
> enables ThinLTO, which allows parallel optimization and faster
> incremental builds. ThinLTO is used by default if the architecture
> also selects ARCH_SUPPORTS_THINLTO:
>
> https://clang.llvm.org/docs/ThinLTO.html
>
> To enable LTO, LLVM tools must be used to handle bitcode files. The
> easiest way is to pass the LLVM=1 option to make:
>
> $ make LLVM=1 defconfig
> $ scripts/config -e LTO_CLANG
> $ make LLVM=1
>
> Alternatively, at least the following LLVM tools must be used:
>
> CC=clang LD=ld.lld AR=llvm-ar NM=llvm-nm
>
> To prepare for LTO support with other compilers, common parts are
> gated behind the CONFIG_LTO option, and LTO can be disabled for
> specific files by filtering out CC_FLAGS_LTO.
>
> Note that support for DYNAMIC_FTRACE and MODVERSIONS is added in
> follow-up patches.
>
> Signed-off-by: Sami Tolvanen <[email protected]>
> ---
> Makefile | 16 ++++++++
> arch/Kconfig | 66 +++++++++++++++++++++++++++++++
> include/asm-generic/vmlinux.lds.h | 11 ++++--
> scripts/Makefile.build | 9 ++++-
> scripts/Makefile.modfinal | 9 ++++-
> scripts/Makefile.modpost | 24 ++++++++++-
> scripts/link-vmlinux.sh | 32 +++++++++++----
> 7 files changed, 151 insertions(+), 16 deletions(-)
>
> diff --git a/Makefile b/Makefile
> index ac2c61c37a73..0c7fe6fb2143 100644
> --- a/Makefile
> +++ b/Makefile
> @@ -886,6 +886,22 @@ KBUILD_CFLAGS += $(CC_FLAGS_SCS)
> export CC_FLAGS_SCS
> endif
>
> +ifdef CONFIG_LTO_CLANG
> +ifdef CONFIG_THINLTO
> +CC_FLAGS_LTO_CLANG := -flto=thin $(call cc-option, -fsplit-lto-unit)

The kconfig change gates this on clang-11; do we still need the
cc-option check here, or can we hardcode the use of -fsplit-lto-unit?
Playing with the flag in godbolt, it looks like clang-8 had support
for this flag.
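
A local probe is also easy (a sketch; it compiles an empty TU, so a
non-zero exit would mean the flag itself was rejected):

$ clang -flto=thin -fsplit-lto-unit -c -x c /dev/null -o /dev/null \
    && echo supported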

> +KBUILD_LDFLAGS += --thinlto-cache-dir=.thinlto-cache

It might be nice to have `make distclean` or even `make clean` scrub
the .thinlto-cache? Also, I verified that the `.gitignore` rule for
`.*` properly ignores this dir.
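
For the record, `git check-ignore -v` shows the matching rule (a
sketch; the line number in the output is illustrative):

$ git check-ignore -v .thinlto-cache
.gitignore:1:.*	.thinlto-cache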

> +else
> +CC_FLAGS_LTO_CLANG := -flto
> +endif
> +CC_FLAGS_LTO_CLANG += -fvisibility=default
> +endif
> +
> +ifdef CONFIG_LTO
> +CC_FLAGS_LTO := $(CC_FLAGS_LTO_CLANG)
> +KBUILD_CFLAGS += $(CC_FLAGS_LTO)
> +export CC_FLAGS_LTO
> +endif
> +
> # arch Makefile may override CC so keep this after arch Makefile is included
> NOSTDINC_FLAGS += -nostdinc -isystem $(shell $(CC) -print-file-name=include)
>
> diff --git a/arch/Kconfig b/arch/Kconfig
> index 8cc35dc556c7..e00b122293f8 100644
> --- a/arch/Kconfig
> +++ b/arch/Kconfig
> @@ -552,6 +552,72 @@ config SHADOW_CALL_STACK
> reading and writing arbitrary memory may be able to locate them
> and hijack control flow by modifying the stacks.
>
> +config LTO
> + bool
> +
> +config ARCH_SUPPORTS_LTO_CLANG
> + bool
> + help
> + An architecture should select this option if it supports:
> + - compiling with Clang,
> + - compiling inline assembly with Clang's integrated assembler,
> + - and linking with LLD.
> +
> +config ARCH_SUPPORTS_THINLTO
> + bool
> + help
> + An architecture should select this option if it supports Clang's
> + ThinLTO.
> +
> +config THINLTO
> + bool "Clang ThinLTO"
> + depends on LTO_CLANG && ARCH_SUPPORTS_THINLTO
> + default y
> + help
> + This option enables Clang's ThinLTO, which allows for parallel
> + optimization and faster incremental compiles. More information
> + can be found from Clang's documentation:
> +
> + https://clang.llvm.org/docs/ThinLTO.html
> +
> +choice
> + prompt "Link Time Optimization (LTO)"
> + default LTO_NONE
> + help
> + This option enables Link Time Optimization (LTO), which allows the
> + compiler to optimize binaries globally.
> +
> + If unsure, select LTO_NONE.
> +
> +config LTO_NONE
> + bool "None"
> +
> +config LTO_CLANG
> + bool "Clang's Link Time Optimization (EXPERIMENTAL)"
> + depends on CC_IS_CLANG && CLANG_VERSION >= 110000 && LD_IS_LLD
> + depends on $(success,$(NM) --help | head -n 1 | grep -qi llvm)
> + depends on $(success,$(AR) --help | head -n 1 | grep -qi llvm)
> + depends on ARCH_SUPPORTS_LTO_CLANG
> + depends on !FTRACE_MCOUNT_RECORD
> + depends on !KASAN
> + depends on !MODVERSIONS
> + select LTO
> + help
> + This option enables Clang's Link Time Optimization (LTO), which
> + allows the compiler to optimize the kernel globally. If you enable
> + this option, the compiler generates LLVM bitcode instead of ELF
> + object files, and the actual compilation from bitcode happens at
> + the LTO link step, which may take several minutes depending on the
> + kernel configuration. More information can be found from LLVM's
> + documentation:
> +
> + https://llvm.org/docs/LinkTimeOptimization.html
> +
> + To select this option, you also need to use LLVM tools to handle
> + the bitcode by passing LLVM=1 to make.
> +
> +endchoice
> +
> config HAVE_ARCH_WITHIN_STACK_FRAMES
> bool
> help
> diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h
> index db600ef218d7..78079000c05a 100644
> --- a/include/asm-generic/vmlinux.lds.h
> +++ b/include/asm-generic/vmlinux.lds.h
> @@ -89,15 +89,18 @@
> * .data. We don't want to pull in .data..other sections, which Linux
> * has defined. Same for text and bss.
> *
> + * With LTO_CLANG, the linker also splits sections by default, so we need
> + * these macros to combine the sections during the final link.
> + *
> * RODATA_MAIN is not used because existing code already defines .rodata.x
> * sections to be brought in with rodata.
> */
> -#ifdef CONFIG_LD_DEAD_CODE_DATA_ELIMINATION
> +#if defined(CONFIG_LD_DEAD_CODE_DATA_ELIMINATION) || defined(CONFIG_LTO_CLANG)
> #define TEXT_MAIN .text .text.[0-9a-zA-Z_]*
> -#define DATA_MAIN .data .data.[0-9a-zA-Z_]* .data..LPBX*
> +#define DATA_MAIN .data .data.[0-9a-zA-Z_]* .data..L* .data..compoundliteral*
> #define SDATA_MAIN .sdata .sdata.[0-9a-zA-Z_]*
> -#define RODATA_MAIN .rodata .rodata.[0-9a-zA-Z_]*
> -#define BSS_MAIN .bss .bss.[0-9a-zA-Z_]*
> +#define RODATA_MAIN .rodata .rodata.[0-9a-zA-Z_]* .rodata..L*
> +#define BSS_MAIN .bss .bss.[0-9a-zA-Z_]* .bss..compoundliteral*
> #define SBSS_MAIN .sbss .sbss.[0-9a-zA-Z_]*
> #else
> #define TEXT_MAIN .text
> diff --git a/scripts/Makefile.build b/scripts/Makefile.build
> index 2e8810b7e5ed..f307e708a1b7 100644
> --- a/scripts/Makefile.build
> +++ b/scripts/Makefile.build
> @@ -108,7 +108,7 @@ endif
> # ---------------------------------------------------------------------------
>
> quiet_cmd_cc_s_c = CC $(quiet_modtag) $@
> - cmd_cc_s_c = $(CC) $(filter-out $(DEBUG_CFLAGS), $(c_flags)) $(DISABLE_LTO) -fverbose-asm -S -o $@ $<
> + cmd_cc_s_c = $(CC) $(filter-out $(DEBUG_CFLAGS) $(CC_FLAGS_LTO), $(c_flags)) -fverbose-asm -S -o $@ $<
>
> $(obj)/%.s: $(src)/%.c FORCE
> $(call if_changed_dep,cc_s_c)
> @@ -424,8 +424,15 @@ $(obj)/lib.a: $(lib-y) FORCE
> # Do not replace $(filter %.o,^) with $(real-prereqs). When a single object
> # module is turned into a multi object module, $^ will contain header file
> # dependencies recorded in the .*.cmd file.
> +ifdef CONFIG_LTO_CLANG
> +quiet_cmd_link_multi-m = AR [M] $@
> +cmd_link_multi-m = \
> + rm -f $@; \
> + $(AR) rcsTP$(KBUILD_ARFLAGS) $@ $(filter %.o,$^)
> +else
> quiet_cmd_link_multi-m = LD [M] $@
> cmd_link_multi-m = $(LD) $(ld_flags) -r -o $@ $(filter %.o,$^)
> +endif
>
> $(multi-used-m): FORCE
> $(call if_changed,link_multi-m)
> diff --git a/scripts/Makefile.modfinal b/scripts/Makefile.modfinal
> index 411c1e600e7d..1005b147abd0 100644
> --- a/scripts/Makefile.modfinal
> +++ b/scripts/Makefile.modfinal
> @@ -6,6 +6,7 @@
> PHONY := __modfinal
> __modfinal:
>
> +include $(objtree)/include/config/auto.conf
> include $(srctree)/scripts/Kbuild.include
>
> # for c_flags
> @@ -29,6 +30,12 @@ quiet_cmd_cc_o_c = CC [M] $@
>
> ARCH_POSTLINK := $(wildcard $(srctree)/arch/$(SRCARCH)/Makefile.postlink)
>
> +ifdef CONFIG_LTO_CLANG
> +# With CONFIG_LTO_CLANG, reuse the object file we compiled for modpost to
> +# avoid a second slow LTO link
> +prelink-ext := .lto
> +endif
> +
> quiet_cmd_ld_ko_o = LD [M] $@
> cmd_ld_ko_o = \
> $(LD) -r $(KBUILD_LDFLAGS) \
> @@ -37,7 +44,7 @@ quiet_cmd_ld_ko_o = LD [M] $@
> -o $@ $(filter %.o, $^); \
> $(if $(ARCH_POSTLINK), $(MAKE) -f $(ARCH_POSTLINK) $@, true)
>
> -$(modules): %.ko: %.o %.mod.o $(KBUILD_LDS_MODULE) FORCE
> +$(modules): %.ko: %$(prelink-ext).o %.mod.o $(KBUILD_LDS_MODULE) FORCE
> +$(call if_changed,ld_ko_o)
>
> targets += $(modules) $(modules:.ko=.mod.o)
> diff --git a/scripts/Makefile.modpost b/scripts/Makefile.modpost
> index 3651cbf6ad49..9ced8aecd579 100644
> --- a/scripts/Makefile.modpost
> +++ b/scripts/Makefile.modpost
> @@ -102,12 +102,32 @@ $(input-symdump):
> @echo >&2 'WARNING: Symbol version dump "$@" is missing.'
> @echo >&2 ' Modules may not have dependencies or modversions.'
>
> +ifdef CONFIG_LTO_CLANG
> +# With CONFIG_LTO_CLANG, .o files might be LLVM bitcode, so we need to run
> +# LTO to compile them into native code before running modpost
> +prelink-ext = .lto
> +
> +quiet_cmd_cc_lto_link_modules = LTO [M] $@
> +cmd_cc_lto_link_modules = \
> + $(LD) $(ld_flags) -r -o $@ \
> + --whole-archive $(filter-out FORCE,$^)
> +
> +%.lto.o: %.o FORCE
> + $(call if_changed,cc_lto_link_modules)
> +
> +PHONY += FORCE
> +FORCE:
> +
> +endif
> +
> +modules := $(sort $(shell cat $(MODORDER)))
> +
> # Read out modules.order to pass in modpost.
> # Otherwise, allmodconfig would fail with "Argument list too long".
> quiet_cmd_modpost = MODPOST $@
> - cmd_modpost = sed 's/ko$$/o/' $< | $(MODPOST) -T -
> + cmd_modpost = sed 's/\.ko$$/$(prelink-ext)\.o/' $< | $(MODPOST) -T -
>
> -$(output-symdump): $(MODORDER) $(input-symdump) FORCE
> +$(output-symdump): $(MODORDER) $(input-symdump) $(modules:.ko=$(prelink-ext).o) FORCE
> $(call if_changed,modpost)
>
> targets += $(output-symdump)
> diff --git a/scripts/link-vmlinux.sh b/scripts/link-vmlinux.sh
> index 92dd745906f4..a681b3b6722e 100755
> --- a/scripts/link-vmlinux.sh
> +++ b/scripts/link-vmlinux.sh
> @@ -52,6 +52,14 @@ modpost_link()
> ${KBUILD_VMLINUX_LIBS} \
> --end-group"
>
> + if [ -n "${CONFIG_LTO_CLANG}" ]; then
> + # This might take a while, so indicate that we're doing
> + # an LTO link
> + info LTO ${1}
> + else
> + info LD ${1}
> + fi
> +
> ${LD} ${KBUILD_LDFLAGS} -r -o ${1} ${objects}
> }
>
> @@ -99,13 +107,22 @@ vmlinux_link()
> fi
>
> if [ "${SRCARCH}" != "um" ]; then
> - objects="--whole-archive \
> - ${KBUILD_VMLINUX_OBJS} \
> - --no-whole-archive \
> - --start-group \
> - ${KBUILD_VMLINUX_LIBS} \
> - --end-group \
> - ${@}"
> + if [ -n "${CONFIG_LTO_CLANG}" ]; then
> + # Use vmlinux.o instead of performing the slow LTO
> + # link again.
> + objects="--whole-archive \
> + vmlinux.o \
> + --no-whole-archive \
> + ${@}"
> + else
> + objects="--whole-archive \
> + ${KBUILD_VMLINUX_OBJS} \
> + --no-whole-archive \
> + --start-group \
> + ${KBUILD_VMLINUX_LIBS} \
> + --end-group \
> + ${@}"
> + fi
>
> ${LD} ${KBUILD_LDFLAGS} ${LDFLAGS_vmlinux} \
> ${strip_debug#-Wl,} \
> @@ -270,7 +287,6 @@ fi;
> ${MAKE} -f "${srctree}/scripts/Makefile.build" obj=init need-builtin=1
>
> #link vmlinux.o
> -info LD vmlinux.o
> modpost_link vmlinux.o
> objtool_link vmlinux.o
>
> --
> 2.27.0.212.ge8ba1cc988-goog
>


--
Thanks,
~Nick Desaulniers

2020-06-24 22:00:05

by Nick Desaulniers

[permalink] [raw]
Subject: Re: [PATCH 17/22] arm64: vdso: disable LTO

On Wed, Jun 24, 2020 at 1:33 PM Sami Tolvanen <[email protected]> wrote:
>
> Filter out CC_FLAGS_LTO for the vDSO.

Just curious about this patch (and the following one for x86's vdso),
do you happen to recall specifically what the issues with the vDSOs
are?

>
> Signed-off-by: Sami Tolvanen <[email protected]>
> ---
> arch/arm64/kernel/vdso/Makefile | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/arch/arm64/kernel/vdso/Makefile b/arch/arm64/kernel/vdso/Makefile
> index 556d424c6f52..cfad4c296ca1 100644
> --- a/arch/arm64/kernel/vdso/Makefile
> +++ b/arch/arm64/kernel/vdso/Makefile
> @@ -29,8 +29,8 @@ ldflags-y := -shared -nostdlib -soname=linux-vdso.so.1 --hash-style=sysv \
> ccflags-y := -fno-common -fno-builtin -fno-stack-protector -ffixed-x18
> ccflags-y += -DDISABLE_BRANCH_PROFILING
>
> -CFLAGS_REMOVE_vgettimeofday.o = $(CC_FLAGS_FTRACE) -Os $(CC_FLAGS_SCS)
> -KBUILD_CFLAGS += $(DISABLE_LTO)
> +CFLAGS_REMOVE_vgettimeofday.o = $(CC_FLAGS_FTRACE) -Os $(CC_FLAGS_SCS) \
> + $(CC_FLAGS_LTO)
> KASAN_SANITIZE := n
> UBSAN_SANITIZE := n
> OBJECT_FILES_NON_STANDARD := y
> --
> 2.27.0.212.ge8ba1cc988-goog
>


--
Thanks,
~Nick Desaulniers

2020-06-24 22:02:16

by Nick Desaulniers

[permalink] [raw]
Subject: Re: [PATCH 17/22] arm64: vdso: disable LTO

On Wed, Jun 24, 2020 at 1:58 PM Nick Desaulniers
<[email protected]> wrote:
>
> On Wed, Jun 24, 2020 at 1:33 PM Sami Tolvanen <[email protected]> wrote:
> >
> > Filter out CC_FLAGS_LTO for the vDSO.
>
> Just curious about this patch (and the following one for x86's vdso),
> do you happen to recall specifically what the issues with the vdso's
> are?

+ Andi (tangential, I actually have a bunch of tabs open with slides
from http://halobates.de/ right now)
58edae3aac9f2
67424d5a22124
$ git log -S DISABLE_LTO

>
> >
> > Signed-off-by: Sami Tolvanen <[email protected]>
> > ---
> > arch/arm64/kernel/vdso/Makefile | 4 ++--
> > 1 file changed, 2 insertions(+), 2 deletions(-)
> >
> > diff --git a/arch/arm64/kernel/vdso/Makefile b/arch/arm64/kernel/vdso/Makefile
> > index 556d424c6f52..cfad4c296ca1 100644
> > --- a/arch/arm64/kernel/vdso/Makefile
> > +++ b/arch/arm64/kernel/vdso/Makefile
> > @@ -29,8 +29,8 @@ ldflags-y := -shared -nostdlib -soname=linux-vdso.so.1 --hash-style=sysv \
> > ccflags-y := -fno-common -fno-builtin -fno-stack-protector -ffixed-x18
> > ccflags-y += -DDISABLE_BRANCH_PROFILING
> >
> > -CFLAGS_REMOVE_vgettimeofday.o = $(CC_FLAGS_FTRACE) -Os $(CC_FLAGS_SCS)
> > -KBUILD_CFLAGS += $(DISABLE_LTO)
> > +CFLAGS_REMOVE_vgettimeofday.o = $(CC_FLAGS_FTRACE) -Os $(CC_FLAGS_SCS) \
> > + $(CC_FLAGS_LTO)
> > KASAN_SANITIZE := n
> > UBSAN_SANITIZE := n
> > OBJECT_FILES_NON_STANDARD := y
> > --

--
Thanks,
~Nick Desaulniers

2020-06-24 22:02:27

by Nick Desaulniers

[permalink] [raw]
Subject: Re: [PATCH 07/22] kbuild: lto: merge module sections

On Wed, Jun 24, 2020 at 1:33 PM Sami Tolvanen <[email protected]> wrote:
>
> LLD always splits sections with LTO, which increases module sizes. This
> change adds a linker script that merges the split sections in the final
> module and discards the .eh_frame section that LLD may generate.

For discarding .eh_frame, Kees is currently fighting with a series
that I would really like to see land that enables warnings on orphan
section placement. I don't see any new flags to inhibit .eh_frame
generation, or discard it in the linker script, so I'd expect it to be
treated as an orphan section and kept. Was that missed, or should
that be removed from the commit message?
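
For what it's worth, a quick way to check whether .eh_frame actually
survives in a finished module would be something like this (a sketch;
the module path is hypothetical):

$ llvm-readelf -S drivers/char/foo.ko | grep eh_frame

If that prints a section header line, LLD kept it as an orphan.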

>
> Suggested-by: Nick Desaulniers <[email protected]>
> Signed-off-by: Sami Tolvanen <[email protected]>
> ---
> Makefile | 2 ++
> scripts/module-lto.lds | 26 ++++++++++++++++++++++++++
> 2 files changed, 28 insertions(+)
> create mode 100644 scripts/module-lto.lds
>
> diff --git a/Makefile b/Makefile
> index ee66513a5b66..9ffec5fe1737 100644
> --- a/Makefile
> +++ b/Makefile
> @@ -898,6 +898,8 @@ CC_FLAGS_LTO_CLANG += -fvisibility=default
> # Limit inlining across translation units to reduce binary size
> LD_FLAGS_LTO_CLANG := -mllvm -import-instr-limit=5
> KBUILD_LDFLAGS += $(LD_FLAGS_LTO_CLANG)
> +
> +KBUILD_LDS_MODULE += $(srctree)/scripts/module-lto.lds
> endif
>
> ifdef CONFIG_LTO
> diff --git a/scripts/module-lto.lds b/scripts/module-lto.lds
> new file mode 100644
> index 000000000000..65884c652bf2
> --- /dev/null
> +++ b/scripts/module-lto.lds
> @@ -0,0 +1,26 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * With CONFIG_LTO_CLANG, LLD always enables -fdata-sections and
> + * -ffunction-sections, which increases the size of the final module.
> + * Merge the split sections in the final binary.
> + */
> +SECTIONS {
> + __patchable_function_entries : { *(__patchable_function_entries) }
> +
> + .bss : {
> + *(.bss .bss.[0-9a-zA-Z_]*)
> + *(.bss..L* .bss..compoundliteral*)
> + }
> +
> + .data : {
> + *(.data .data.[0-9a-zA-Z_]*)
> + *(.data..L* .data..compoundliteral*)
> + }
> +
> + .rodata : {
> + *(.rodata .rodata.[0-9a-zA-Z_]*)
> + *(.rodata..L* .rodata..compoundliteral*)
> + }
> +
> + .text : { *(.text .text.[0-9a-zA-Z_]*) }
> +}
> --
> 2.27.0.212.ge8ba1cc988-goog
>


--
Thanks,
~Nick Desaulniers

2020-06-24 22:02:42

by Nick Desaulniers

[permalink] [raw]
Subject: Re: [PATCH 08/22] kbuild: lto: remove duplicate dependencies from .mod files

On Wed, Jun 24, 2020 at 1:33 PM Sami Tolvanen <[email protected]> wrote:
>
> With LTO, llvm-nm prints out symbols for each archive member
> separately, which results in a lot of duplicate dependencies in the
> .mod file when CONFIG_TRIM_UNUSED_SYMS is enabled. When a module
> consists of several compilation units, the output can exceed the
> default xargs command size limit and split the dependency list to
> multiple lines, which results in used symbols getting trimmed.
>
> This change removes duplicate dependencies, which will reduce the
> probability of this happening and makes .mod files smaller and
> easier to read.
>
> Signed-off-by: Sami Tolvanen <[email protected]>

Reviewed-by: Nick Desaulniers <[email protected]>

> ---
> scripts/Makefile.build | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/scripts/Makefile.build b/scripts/Makefile.build
> index 82977350f5a6..82b465ce3ca0 100644
> --- a/scripts/Makefile.build
> +++ b/scripts/Makefile.build
> @@ -291,7 +291,7 @@ endef
>
> # List module undefined symbols (or empty line if not enabled)
> ifdef CONFIG_TRIM_UNUSED_KSYMS
> -cmd_undef_syms = $(NM) $< | sed -n 's/^ *U //p' | xargs echo
> +cmd_undef_syms = $(NM) $< | sed -n 's/^ *U //p' | sort -u | xargs echo
> else
> cmd_undef_syms = echo
> endif
> --
> 2.27.0.212.ge8ba1cc988-goog
>


--
Thanks,
~Nick Desaulniers

2020-06-24 22:04:19

by Peter Zijlstra

[permalink] [raw]
Subject: Re: [PATCH 05/22] kbuild: lto: postpone objtool

On Wed, Jun 24, 2020 at 01:31:43PM -0700, Sami Tolvanen wrote:
> diff --git a/include/linux/compiler.h b/include/linux/compiler.h
> index 30827f82ad62..12b115152532 100644
> --- a/include/linux/compiler.h
> +++ b/include/linux/compiler.h
> @@ -120,7 +120,7 @@ void ftrace_likely_update(struct ftrace_likely_data *f, int val,
> /* Annotate a C jump table to allow objtool to follow the code flow */
> #define __annotate_jump_table __section(.rodata..c_jump_table)
>
> -#ifdef CONFIG_DEBUG_ENTRY
> +#if defined(CONFIG_DEBUG_ENTRY) || defined(CONFIG_LTO_CLANG)
> /* Begin/end of an instrumentation safe region */
> #define instrumentation_begin() ({ \
> asm volatile("%c0:\n\t" \

Why would you be doing noinstr validation for lto builds? That doesn't
make sense.

> diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
> index 9ad9210d70a1..9fdba71c135a 100644
> --- a/lib/Kconfig.debug
> +++ b/lib/Kconfig.debug
> @@ -399,7 +399,7 @@ config STACK_VALIDATION
>
> config VMLINUX_VALIDATION
> bool
> - depends on STACK_VALIDATION && DEBUG_ENTRY && !PARAVIRT
> + depends on STACK_VALIDATION && (DEBUG_ENTRY || LTO_CLANG) && !PARAVIRT
> default y
>

For that very same reason you shouldn't be excluding paravirt either.

> diff --git a/scripts/Makefile.modfinal b/scripts/Makefile.modfinal
> index d168f0cfe67c..9f1df2f1fab5 100644
> --- a/scripts/Makefile.modfinal
> +++ b/scripts/Makefile.modfinal
> @@ -48,6 +48,21 @@ endif # CC_USING_PATCHABLE_FUNCTION_ENTRY
> endif # CC_USING_RECORD_MCOUNT
> endif # CONFIG_FTRACE_MCOUNT_RECORD
>
> +ifdef CONFIG_STACK_VALIDATION
> +ifneq ($(SKIP_STACK_VALIDATION),1)
> +cmd_ld_ko_o += \
> + $(objtree)/tools/objtool/objtool \
> + $(if $(CONFIG_UNWINDER_ORC),orc generate,check) \
> + --module \
> + $(if $(CONFIG_FRAME_POINTER),,--no-fp) \
> + $(if $(CONFIG_GCOV_KERNEL),--no-unreachable,) \
> + $(if $(CONFIG_RETPOLINE),--retpoline,) \
> + $(if $(CONFIG_X86_SMAP),--uaccess,) \
> + $(@:.ko=$(prelink-ext).o);
> +
> +endif # SKIP_STACK_VALIDATION
> +endif # CONFIG_STACK_VALIDATION

What about the objtool invocation from link-vmlinux.sh ?

2020-06-24 22:04:37

by Peter Zijlstra

[permalink] [raw]
Subject: Re: [PATCH 06/22] kbuild: lto: limit inlining

On Wed, Jun 24, 2020 at 01:31:44PM -0700, Sami Tolvanen wrote:
> This change limits function inlining across translation unit
> boundaries in order to reduce the binary size with LTO.
>
> The -import-instr-limit flag defines a size limit, as the number
> of LLVM IR instructions, for importing functions from other TUs.
> The default value is 100, and decreasing it to 5 reduces the size
> of a stripped arm64 defconfig vmlinux by 11%.

Is that also the right number for x86? What about the effect on
performance? What did 6 do? or 4?

> Suggested-by: George Burgess IV <[email protected]>
> Signed-off-by: Sami Tolvanen <[email protected]>
> ---
> Makefile | 4 ++++
> 1 file changed, 4 insertions(+)
>
> diff --git a/Makefile b/Makefile
> index 3a7e5e5c17b9..ee66513a5b66 100644
> --- a/Makefile
> +++ b/Makefile
> @@ -894,6 +894,10 @@ else
> CC_FLAGS_LTO_CLANG := -flto
> endif
> CC_FLAGS_LTO_CLANG += -fvisibility=default
> +
> +# Limit inlining across translation units to reduce binary size
> +LD_FLAGS_LTO_CLANG := -mllvm -import-instr-limit=5
> +KBUILD_LDFLAGS += $(LD_FLAGS_LTO_CLANG)
> endif
>
> ifdef CONFIG_LTO
> --
> 2.27.0.212.ge8ba1cc988-goog
>

2020-06-24 22:05:22

by Peter Zijlstra

[permalink] [raw]
Subject: Re: [PATCH 04/22] kbuild: lto: fix recordmcount

On Wed, Jun 24, 2020 at 01:31:42PM -0700, Sami Tolvanen wrote:
> With LTO, LLVM bitcode won't be compiled into native code until
> modpost_link. This change postpones calls to recordmcount until after
> this step.
>
> In order to exclude specific functions from inspection, we add a new
> code section .text..nomcount, which we tell recordmcount to ignore, and
> a __nomcount attribute for moving functions to this section.

I'm confused, you only add this to functions in ftrace itself, which is
compiled with:

KBUILD_CFLAGS = $(subst $(CC_FLAGS_FTRACE),,$(ORIG_CFLAGS))

and so should not have mcount/fentry sites anyway. So what's the point
of ignoring them further?

This Changelog does not explain.

2020-06-24 22:05:29

by Peter Zijlstra

[permalink] [raw]
Subject: Re: [PATCH 00/22] add support for Clang LTO

On Wed, Jun 24, 2020 at 01:31:38PM -0700, Sami Tolvanen wrote:
> This patch series adds support for building x86_64 and arm64 kernels
> with Clang's Link Time Optimization (LTO).
>
> In addition to performance, the primary motivation for LTO is to allow
> Clang's Control-Flow Integrity (CFI) to be used in the kernel. Google's
> Pixel devices have shipped with LTO+CFI kernels since 2018.
>
> Most of the patches are build system changes for handling LLVM bitcode,
> which Clang produces with LTO instead of ELF object files, postponing
> ELF processing until a later stage, and ensuring initcall ordering.
>
> Note that first objtool patch in the series is already in linux-next,
> but as it's needed with LTO, I'm including it also here to make testing
> easier.

I'm very sad that yet again, memory ordering isn't addressed. LTO vastly
increases the range of the optimizer to wreck things.

2020-06-24 22:05:56

by Sami Tolvanen

[permalink] [raw]
Subject: Re: [PATCH 02/22] kbuild: add support for Clang LTO

On Wed, Jun 24, 2020 at 01:53:52PM -0700, Nick Desaulniers wrote:
> On Wed, Jun 24, 2020 at 1:32 PM Sami Tolvanen <[email protected]> wrote:
> >
> > diff --git a/Makefile b/Makefile
> > index ac2c61c37a73..0c7fe6fb2143 100644
> > --- a/Makefile
> > +++ b/Makefile
> > @@ -886,6 +886,22 @@ KBUILD_CFLAGS += $(CC_FLAGS_SCS)
> > export CC_FLAGS_SCS
> > endif
> >
> > +ifdef CONFIG_LTO_CLANG
> > +ifdef CONFIG_THINLTO
> > +CC_FLAGS_LTO_CLANG := -flto=thin $(call cc-option, -fsplit-lto-unit)
>
> The kconfig change gates this on clang-11; do we still need the
> cc-option check here, or can we hardcode the use of -fsplit-lto-unit?
> Playing with the flag in godbolt, it looks like clang-8 had support
> for this flag.

True, we don't need cc-option here anymore. I'll remove it, thanks.

> > +KBUILD_LDFLAGS += --thinlto-cache-dir=.thinlto-cache
>
> It might be nice to have `make distclean` or even `make clean` scrub
> the .thinlto-cache? Also, I verified that the `.gitignore` rule for
> `.*` properly ignores this dir.

Sure, distclean sounds appropriate to me.
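
Until that's wired up, scrubbing the cache by hand from the object
tree works fine (trivial sketch):

$ rm -rf .thinlto-cache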

Sami

2020-06-24 22:06:22

by Sami Tolvanen

[permalink] [raw]
Subject: Re: [PATCH 07/22] kbuild: lto: merge module sections

On Wed, Jun 24, 2020 at 02:01:59PM -0700, 'Nick Desaulniers' via Clang Built Linux wrote:
> On Wed, Jun 24, 2020 at 1:33 PM Sami Tolvanen <[email protected]> wrote:
> >
> > LLD always splits sections with LTO, which increases module sizes. This
> > change adds a linker script that merges the split sections in the final
> > module and discards the .eh_frame section that LLD may generate.
>
> For discarding .eh_frame, Kees is currently fighting with a series
> that I would really like to see land that enables warnings on orphan
> section placement. I don't see any new flags to inhibit .eh_frame
> generation, or discard it in the linker script, so I'd expect it to be
> treated as an orphan section and kept. Was that missed, or should
> that be removed from the commit message?

It should be removed from the commit message, thanks for pointing it
out.

Sami

2020-06-24 22:06:24

by Nick Desaulniers

[permalink] [raw]
Subject: Re: [PATCH 00/22] add support for Clang LTO

On Wed, Jun 24, 2020 at 2:15 PM Peter Zijlstra <[email protected]> wrote:
>
> On Wed, Jun 24, 2020 at 01:31:38PM -0700, Sami Tolvanen wrote:
> > This patch series adds support for building x86_64 and arm64 kernels
> > with Clang's Link Time Optimization (LTO).
> >
> > In addition to performance, the primary motivation for LTO is to allow
> > Clang's Control-Flow Integrity (CFI) to be used in the kernel. Google's
> > Pixel devices have shipped with LTO+CFI kernels since 2018.
> >
> > Most of the patches are build system changes for handling LLVM bitcode,
> > which Clang produces with LTO instead of ELF object files, postponing
> > ELF processing until a later stage, and ensuring initcall ordering.
> >
> > Note that first objtool patch in the series is already in linux-next,
> > but as it's needed with LTO, I'm including it also here to make testing
> > easier.
>
> I'm very sad that yet again, memory ordering isn't addressed. LTO vastly
> increases the range of the optimizer to wreck things.

Hi Peter, could you expand on the issue for the folks on the thread?
I'm happy to try to hack something up in LLVM to check that X does
or does not happen; maybe we can even come up with some concrete test
cases that can be added to LLVM's codebase?

--
Thanks,
~Nick Desaulniers

2020-06-24 22:07:05

by Sami Tolvanen

[permalink] [raw]
Subject: Re: [PATCH 00/22] add support for Clang LTO

On Wed, Jun 24, 2020 at 11:15:40PM +0200, Peter Zijlstra wrote:
> On Wed, Jun 24, 2020 at 01:31:38PM -0700, Sami Tolvanen wrote:
> > This patch series adds support for building x86_64 and arm64 kernels
> > with Clang's Link Time Optimization (LTO).
> >
> > In addition to performance, the primary motivation for LTO is to allow
> > Clang's Control-Flow Integrity (CFI) to be used in the kernel. Google's
> > Pixel devices have shipped with LTO+CFI kernels since 2018.
> >
> > Most of the patches are build system changes for handling LLVM bitcode,
> > which Clang produces with LTO instead of ELF object files, postponing
> > ELF processing until a later stage, and ensuring initcall ordering.
> >
> > Note that first objtool patch in the series is already in linux-next,
> > but as it's needed with LTO, I'm including it also here to make testing
> > easier.
>
> I'm very sad that yet again, memory ordering isn't addressed. LTO vastly
> increases the range of the optimizer to wreck things.

I believe Will has some thoughts about this, and patches, but I'll let
him talk about it.

Sami

2020-06-24 22:07:18

by Sami Tolvanen

[permalink] [raw]
Subject: Re: [PATCH 04/22] kbuild: lto: fix recordmcount

On Wed, Jun 24, 2020 at 11:27:37PM +0200, Peter Zijlstra wrote:
> On Wed, Jun 24, 2020 at 01:31:42PM -0700, Sami Tolvanen wrote:
> > With LTO, LLVM bitcode won't be compiled into native code until
> > modpost_link. This change postpones calls to recordmcount until after
> > this step.
> >
> > In order to exclude specific functions from inspection, we add a new
> > code section .text..nomcount, which we tell recordmcount to ignore, and
> > a __nomcount attribute for moving functions to this section.
>
> I'm confused, you only add this to functions in ftrace itself, which is
> compiled with:
>
> KBUILD_CFLAGS = $(subst $(CC_FLAGS_FTRACE),,$(ORIG_CFLAGS))
>
> and so should not have mcount/fentry sites anyway. So what's the point
> of ignoring them further?
>
> This Changelog does not explain.

Normally, recordmcount ignores each ftrace.o file, but since we are
running it on vmlinux.o, we need another way to stop it from looking
at references to mcount/fentry that are not calls. Here's a comment
from recordmcount.c:

/*
* The file kernel/trace/ftrace.o references the mcount
* function but does not call it. Since ftrace.o should
* not be traced anyway, we just skip it.
*/

But I agree, the commit message could use more details. Also +Steven
for thoughts about this approach.
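
For example, both the actual call sites and ftrace's non-call
references show up in the relocation entries of the prelinked object,
and recordmcount has to tell them apart (a sketch; __fentry__ is the
x86_64 case):

$ llvm-objdump -r vmlinux.o | grep __fentry__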

Sami

2020-06-24 22:07:56

by Sami Tolvanen

[permalink] [raw]
Subject: Re: [PATCH 05/22] kbuild: lto: postpone objtool

On Wed, Jun 24, 2020 at 11:19:08PM +0200, Peter Zijlstra wrote:
> On Wed, Jun 24, 2020 at 01:31:43PM -0700, Sami Tolvanen wrote:
> > diff --git a/include/linux/compiler.h b/include/linux/compiler.h
> > index 30827f82ad62..12b115152532 100644
> > --- a/include/linux/compiler.h
> > +++ b/include/linux/compiler.h
> > @@ -120,7 +120,7 @@ void ftrace_likely_update(struct ftrace_likely_data *f, int val,
> > /* Annotate a C jump table to allow objtool to follow the code flow */
> > #define __annotate_jump_table __section(.rodata..c_jump_table)
> >
> > -#ifdef CONFIG_DEBUG_ENTRY
> > +#if defined(CONFIG_DEBUG_ENTRY) || defined(CONFIG_LTO_CLANG)
> > /* Begin/end of an instrumentation safe region */
> > #define instrumentation_begin() ({ \
> > asm volatile("%c0:\n\t" \
>
> Why would you be doing noinstr validation for lto builds? That doesn't
> make sense.

This is just to avoid a ton of noinstr warnings when we run objtool on
vmlinux.o, but I'm also fine with skipping noinstr validation with LTO.

> > +ifdef CONFIG_STACK_VALIDATION
> > +ifneq ($(SKIP_STACK_VALIDATION),1)
> > +cmd_ld_ko_o += \
> > + $(objtree)/tools/objtool/objtool \
> > + $(if $(CONFIG_UNWINDER_ORC),orc generate,check) \
> > + --module \
> > + $(if $(CONFIG_FRAME_POINTER),,--no-fp) \
> > + $(if $(CONFIG_GCOV_KERNEL),--no-unreachable,) \
> > + $(if $(CONFIG_RETPOLINE),--retpoline,) \
> > + $(if $(CONFIG_X86_SMAP),--uaccess,) \
> > + $(@:.ko=$(prelink-ext).o);
> > +
> > +endif # SKIP_STACK_VALIDATION
> > +endif # CONFIG_STACK_VALIDATION
>
> What about the objtool invocation from link-vmlinux.sh ?

What about it? The existing objtool_link invocation in link-vmlinux.sh
works fine for our purposes as well.

Sami

2020-06-24 22:08:17

by Sami Tolvanen

[permalink] [raw]
Subject: Re: [PATCH 17/22] arm64: vdso: disable LTO

On Wed, Jun 24, 2020 at 01:58:57PM -0700, 'Nick Desaulniers' via Clang Built Linux wrote:
> On Wed, Jun 24, 2020 at 1:33 PM Sami Tolvanen <[email protected]> wrote:
> >
> > Filter out CC_FLAGS_LTO for the vDSO.
>
> Just curious about this patch (and the following one for x86's vdso),
> do you happen to recall specifically what the issues with the vdso's
> are?

I recall the compiler optimizing away functions at some point, but as
LTO is not really needed in the vDSO, it's just easiest to disable it
there.
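
If someone wants to re-test this, a quick sanity check that the
exported functions survive the LTO link would be something like (a
sketch, using the unstripped arm64 vDSO):

$ llvm-nm arch/arm64/kernel/vdso/vdso.so.dbg | grep __kernel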

Sami

2020-06-24 22:10:45

by Nick Desaulniers

[permalink] [raw]
Subject: Re: [PATCH 12/22] modpost: lto: strip .lto from module names

On Wed, Jun 24, 2020 at 1:33 PM Sami Tolvanen <[email protected]> wrote:
>
> With LTO, everything is compiled into LLVM bitcode, so we have to link
> each module into native code before modpost. Kbuild uses the .lto.o
> suffix for these files, which also ends up in module information. This
> change strips the unnecessary .lto suffix from the module name.
>
> Suggested-by: Bill Wendling <[email protected]>
> Signed-off-by: Sami Tolvanen <[email protected]>
> ---
> scripts/mod/modpost.c | 16 +++++++---------
> scripts/mod/modpost.h | 9 +++++++++
> scripts/mod/sumversion.c | 6 +++++-
> 3 files changed, 21 insertions(+), 10 deletions(-)
>
> diff --git a/scripts/mod/modpost.c b/scripts/mod/modpost.c
> index 6aea65c65745..8352f8a1a138 100644
> --- a/scripts/mod/modpost.c
> +++ b/scripts/mod/modpost.c
> @@ -17,7 +17,6 @@
> #include <ctype.h>
> #include <string.h>
> #include <limits.h>
> -#include <stdbool.h>

It looks like `bool` is used in the function signatures of other
functions in this TU. I'm not the biggest fan of hoisting the includes
out of the .c source into the header (I'd keep it in both), but I
don't feel strongly enough to NACK.

Reviewed-by: Nick Desaulniers <[email protected]>

> #include <errno.h>
> #include "modpost.h"
> #include "../../include/linux/license.h"
> @@ -80,14 +79,6 @@ modpost_log(enum loglevel loglevel, const char *fmt, ...)
> exit(1);
> }
>
> -static inline bool strends(const char *str, const char *postfix)
> -{
> - if (strlen(str) < strlen(postfix))
> - return false;
> -
> - return strcmp(str + strlen(str) - strlen(postfix), postfix) == 0;
> -}
> -
> void *do_nofail(void *ptr, const char *expr)
> {
> if (!ptr)
> @@ -1975,6 +1966,10 @@ static char *remove_dot(char *s)
> size_t m = strspn(s + n + 1, "0123456789");
> if (m && (s[n + m] == '.' || s[n + m] == 0))
> s[n] = 0;
> +
> + /* strip trailing .lto */
> + if (strends(s, ".lto"))
> + s[strlen(s) - 4] = '\0';
> }
> return s;
> }
> @@ -1998,6 +1993,9 @@ static void read_symbols(const char *modname)
> /* strip trailing .o */
> tmp = NOFAIL(strdup(modname));
> tmp[strlen(tmp) - 2] = '\0';
> + /* strip trailing .lto */
> + if (strends(tmp, ".lto"))
> + tmp[strlen(tmp) - 4] = '\0';
> mod = new_module(tmp);
> free(tmp);
> }
> diff --git a/scripts/mod/modpost.h b/scripts/mod/modpost.h
> index 3aa052722233..fab30d201f9e 100644
> --- a/scripts/mod/modpost.h
> +++ b/scripts/mod/modpost.h
> @@ -2,6 +2,7 @@
> #include <stdio.h>
> #include <stdlib.h>
> #include <stdarg.h>
> +#include <stdbool.h>
> #include <string.h>
> #include <sys/types.h>
> #include <sys/stat.h>
> @@ -180,6 +181,14 @@ static inline unsigned int get_secindex(const struct elf_info *info,
> return info->symtab_shndx_start[sym - info->symtab_start];
> }
>
> +static inline bool strends(const char *str, const char *postfix)
> +{
> + if (strlen(str) < strlen(postfix))
> + return false;
> +
> + return strcmp(str + strlen(str) - strlen(postfix), postfix) == 0;
> +}
> +
> /* file2alias.c */
> extern unsigned int cross_build;
> void handle_moddevtable(struct module *mod, struct elf_info *info,
> diff --git a/scripts/mod/sumversion.c b/scripts/mod/sumversion.c
> index d587f40f1117..760e6baa7eda 100644
> --- a/scripts/mod/sumversion.c
> +++ b/scripts/mod/sumversion.c
> @@ -391,10 +391,14 @@ void get_src_version(const char *modname, char sum[], unsigned sumlen)
> struct md4_ctx md;
> char *fname;
> char filelist[PATH_MAX + 1];
> + int postfix_len = 1;
> +
> + if (strends(modname, ".lto.o"))
> + postfix_len = 5;
>
> /* objects for a module are listed in the first line of *.mod file. */
> snprintf(filelist, sizeof(filelist), "%.*smod",
> - (int)strlen(modname) - 1, modname);
> + (int)strlen(modname) - postfix_len, modname);
>
> buf = read_text_file(filelist);
>
> --
> 2.27.0.212.ge8ba1cc988-goog
>


--
Thanks,
~Nick Desaulniers

2020-06-24 23:06:31

by Nick Desaulniers

[permalink] [raw]
Subject: Re: [PATCH 17/22] arm64: vdso: disable LTO

On Wed, Jun 24, 2020 at 2:52 PM Sami Tolvanen <[email protected]> wrote:
>
> On Wed, Jun 24, 2020 at 01:58:57PM -0700, 'Nick Desaulniers' via Clang Built Linux wrote:
> > On Wed, Jun 24, 2020 at 1:33 PM Sami Tolvanen <[email protected]> wrote:
> > >
> > > Filter out CC_FLAGS_LTO for the vDSO.
> >
> > Just curious about this patch (and the following one for x86's vdso),
> > do you happen to recall specifically what the issues with the vdso's
> > are?
>
> I recall the compiler optimizing away functions at some point, but as
> LTO is not really needed in the vDSO, it's just easiest to disable it
> there.

Sounds fishy; with extern linkage, I would think it's not safe to
eliminate functions. Probably unnecessary for the initial
implementation, and something we can follow up on, but always good to
have an answer to the inevitable question "why?" in the commit
message.
--
Thanks,
~Nick Desaulniers

2020-06-24 23:38:46

by Sami Tolvanen

[permalink] [raw]
Subject: Re: [PATCH 06/22] kbuild: lto: limit inlining

On Wed, Jun 24, 2020 at 11:20:55PM +0200, Peter Zijlstra wrote:
> On Wed, Jun 24, 2020 at 01:31:44PM -0700, Sami Tolvanen wrote:
> > This change limits function inlining across translation unit
> > boundaries in order to reduce the binary size with LTO.
> >
> > The -import-instr-limit flag defines a size limit, as the number
> > of LLVM IR instructions, for importing functions from other TUs.
> > The default value is 100, and decreasing it to 5 reduces the size
> > of a stripped arm64 defconfig vmlinux by 11%.
>
> Is that also the right number for x86? What about the effect on
> performance? What did 6 do? or 4?

This is the size limit we decided on for Android after testing on
arm64, but the number is obviously a compromise between code size
and performance. I'd be happy to benchmark this further once other
concerns have been resolved.
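
As a sketch of such an experiment (this assumes -import-instr-limit
is still set in the top-level Makefile; the numbers are machine- and
config-dependent):

# rebuild with different limits and compare the stripped binaries
for limit in 4 5 6 10 100; do
        sed -i "s/import-instr-limit=[0-9]*/import-instr-limit=$limit/" Makefile
        make LLVM=1 vmlinux
        llvm-strip -o vmlinux.stripped.$limit vmlinux
done
llvm-size vmlinux.stripped.*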

Sami

2020-06-24 23:40:36

by Sami Tolvanen

[permalink] [raw]
Subject: Re: [PATCH 17/22] arm64: vdso: disable LTO

On Wed, Jun 24, 2020 at 04:05:48PM -0700, Nick Desaulniers wrote:
> On Wed, Jun 24, 2020 at 2:52 PM Sami Tolvanen <[email protected]> wrote:
> >
> > On Wed, Jun 24, 2020 at 01:58:57PM -0700, 'Nick Desaulniers' via Clang Built Linux wrote:
> > > On Wed, Jun 24, 2020 at 1:33 PM Sami Tolvanen <[email protected]> wrote:
> > > >
> > > > Filter out CC_FLAGS_LTO for the vDSO.
> > >
> > > Just curious about this patch (and the following one for x86's vdso),
> > > do you happen to recall specifically what the issues with the vdso's
> > > are?
> >
> > I recall the compiler optimizing away functions at some point, but as
> > LTO is not really needed in the vDSO, it's just easiest to disable it
> > there.
>
> Sounds fishy; with extern linkage then I would think it's not safe to
> eliminate functions. Probably unnecessary for the initial
> implementation, and something we can follow up on, but always good to
> have an answer to the inevitable question "why?" in the commit
> message.

Sure. I can test this again with the current toolchain to see if there
are still problems.

Sami

2020-06-24 23:54:17

by Andi Kleen

[permalink] [raw]
Subject: Re: [PATCH 17/22] arm64: vdso: disable LTO

On Wed, Jun 24, 2020 at 02:09:40PM -0700, Nick Desaulniers wrote:
> On Wed, Jun 24, 2020 at 1:58 PM Nick Desaulniers
> <[email protected]> wrote:
> >
> > On Wed, Jun 24, 2020 at 1:33 PM Sami Tolvanen <[email protected]> wrote:
> > >
> > > Filter out CC_FLAGS_LTO for the vDSO.
> >
> > Just curious about this patch (and the following one for x86's vdso),
> > do you happen to recall specifically what the issues with the vdso's
> > are?
>
> + Andi (tangential, I actually have a bunch of tabs open with slides
> from http://halobates.de/ right now)
> 58edae3aac9f2
> 67424d5a22124
> $ git log -S DISABLE_LTO

I think I did it originally because the vDSO linker step didn't do
all the magic needed for gcc LTO. But LTO also doesn't seem to be
very useful for just a few functions that don't have complex
interactions, and it is somewhat risky in that it may violate some
assumptions.

-Andi

2020-06-25 02:27:31

by Nathan Chancellor

[permalink] [raw]
Subject: Re: [PATCH 02/22] kbuild: add support for Clang LTO

Hi Sami,

On Wed, Jun 24, 2020 at 01:31:40PM -0700, 'Sami Tolvanen' via Clang Built Linux wrote:
> This change adds build system support for Clang's Link Time
> Optimization (LTO). With -flto, instead of ELF object files, Clang
> produces LLVM bitcode, which is compiled into native code at link
> time, allowing the final binary to be optimized globally. For more
> details, see:
>
> https://llvm.org/docs/LinkTimeOptimization.html
>
> The Kconfig option CONFIG_LTO_CLANG is implemented as a choice,
> which defaults to LTO being disabled. To use LTO, the architecture
> must select ARCH_SUPPORTS_LTO_CLANG and support:
>
> - compiling with Clang,
> - compiling inline assembly with Clang's integrated assembler,
> - and linking with LLD.
>
> While using full LTO results in the best runtime performance, the
> compilation is not scalable in time or memory. CONFIG_THINLTO
> enables ThinLTO, which allows parallel optimization and faster
> incremental builds. ThinLTO is used by default if the architecture
> also selects ARCH_SUPPORTS_THINLTO:
>
> https://clang.llvm.org/docs/ThinLTO.html
>
> To enable LTO, LLVM tools must be used to handle bitcode files. The
> easiest way is to pass the LLVM=1 option to make:
>
> $ make LLVM=1 defconfig
> $ scripts/config -e LTO_CLANG
> $ make LLVM=1
>
> Alternatively, at least the following LLVM tools must be used:
>
> CC=clang LD=ld.lld AR=llvm-ar NM=llvm-nm
>
> To prepare for LTO support with other compilers, common parts are
> gated behind the CONFIG_LTO option, and LTO can be disabled for
> specific files by filtering out CC_FLAGS_LTO.
>
> Note that support for DYNAMIC_FTRACE and MODVERSIONS is added in
> follow-up patches.
>
> Signed-off-by: Sami Tolvanen <[email protected]>
> ---
> Makefile | 16 ++++++++
> arch/Kconfig | 66 +++++++++++++++++++++++++++++++
> include/asm-generic/vmlinux.lds.h | 11 ++++--
> scripts/Makefile.build | 9 ++++-
> scripts/Makefile.modfinal | 9 ++++-
> scripts/Makefile.modpost | 24 ++++++++++-
> scripts/link-vmlinux.sh | 32 +++++++++++----
> 7 files changed, 151 insertions(+), 16 deletions(-)
>
> diff --git a/Makefile b/Makefile
> index ac2c61c37a73..0c7fe6fb2143 100644
> --- a/Makefile
> +++ b/Makefile
> @@ -886,6 +886,22 @@ KBUILD_CFLAGS += $(CC_FLAGS_SCS)
> export CC_FLAGS_SCS
> endif
>
> +ifdef CONFIG_LTO_CLANG
> +ifdef CONFIG_THINLTO
> +CC_FLAGS_LTO_CLANG := -flto=thin $(call cc-option, -fsplit-lto-unit)
> +KBUILD_LDFLAGS += --thinlto-cache-dir=.thinlto-cache
> +else
> +CC_FLAGS_LTO_CLANG := -flto
> +endif
> +CC_FLAGS_LTO_CLANG += -fvisibility=default
> +endif
> +
> +ifdef CONFIG_LTO
> +CC_FLAGS_LTO := $(CC_FLAGS_LTO_CLANG)
> +KBUILD_CFLAGS += $(CC_FLAGS_LTO)
> +export CC_FLAGS_LTO
> +endif
> +
> # arch Makefile may override CC so keep this after arch Makefile is included
> NOSTDINC_FLAGS += -nostdinc -isystem $(shell $(CC) -print-file-name=include)
>
> diff --git a/arch/Kconfig b/arch/Kconfig
> index 8cc35dc556c7..e00b122293f8 100644
> --- a/arch/Kconfig
> +++ b/arch/Kconfig
> @@ -552,6 +552,72 @@ config SHADOW_CALL_STACK
> reading and writing arbitrary memory may be able to locate them
> and hijack control flow by modifying the stacks.
>
> +config LTO
> + bool
> +
> +config ARCH_SUPPORTS_LTO_CLANG
> + bool
> + help
> + An architecture should select this option if it supports:
> + - compiling with Clang,
> + - compiling inline assembly with Clang's integrated assembler,
> + - and linking with LLD.
> +
> +config ARCH_SUPPORTS_THINLTO
> + bool
> + help
> + An architecture should select this option if it supports Clang's
> + ThinLTO.
> +
> +config THINLTO
> + bool "Clang ThinLTO"
> + depends on LTO_CLANG && ARCH_SUPPORTS_THINLTO
> + default y
> + help
> + This option enables Clang's ThinLTO, which allows for parallel
> + optimization and faster incremental compiles. More information
> + can be found from Clang's documentation:
> +
> + https://clang.llvm.org/docs/ThinLTO.html
> +
> +choice
> + prompt "Link Time Optimization (LTO)"
> + default LTO_NONE
> + help
> + This option enables Link Time Optimization (LTO), which allows the
> + compiler to optimize binaries globally.
> +
> + If unsure, select LTO_NONE.
> +
> +config LTO_NONE
> + bool "None"
> +
> +config LTO_CLANG
> + bool "Clang's Link Time Optimization (EXPERIMENTAL)"
> + depends on CC_IS_CLANG && CLANG_VERSION >= 110000 && LD_IS_LLD

I am curious, what is the reason for gating this at clang 11.0.0?

Presumably this? https://github.com/ClangBuiltLinux/linux/issues/510

It might be nice to notate this so that we do not have to wonder :)

Cheers,
Nathan

2020-06-25 07:47:54

by Peter Zijlstra

[permalink] [raw]
Subject: Re: [PATCH 04/22] kbuild: lto: fix recordmcount

On Wed, Jun 24, 2020 at 02:45:30PM -0700, Sami Tolvanen wrote:
> On Wed, Jun 24, 2020 at 11:27:37PM +0200, Peter Zijlstra wrote:
> > On Wed, Jun 24, 2020 at 01:31:42PM -0700, Sami Tolvanen wrote:
> > > With LTO, LLVM bitcode won't be compiled into native code until
> > > modpost_link. This change postpones calls to recordmcount until after
> > > this step.
> > >
> > > In order to exclude specific functions from inspection, we add a new
> > > code section .text..nomcount, which we tell recordmcount to ignore, and
> > > a __nomcount attribute for moving functions to this section.
> >
> > I'm confused, you only add this to functions in ftrace itself, which is
> > compiled with:
> >
> > KBUILD_CFLAGS = $(subst $(CC_FLAGS_FTRACE),,$(ORIG_CFLAGS))
> >
> > and so should not have mcount/fentry sites anyway. So what's the point
> > of ignoring them further?
> >
> > This Changelog does not explain.
>
> Normally, recordmcount ignores each ftrace.o file, but since we are
> running it on vmlinux.o, we need another way to stop it from looking
> at references to mcount/fentry that are not calls. Here's a comment
> from recordmcount.c:
>
> /*
> * The file kernel/trace/ftrace.o references the mcount
> * function but does not call it. Since ftrace.o should
> * not be traced anyway, we just skip it.
> */
>
> But I agree, the commit message could use more defails. Also +Steven
> for thoughts about this approach.

Ah, is this because recordmcount isn't smart enough to know the
difference between "CALL $mcount" and any other RELA that has mcount?

At least for x86_64 I can do a really quick take for a recordmcount pass
in objtool, but I suppose you also need this for ARM64?

2020-06-25 07:49:02

by Peter Zijlstra

[permalink] [raw]
Subject: Re: [PATCH 05/22] kbuild: lto: postpone objtool

On Wed, Jun 24, 2020 at 02:49:25PM -0700, Sami Tolvanen wrote:
> On Wed, Jun 24, 2020 at 11:19:08PM +0200, Peter Zijlstra wrote:
> > On Wed, Jun 24, 2020 at 01:31:43PM -0700, Sami Tolvanen wrote:
> > > diff --git a/include/linux/compiler.h b/include/linux/compiler.h
> > > index 30827f82ad62..12b115152532 100644
> > > --- a/include/linux/compiler.h
> > > +++ b/include/linux/compiler.h
> > > @@ -120,7 +120,7 @@ void ftrace_likely_update(struct ftrace_likely_data *f, int val,
> > > /* Annotate a C jump table to allow objtool to follow the code flow */
> > > #define __annotate_jump_table __section(.rodata..c_jump_table)
> > >
> > > -#ifdef CONFIG_DEBUG_ENTRY
> > > +#if defined(CONFIG_DEBUG_ENTRY) || defined(CONFIG_LTO_CLANG)
> > > /* Begin/end of an instrumentation safe region */
> > > #define instrumentation_begin() ({ \
> > > asm volatile("%c0:\n\t" \
> >
> > Why would you be doing noinstr validation for lto builds? That doesn't
> > make sense.
>
> This is just to avoid a ton of noinstr warnings when we run objtool on
> vmlinux.o, but I'm also fine with skipping noinstr validation with LTO.

Right, then we need to make --no-vmlinux work properly when
!DEBUG_ENTRY, which I think might be buggered due to us overriding the
argument when the objname ends with "vmlinux.o".

> > > +ifdef CONFIG_STACK_VALIDATION
> > > +ifneq ($(SKIP_STACK_VALIDATION),1)
> > > +cmd_ld_ko_o += \
> > > + $(objtree)/tools/objtool/objtool \
> > > + $(if $(CONFIG_UNWINDER_ORC),orc generate,check) \
> > > + --module \
> > > + $(if $(CONFIG_FRAME_POINTER),,--no-fp) \
> > > + $(if $(CONFIG_GCOV_KERNEL),--no-unreachable,) \
> > > + $(if $(CONFIG_RETPOLINE),--retpoline,) \
> > > + $(if $(CONFIG_X86_SMAP),--uaccess,) \
> > > + $(@:.ko=$(prelink-ext).o);
> > > +
> > > +endif # SKIP_STACK_VALIDATION
> > > +endif # CONFIG_STACK_VALIDATION
> >
> > What about the objtool invocation from link-vmlinux.sh ?
>
> What about it? The existing objtool_link invocation in link-vmlinux.sh
> works fine for our purposes as well.

Well, I was wondering why you're adding yet another objtool invocation
while we already have one.

2020-06-25 08:04:19

by Peter Zijlstra

[permalink] [raw]
Subject: Re: [PATCH 00/22] add support for Clang LTO

On Wed, Jun 24, 2020 at 02:31:36PM -0700, Nick Desaulniers wrote:
> On Wed, Jun 24, 2020 at 2:15 PM Peter Zijlstra <[email protected]> wrote:
> >
> > On Wed, Jun 24, 2020 at 01:31:38PM -0700, Sami Tolvanen wrote:
> > > This patch series adds support for building x86_64 and arm64 kernels
> > > with Clang's Link Time Optimization (LTO).
> > >
> > > In addition to performance, the primary motivation for LTO is to allow
> > > Clang's Control-Flow Integrity (CFI) to be used in the kernel. Google's
> > > Pixel devices have shipped with LTO+CFI kernels since 2018.
> > >
> > > Most of the patches are build system changes for handling LLVM bitcode,
> > > which Clang produces with LTO instead of ELF object files, postponing
> > > ELF processing until a later stage, and ensuring initcall ordering.
> > >
> > > Note that first objtool patch in the series is already in linux-next,
> > > but as it's needed with LTO, I'm including it also here to make testing
> > > easier.
> >
> > I'm very sad that yet again, memory ordering isn't addressed. LTO vastly
> > increases the range of the optimizer to wreck things.
>
> Hi Peter, could you expand on the issue for the folks on the thread?
> I'm happy to try to hack something up in LLVM if we check that X does
> or does not happen; maybe we can even come up with some concrete test
> cases that can be added to LLVM's codebase?

I'm sure Will will respond, but the basic issue is the trainwreck C11
made of dependent loads.

Anyway, here's a link to the last time this came up:

https://lore.kernel.org/linux-arm-kernel/[email protected]/

2020-06-25 09:41:10

by Peter Zijlstra

[permalink] [raw]
Subject: Re: [PATCH 00/22] add support for Clang LTO

On Thu, Jun 25, 2020 at 10:03:13AM +0200, Peter Zijlstra wrote:
> On Wed, Jun 24, 2020 at 02:31:36PM -0700, Nick Desaulniers wrote:
> > On Wed, Jun 24, 2020 at 2:15 PM Peter Zijlstra <[email protected]> wrote:
> > >
> > > On Wed, Jun 24, 2020 at 01:31:38PM -0700, Sami Tolvanen wrote:
> > > > This patch series adds support for building x86_64 and arm64 kernels
> > > > with Clang's Link Time Optimization (LTO).
> > > >
> > > > In addition to performance, the primary motivation for LTO is to allow
> > > > Clang's Control-Flow Integrity (CFI) to be used in the kernel. Google's
> > > > Pixel devices have shipped with LTO+CFI kernels since 2018.
> > > >
> > > > Most of the patches are build system changes for handling LLVM bitcode,
> > > > which Clang produces with LTO instead of ELF object files, postponing
> > > > ELF processing until a later stage, and ensuring initcall ordering.
> > > >
> > > > Note that first objtool patch in the series is already in linux-next,
> > > > but as it's needed with LTO, I'm including it also here to make testing
> > > > easier.
> > >
> > > I'm very sad that yet again, memory ordering isn't addressed. LTO vastly
> > > increases the range of the optimizer to wreck things.
> >
> > Hi Peter, could you expand on the issue for the folks on the thread?
> > I'm happy to try to hack something up in LLVM if we check that X does
> > or does not happen; maybe we can even come up with some concrete test
> > cases that can be added to LLVM's codebase?
>
> I'm sure Will will respond, but the basic issue is the trainwreck C11
> made of dependent loads.
>
> Anyway, here's a link to the last time this came up:
>
> https://lore.kernel.org/linux-arm-kernel/[email protected]/

Another good read:

https://lore.kernel.org/lkml/[email protected]/

and having (partially) re-read that, I now worry intensely about things
like latch_tree_find(), cyc2ns_read_begin, __ktime_get_fast_ns().

It looks like kernel/time/sched_clock.c uses raw_read_seqcount() which
deviates from the above patterns by, for some reason, using a primitive
that includes an extra smp_rmb().

And this is just the few things I could remember off the top of my head,
who knows what else is out there.

2020-06-25 09:41:57

by Will Deacon

[permalink] [raw]
Subject: Re: [PATCH 00/22] add support for Clang LTO

On Wed, Jun 24, 2020 at 02:30:14PM -0700, Sami Tolvanen wrote:
> On Wed, Jun 24, 2020 at 11:15:40PM +0200, Peter Zijlstra wrote:
> > On Wed, Jun 24, 2020 at 01:31:38PM -0700, Sami Tolvanen wrote:
> > > This patch series adds support for building x86_64 and arm64 kernels
> > > with Clang's Link Time Optimization (LTO).
> > >
> > > In addition to performance, the primary motivation for LTO is to allow
> > > Clang's Control-Flow Integrity (CFI) to be used in the kernel. Google's
> > > Pixel devices have shipped with LTO+CFI kernels since 2018.
> > >
> > > Most of the patches are build system changes for handling LLVM bitcode,
> > > which Clang produces with LTO instead of ELF object files, postponing
> > > ELF processing until a later stage, and ensuring initcall ordering.
> > >
> > > Note that first objtool patch in the series is already in linux-next,
> > > but as it's needed with LTO, I'm including it also here to make testing
> > > easier.
> >
> > I'm very sad that yet again, memory ordering isn't addressed. LTO vastly
> > increases the range of the optimizer to wreck things.
>
> I believe Will has some thoughts about this, and patches, but I'll let
> him talk about it.

Thanks for reminding me! I will get patches out ASAP (I've been avoiding the
rebase from hell, but this is the motivation I need).

Will

2020-06-25 09:49:01

by Peter Zijlstra

[permalink] [raw]
Subject: Re: [PATCH 00/22] add support for Clang LTO

On Thu, Jun 25, 2020 at 10:24:33AM +0200, Peter Zijlstra wrote:
> On Thu, Jun 25, 2020 at 10:03:13AM +0200, Peter Zijlstra wrote:

> > I'm sure Will will respond, but the basic issue is the trainwreck C11
> > made of dependent loads.
> >
> > Anyway, here's a link to the last time this came up:
> >
> > https://lore.kernel.org/linux-arm-kernel/[email protected]/
>
> Another good read:
>
> https://lore.kernel.org/lkml/[email protected]/
>
> and having (partially) re-read that, I now worry intensely about things
> like latch_tree_find(), cyc2ns_read_begin, __ktime_get_fast_ns().
>
> It looks like kernel/time/sched_clock.c uses raw_read_seqcount() which
> deviates from the above patterns by, for some reason, using a primitive
> that includes an extra smp_rmb().
>
> And this is just the few things I could remember off the top of my head,
> who knows what else is out there.

As an example, let us consider __ktime_get_fast_ns(); the critical bit
is:

seq = raw_read_seqcount_latch(&tkf->seq);
tkr = tkf->base + (seq & 0x01);
now = tkr->base;

And we hard rely on that being a dependent load, so:

LOAD seq, (tkf->seq)
LOAD tkr, tkf->base
AND seq, 1
MUL seq, sizeof(tk_read_base)
ADD tkr, seq
LOAD now, (tkr->base)

Such that we obtain 'now' as a direct dependency on 'seq'. This ensures
the loads are ordered.

A compiler can wreck this by translating it into something like:

LOAD seq, (tkf->seq)
LOAD tkr, tkf->base
AND seq, 1
CMP seq, 0
JE 1f
ADD tkr, sizeof(tk_read_base)
1:
LOAD now, (tkr->base)

Because the machine can then speculate and load 'now' before 'seq',
breaking the ordering.
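
Rendered back into C, a simplified sketch of that transformation
(struct fields trimmed; raw_read_seqcount_latch() as in the kernel):

struct tk_read_base { u64 base; /* ... */ };
struct tk_fast { seqcount_t seq; struct tk_read_base base[2]; };

static u64 read_fast(struct tk_fast *tkf)
{
	unsigned int seq = raw_read_seqcount_latch(&tkf->seq);
	struct tk_read_base *tkr;

	/* as written: the address dependency orders the final load */
	tkr = tkf->base + (seq & 0x01);
	return tkr->base;

	/*
	 * what the compiler is allowed to emit instead:
	 *
	 *	tkr = tkf->base;
	 *	if (seq & 0x01)
	 *		tkr++;
	 *	return tkr->base;  // load may be speculated before seq
	 *
	 * the address dependency has become a control dependency.
	 */
}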

2020-06-25 18:15:05

by Sami Tolvanen

[permalink] [raw]
Subject: Re: [PATCH 02/22] kbuild: add support for Clang LTO

On Wed, Jun 24, 2020 at 07:26:47PM -0700, Nathan Chancellor wrote:
> Hi Sami,
>
> On Wed, Jun 24, 2020 at 01:31:40PM -0700, 'Sami Tolvanen' via Clang Built Linux wrote:
> > This change adds build system support for Clang's Link Time
> > Optimization (LTO). With -flto, instead of ELF object files, Clang
> > produces LLVM bitcode, which is compiled into native code at link
> > time, allowing the final binary to be optimized globally. For more
> > details, see:
> >
> > https://llvm.org/docs/LinkTimeOptimization.html
> >
> > The Kconfig option CONFIG_LTO_CLANG is implemented as a choice,
> > which defaults to LTO being disabled. To use LTO, the architecture
> > must select ARCH_SUPPORTS_LTO_CLANG and support:
> >
> > - compiling with Clang,
> > - compiling inline assembly with Clang's integrated assembler,
> > - and linking with LLD.
> >
> > While using full LTO results in the best runtime performance, the
> > compilation is not scalable in time or memory. CONFIG_THINLTO
> > enables ThinLTO, which allows parallel optimization and faster
> > incremental builds. ThinLTO is used by default if the architecture
> > also selects ARCH_SUPPORTS_THINLTO:
> >
> > https://clang.llvm.org/docs/ThinLTO.html
> >
> > To enable LTO, LLVM tools must be used to handle bitcode files. The
> > easiest way is to pass the LLVM=1 option to make:
> >
> > $ make LLVM=1 defconfig
> > $ scripts/config -e LTO_CLANG
> > $ make LLVM=1
> >
> > Alternatively, at least the following LLVM tools must be used:
> >
> > CC=clang LD=ld.lld AR=llvm-ar NM=llvm-nm
> >
> > To prepare for LTO support with other compilers, common parts are
> > gated behind the CONFIG_LTO option, and LTO can be disabled for
> > specific files by filtering out CC_FLAGS_LTO.
> >
> > Note that support for DYNAMIC_FTRACE and MODVERSIONS are added in
> > follow-up patches.
> >
> > Signed-off-by: Sami Tolvanen <[email protected]>
> > ---
> > Makefile | 16 ++++++++
> > arch/Kconfig | 66 +++++++++++++++++++++++++++++++
> > include/asm-generic/vmlinux.lds.h | 11 ++++--
> > scripts/Makefile.build | 9 ++++-
> > scripts/Makefile.modfinal | 9 ++++-
> > scripts/Makefile.modpost | 24 ++++++++++-
> > scripts/link-vmlinux.sh | 32 +++++++++++----
> > 7 files changed, 151 insertions(+), 16 deletions(-)
> >
> > diff --git a/Makefile b/Makefile
> > index ac2c61c37a73..0c7fe6fb2143 100644
> > --- a/Makefile
> > +++ b/Makefile
> > @@ -886,6 +886,22 @@ KBUILD_CFLAGS += $(CC_FLAGS_SCS)
> > export CC_FLAGS_SCS
> > endif
> >
> > +ifdef CONFIG_LTO_CLANG
> > +ifdef CONFIG_THINLTO
> > +CC_FLAGS_LTO_CLANG := -flto=thin $(call cc-option, -fsplit-lto-unit)
> > +KBUILD_LDFLAGS += --thinlto-cache-dir=.thinlto-cache
> > +else
> > +CC_FLAGS_LTO_CLANG := -flto
> > +endif
> > +CC_FLAGS_LTO_CLANG += -fvisibility=default
> > +endif
> > +
> > +ifdef CONFIG_LTO
> > +CC_FLAGS_LTO := $(CC_FLAGS_LTO_CLANG)
> > +KBUILD_CFLAGS += $(CC_FLAGS_LTO)
> > +export CC_FLAGS_LTO
> > +endif
> > +
> > # arch Makefile may override CC so keep this after arch Makefile is included
> > NOSTDINC_FLAGS += -nostdinc -isystem $(shell $(CC) -print-file-name=include)
> >
> > diff --git a/arch/Kconfig b/arch/Kconfig
> > index 8cc35dc556c7..e00b122293f8 100644
> > --- a/arch/Kconfig
> > +++ b/arch/Kconfig
[...]
> > +config LTO_CLANG
> > + bool "Clang's Link Time Optimization (EXPERIMENTAL)"
> > + depends on CC_IS_CLANG && CLANG_VERSION >= 110000 && LD_IS_LLD
>
> I am curious, what is the reason for gating this at clang 11.0.0?
>
> Presumably this? https://github.com/ClangBuiltLinux/linux/issues/510
>
> It might be nice to notate this so that we do not have to wonder :)

Yes, that's the reason. I'll add a note about it. Thanks!

Sami

2020-06-25 18:15:11

by Sami Tolvanen

[permalink] [raw]
Subject: Re: [PATCH 04/22] kbuild: lto: fix recordmcount

On Thu, Jun 25, 2020 at 09:45:30AM +0200, Peter Zijlstra wrote:
> On Wed, Jun 24, 2020 at 02:45:30PM -0700, Sami Tolvanen wrote:
> > On Wed, Jun 24, 2020 at 11:27:37PM +0200, Peter Zijlstra wrote:
> > > On Wed, Jun 24, 2020 at 01:31:42PM -0700, Sami Tolvanen wrote:
> > > > With LTO, LLVM bitcode won't be compiled into native code until
> > > > modpost_link. This change postpones calls to recordmcount until after
> > > > this step.
> > > >
> > > > In order to exclude specific functions from inspection, we add a new
> > > > code section .text..nomcount, which we tell recordmcount to ignore, and
> > > > a __nomcount attribute for moving functions to this section.
> > >
> > > I'm confused, you only add this to functions in ftrace itself, which is
> > > compiled with:
> > >
> > > KBUILD_CFLAGS = $(subst $(CC_FLAGS_FTRACE),,$(ORIG_CFLAGS))
> > >
> > > and so should not have mcount/fentry sites anyway. So what's the point
> > > of ignoring them further?
> > >
> > > This Changelog does not explain.
> >
> > Normally, recordmcount ignores each ftrace.o file, but since we are
> > running it on vmlinux.o, we need another way to stop it from looking
> > at references to mcount/fentry that are not calls. Here's a comment
> > from recordmcount.c:
> >
> > /*
> > * The file kernel/trace/ftrace.o references the mcount
> > * function but does not call it. Since ftrace.o should
> > * not be traced anyway, we just skip it.
> > */
> >
> > But I agree, the commit message could use more details. Also +Steven
> > for thoughts about this approach.
>
> Ah, is this because recordmcount isn't smart enough to know the
> difference between "CALL $mcount" and any other RELA that has mcount?

Yes.

> At least for x86_64 I can do a really quick take for a recordmcount pass
> in objtool, but I suppose you also need this for ARM64 ?

Sure, sounds good. arm64 uses -fpatchable-function-entry with clang, so we
don't need recordmcount there.

Sami
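
As an aside, a sketch of what -fpatchable-function-entry buys here (the
equivalent per-function attribute; illustrative only):

/* The compiler emits NOPs at each function entry and records their
 * locations in the __patchable_function_entries ELF section, so ftrace
 * can patch call sites without a relocation scan like recordmcount. */
__attribute__((patchable_function_entry(2)))
void traceable_function(void)
{
	/* two NOPs sit at the entry; ftrace rewrites them at runtime */
}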

2020-06-25 18:16:23

by Sami Tolvanen

[permalink] [raw]
Subject: Re: [PATCH 05/22] kbuild: lto: postpone objtool

On Thu, Jun 25, 2020 at 09:47:16AM +0200, Peter Zijlstra wrote:
> On Wed, Jun 24, 2020 at 02:49:25PM -0700, Sami Tolvanen wrote:
> > On Wed, Jun 24, 2020 at 11:19:08PM +0200, Peter Zijlstra wrote:
> > > On Wed, Jun 24, 2020 at 01:31:43PM -0700, Sami Tolvanen wrote:
> > > > diff --git a/include/linux/compiler.h b/include/linux/compiler.h
> > > > index 30827f82ad62..12b115152532 100644
> > > > --- a/include/linux/compiler.h
> > > > +++ b/include/linux/compiler.h
> > > > @@ -120,7 +120,7 @@ void ftrace_likely_update(struct ftrace_likely_data *f, int val,
> > > > /* Annotate a C jump table to allow objtool to follow the code flow */
> > > > #define __annotate_jump_table __section(.rodata..c_jump_table)
> > > >
> > > > -#ifdef CONFIG_DEBUG_ENTRY
> > > > +#if defined(CONFIG_DEBUG_ENTRY) || defined(CONFIG_LTO_CLANG)
> > > > /* Begin/end of an instrumentation safe region */
> > > > #define instrumentation_begin() ({ \
> > > > asm volatile("%c0:\n\t" \
> > >
> > > Why would you be doing noinstr validation for lto builds? That doesn't
> > > make sense.
> >
> > This is just to avoid a ton of noinstr warnings when we run objtool on
> > vmlinux.o, but I'm also fine with skipping noinstr validation with LTO.
>
> Right, then we need to make --no-vmlinux work properly when
> !DEBUG_ENTRY, which I think might be buggered due to us overriding the
> argument when the objname ends with "vmlinux.o".

Right. Can we just remove that and pass --vmlinux to objtool in
link-vmlinux.sh, or is the override necessary somewhere else?

> > > > +ifdef CONFIG_STACK_VALIDATION
> > > > +ifneq ($(SKIP_STACK_VALIDATION),1)
> > > > +cmd_ld_ko_o += \
> > > > + $(objtree)/tools/objtool/objtool \
> > > > + $(if $(CONFIG_UNWINDER_ORC),orc generate,check) \
> > > > + --module \
> > > > + $(if $(CONFIG_FRAME_POINTER),,--no-fp) \
> > > > + $(if $(CONFIG_GCOV_KERNEL),--no-unreachable,) \
> > > > + $(if $(CONFIG_RETPOLINE),--retpoline,) \
> > > > + $(if $(CONFIG_X86_SMAP),--uaccess,) \
> > > > + $(@:.ko=$(prelink-ext).o);
> > > > +
> > > > +endif # SKIP_STACK_VALIDATION
> > > > +endif # CONFIG_STACK_VALIDATION
> > >
> > > What about the objtool invocation from link-vmlinux.sh ?
> >
> > What about it? The existing objtool_link invocation in link-vmlinux.sh
> > works fine for our purposes as well.
>
> Well, I was wondering why you're adding yet another objtool invocation
> while we already have one.

Because we can't run objtool until we have compiled bitcode to native
code, for modules we need another invocation after everything has
been compiled.

Sami

2020-06-25 19:05:31

by Peter Zijlstra

[permalink] [raw]
Subject: Re: [PATCH 05/22] kbuild: lto: postpone objtool

On Thu, Jun 25, 2020 at 09:22:26AM -0700, Sami Tolvanen wrote:
> On Thu, Jun 25, 2020 at 09:47:16AM +0200, Peter Zijlstra wrote:

> > Right, then we need to make --no-vmlinux work properly when
> > !DEBUG_ENTRY, which I think might be buggered due to us overriding the
> > argument when the objname ends with "vmlinux.o".
>
> Right. Can we just remove that and pass --vmlinux to objtool in
> link-vmlinux.sh, or is the override necessary somewhere else?

Think we can remove it; it was just convenient when running manually.

> > > > > +ifdef CONFIG_STACK_VALIDATION
> > > > > +ifneq ($(SKIP_STACK_VALIDATION),1)
> > > > > +cmd_ld_ko_o += \
> > > > > + $(objtree)/tools/objtool/objtool \
> > > > > + $(if $(CONFIG_UNWINDER_ORC),orc generate,check) \
> > > > > + --module \
> > > > > + $(if $(CONFIG_FRAME_POINTER),,--no-fp) \
> > > > > + $(if $(CONFIG_GCOV_KERNEL),--no-unreachable,) \
> > > > > + $(if $(CONFIG_RETPOLINE),--retpoline,) \
> > > > > + $(if $(CONFIG_X86_SMAP),--uaccess,) \
> > > > > + $(@:.ko=$(prelink-ext).o);
> > > > > +
> > > > > +endif # SKIP_STACK_VALIDATION
> > > > > +endif # CONFIG_STACK_VALIDATION
> > > >
> > > > What about the objtool invocation from link-vmlinux.sh ?
> > >
> > > What about it? The existing objtool_link invocation in link-vmlinux.sh
> > > works fine for our purposes as well.
> >
> > Well, I was wondering why you're adding yet another objtool invocation
> > while we already have one.
>
> Because we can't run objtool until we have compiled bitcode to native
> code, for modules we need another invocation after everything has
> been compiled.

Well, that I understand, my question was why we need one in
scripts/link-vmlinux.sh and an additional one. I think we're just
talking past one another and agree we only need one.

2020-06-25 19:33:17

by Sami Tolvanen

[permalink] [raw]
Subject: Re: [PATCH 05/22] kbuild: lto: postpone objtool

On Thu, Jun 25, 2020 at 08:33:51PM +0200, Peter Zijlstra wrote:
> On Thu, Jun 25, 2020 at 09:22:26AM -0700, Sami Tolvanen wrote:
> > On Thu, Jun 25, 2020 at 09:47:16AM +0200, Peter Zijlstra wrote:
>
> > > Right, then we need to make --no-vmlinux work properly when
> > > !DEBUG_ENTRY, which I think might be buggered due to us overriding the
> > > argument when the objname ends with "vmlinux.o".
> >
> > Right. Can we just remove that and pass --vmlinux to objtool in
> > link-vmlinux.sh, or is the override necessary somewhere else?
>
> Think we can remove it; it was just convenient when running manually.

Great, I'll change this in v2.

> > > > > > +ifdef CONFIG_STACK_VALIDATION
> > > > > > +ifneq ($(SKIP_STACK_VALIDATION),1)
> > > > > > +cmd_ld_ko_o += \
> > > > > > + $(objtree)/tools/objtool/objtool \
> > > > > > + $(if $(CONFIG_UNWINDER_ORC),orc generate,check) \
> > > > > > + --module \
> > > > > > + $(if $(CONFIG_FRAME_POINTER),,--no-fp) \
> > > > > > + $(if $(CONFIG_GCOV_KERNEL),--no-unreachable,) \
> > > > > > + $(if $(CONFIG_RETPOLINE),--retpoline,) \
> > > > > > + $(if $(CONFIG_X86_SMAP),--uaccess,) \
> > > > > > + $(@:.ko=$(prelink-ext).o);
> > > > > > +
> > > > > > +endif # SKIP_STACK_VALIDATION
> > > > > > +endif # CONFIG_STACK_VALIDATION
> > > > >
> > > > > What about the objtool invocation from link-vmlinux.sh ?
> > > >
> > > > What about it? The existing objtool_link invocation in link-vmlinux.sh
> > > > works fine for our purposes as well.
> > >
> > > Well, I was wondering why you're adding yet another objtool invocation
> > > while we already have one.
> >
> > Because we can't run objtool until we have compiled bitcode to native
> > code, for modules we need another invocation after everything has
> > been compiled.
>
> Well, that I understand, my question was why we need one in
> scripts/link-vmlinux.sh and an additional one. I think we're just
> talking past one another and agree we only need one.

We need just one for vmlinux.o, but this rule adds an objtool invocation
for kernel modules, which we also couldn't check earlier. We link all
the bitcode for modules into <module>.lto.o and run modpost and objtool
on that before building the final .ko.

Sami

2020-06-25 20:06:31

by Peter Zijlstra

[permalink] [raw]
Subject: [RFC][PATCH] objtool,x86_64: Replace recordmcount with objtool

On Thu, Jun 25, 2020 at 09:15:03AM -0700, Sami Tolvanen wrote:
> On Thu, Jun 25, 2020 at 09:45:30AM +0200, Peter Zijlstra wrote:

> > At least for x86_64 I can do a really quick take for a recordmcount pass
> > in objtool, but I suppose you also need this for ARM64 ?
>
> Sure, sounds good. arm64 uses -fpatchable-function-entry with clang, so we
> don't need recordmcount there.

This is on top of my local pile:

git://git.kernel.org/pub/scm/linux/kernel/git/peterz/queue.git master

which notably includes the static_call series.

Not boot tested, but it generates the required sections and they look
more or less as expected, ymmv.

---
arch/x86/Kconfig | 1 -
scripts/Makefile.build | 3 ++
scripts/link-vmlinux.sh | 2 +-
tools/objtool/builtin-check.c | 9 ++---
tools/objtool/builtin.h | 2 +-
tools/objtool/check.c | 81 +++++++++++++++++++++++++++++++++++++++++++
tools/objtool/check.h | 1 +
tools/objtool/objtool.h | 1 +
8 files changed, 91 insertions(+), 9 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index a291823f3f26..189575c12434 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -174,7 +174,6 @@ config X86
select HAVE_EXIT_THREAD
select HAVE_FAST_GUP
select HAVE_FENTRY if X86_64 || DYNAMIC_FTRACE
- select HAVE_FTRACE_MCOUNT_RECORD
select HAVE_FUNCTION_GRAPH_TRACER
select HAVE_FUNCTION_TRACER
select HAVE_GCC_PLUGINS
diff --git a/scripts/Makefile.build b/scripts/Makefile.build
index 2e8810b7e5ed..c774befc57da 100644
--- a/scripts/Makefile.build
+++ b/scripts/Makefile.build
@@ -227,6 +227,9 @@ endif
ifdef CONFIG_X86_SMAP
objtool_args += --uaccess
endif
+ifdef CONFIG_DYNAMIC_FTRACE
+ objtool_args += --mcount
+endif

# 'OBJECT_FILES_NON_STANDARD := y': skip objtool checking for a directory
# 'OBJECT_FILES_NON_STANDARD_foo.o := 'y': skip objtool checking for a file
diff --git a/scripts/link-vmlinux.sh b/scripts/link-vmlinux.sh
index 92dd745906f4..00c6e4f28a1a 100755
--- a/scripts/link-vmlinux.sh
+++ b/scripts/link-vmlinux.sh
@@ -60,7 +60,7 @@ objtool_link()
local objtoolopt;

if [ -n "${CONFIG_VMLINUX_VALIDATION}" ]; then
- objtoolopt="check"
+ objtoolopt="check --vmlinux"
if [ -z "${CONFIG_FRAME_POINTER}" ]; then
objtoolopt="${objtoolopt} --no-fp"
fi
diff --git a/tools/objtool/builtin-check.c b/tools/objtool/builtin-check.c
index 4896c5a89702..a6c3a3fba67d 100644
--- a/tools/objtool/builtin-check.c
+++ b/tools/objtool/builtin-check.c
@@ -18,7 +18,7 @@
#include "builtin.h"
#include "objtool.h"

-bool no_fp, no_unreachable, retpoline, module, backtrace, uaccess, stats, validate_dup, vmlinux, fpu;
+bool no_fp, no_unreachable, retpoline, module, backtrace, uaccess, stats, validate_dup, vmlinux, fpu, mcount;

static const char * const check_usage[] = {
"objtool check [<options>] file.o",
@@ -36,12 +36,13 @@ const struct option check_options[] = {
OPT_BOOLEAN('d', "duplicate", &validate_dup, "duplicate validation for vmlinux.o"),
OPT_BOOLEAN('l', "vmlinux", &vmlinux, "vmlinux.o validation"),
OPT_BOOLEAN('F', "fpu", &fpu, "validate FPU context"),
+ OPT_BOOLEAN('M', "mcount", &mcount, "generate __mcount_loc"),
OPT_END(),
};

int cmd_check(int argc, const char **argv)
{
- const char *objname, *s;
+ const char *objname;

argc = parse_options(argc, argv, check_options, check_usage, 0);

@@ -50,9 +51,5 @@ int cmd_check(int argc, const char **argv)

objname = argv[0];

- s = strstr(objname, "vmlinux.o");
- if (s && !s[9])
- vmlinux = true;
-
return check(objname, false);
}
diff --git a/tools/objtool/builtin.h b/tools/objtool/builtin.h
index 7158e09d4cc9..b51d883ec245 100644
--- a/tools/objtool/builtin.h
+++ b/tools/objtool/builtin.h
@@ -8,7 +8,7 @@
#include <subcmd/parse-options.h>

extern const struct option check_options[];
-extern bool no_fp, no_unreachable, retpoline, module, backtrace, uaccess, stats, validate_dup, vmlinux, fpu;
+extern bool no_fp, no_unreachable, retpoline, module, backtrace, uaccess, stats, validate_dup, vmlinux, fpu, mcount;

extern int cmd_check(int argc, const char **argv);
extern int cmd_orc(int argc, const char **argv);
diff --git a/tools/objtool/check.c b/tools/objtool/check.c
index 6647a8d1545b..ee99566bdae9 100644
--- a/tools/objtool/check.c
+++ b/tools/objtool/check.c
@@ -533,6 +533,65 @@ static int create_static_call_sections(struct objtool_file *file)
return 0;
}

+static int create_mcount_loc_sections(struct objtool_file *file)
+{
+ struct section *sec, *reloc_sec;
+ struct reloc *reloc;
+ unsigned long *loc;
+ struct instruction *insn;
+ int idx;
+
+ sec = find_section_by_name(file->elf, "__mcount_loc");
+ if (sec) {
+ INIT_LIST_HEAD(&file->mcount_loc_list);
+ WARN("file already has __mcount_loc section, skipping");
+ return 0;
+ }
+
+ if (list_empty(&file->mcount_loc_list))
+ return 0;
+
+ idx = 0;
+ list_for_each_entry(insn, &file->mcount_loc_list, mcount_loc_node)
+ idx++;
+
+ sec = elf_create_section(file->elf, "__mcount_loc", 0, sizeof(unsigned long), idx);
+ if (!sec)
+ return -1;
+
+ reloc_sec = elf_create_reloc_section(file->elf, sec, SHT_RELA);
+ if (!reloc_sec)
+ return -1;
+
+ idx = 0;
+ list_for_each_entry(insn, &file->mcount_loc_list, mcount_loc_node) {
+
+ loc = (unsigned long *)sec->data->d_buf + idx;
+ memset(loc, 0, sizeof(unsigned long));
+
+ reloc = malloc(sizeof(*reloc));
+ if (!reloc) {
+ perror("malloc");
+ return -1;
+ }
+ memset(reloc, 0, sizeof(*reloc));
+
+ reloc->sym = insn->sec->sym;
+ reloc->addend = insn->offset;
+ reloc->type = R_X86_64_64;
+ reloc->offset = idx * sizeof(unsigned long);
+ reloc->sec = reloc_sec;
+ elf_add_reloc(file->elf, reloc);
+
+ idx++;
+ }
+
+ if (elf_rebuild_reloc_section(file->elf, reloc_sec))
+ return -1;
+
+ return 0;
+}
+
/*
* Warnings shouldn't be reported for ignored functions.
*/
@@ -892,6 +951,22 @@ static int add_call_destinations(struct objtool_file *file)
insn->type = INSN_NOP;
}

+ if (mcount && !strcmp(insn->call_dest->name, "__fentry__")) {
+ if (reloc) {
+ reloc->type = R_NONE;
+ elf_write_reloc(file->elf, reloc);
+ }
+
+ elf_write_insn(file->elf, insn->sec,
+ insn->offset, insn->len,
+ arch_nop_insn(insn->len));
+
+ insn->type = INSN_NOP;
+
+ list_add_tail(&insn->mcount_loc_node,
+ &file->mcount_loc_list);
+ }
+
/*
* Whatever stack impact regular CALLs have, should be undone
* by the RETURN of the called function.
@@ -3004,6 +3079,7 @@ int check(const char *_objname, bool orc)
INIT_LIST_HEAD(&file.insn_list);
hash_init(file.insn_hash);
INIT_LIST_HEAD(&file.static_call_list);
+ INIT_LIST_HEAD(&file.mcount_loc_list);
file.c_file = !vmlinux && find_section_by_name(file.elf, ".comment");
file.ignore_unreachables = no_unreachable;
file.hints = false;
@@ -3056,6 +3132,11 @@ int check(const char *_objname, bool orc)
goto out;
warnings += ret;

+ ret = create_mcount_loc_sections(&file);
+ if (ret < 0)
+ goto out;
+ warnings += ret;
+
if (orc) {
ret = create_orc(&file);
if (ret < 0)
diff --git a/tools/objtool/check.h b/tools/objtool/check.h
index cd95fca0d237..01f11b5da5dd 100644
--- a/tools/objtool/check.h
+++ b/tools/objtool/check.h
@@ -24,6 +24,7 @@ struct instruction {
struct list_head list;
struct hlist_node hash;
struct list_head static_call_node;
+ struct list_head mcount_loc_node;
struct section *sec;
unsigned long offset;
unsigned int len;
diff --git a/tools/objtool/objtool.h b/tools/objtool/objtool.h
index 9a7cd0b88bd8..f604b22d22cc 100644
--- a/tools/objtool/objtool.h
+++ b/tools/objtool/objtool.h
@@ -17,6 +17,7 @@ struct objtool_file {
struct list_head insn_list;
DECLARE_HASHTABLE(insn_hash, 20);
struct list_head static_call_list;
+ struct list_head mcount_loc_list;
bool ignore_unreachables, c_file, hints, rodata;
};

2020-06-25 23:35:36

by Nick Desaulniers

[permalink] [raw]
Subject: Re: [RFC][PATCH] objtool,x86_64: Replace recordmcount with objtool

On Thu, Jun 25, 2020 at 1:02 PM Peter Zijlstra <[email protected]> wrote:
>
> On Thu, Jun 25, 2020 at 09:15:03AM -0700, Sami Tolvanen wrote:
> > On Thu, Jun 25, 2020 at 09:45:30AM +0200, Peter Zijlstra wrote:
>
> > > At least for x86_64 I can do a really quick take for a recordmcount pass
> > > in objtool, but I suppose you also need this for ARM64 ?
> >
> > Sure, sounds good. arm64 uses -fpatchable-function-entry with clang, so we
> > don't need recordmcount there.
>
> This is on top of my local pile:
>
> git://git.kernel.org/pub/scm/linux/kernel/git/peterz/queue.git master
>
> which notably includes the static_call series.
>
> Not boot tested, but it generates the required sections and they look
> more or less as expected, ymmv.
>
[...]
> diff --git a/tools/objtool/check.c b/tools/objtool/check.c
> index 6647a8d1545b..ee99566bdae9 100644
> --- a/tools/objtool/check.c
> +++ b/tools/objtool/check.c
> @@ -533,6 +533,65 @@ static int create_static_call_sections(struct objtool_file *file)
> return 0;
> }
>
> +static int create_mcount_loc_sections(struct objtool_file *file)
> +{
> + struct section *sec, *reloc_sec;
> + struct reloc *reloc;
> + unsigned long *loc;
> + struct instruction *insn;
> + int idx;
> +
> + sec = find_section_by_name(file->elf, "__mcount_loc");
> + if (sec) {
> + INIT_LIST_HEAD(&file->mcount_loc_list);
> + WARN("file already has __mcount_loc section, skipping");
> + return 0;
> + }
> +
> + if (list_empty(&file->mcount_loc_list))
> + return 0;
> +
> + idx = 0;
> + list_for_each_entry(insn, &file->mcount_loc_list, mcount_loc_node)
> + idx++;
> +
> + sec = elf_create_section(file->elf, "__mcount_loc", 0, sizeof(unsigned long), idx);
> + if (!sec)
> + return -1;
> +
> + reloc_sec = elf_create_reloc_section(file->elf, sec, SHT_RELA);
> + if (!reloc_sec)
> + return -1;
> +
> + idx = 0;
> + list_for_each_entry(insn, &file->mcount_loc_list, mcount_loc_node) {
> +
> + loc = (unsigned long *)sec->data->d_buf + idx;
> + memset(loc, 0, sizeof(unsigned long));
> +
> + reloc = malloc(sizeof(*reloc));
> + if (!reloc) {
> + perror("malloc");
> + return -1;
> + }
> + memset(reloc, 0, sizeof(*reloc));

calloc(1, sizeof(*reloc))?

[...]


--
Thanks,
~Nick Desaulniers

2020-06-25 23:50:40

by Sami Tolvanen

[permalink] [raw]
Subject: Re: [RFC][PATCH] objtool,x86_64: Replace recordmcount with objtool

On Thu, Jun 25, 2020 at 10:02:35PM +0200, Peter Zijlstra wrote:
> On Thu, Jun 25, 2020 at 09:15:03AM -0700, Sami Tolvanen wrote:
> > On Thu, Jun 25, 2020 at 09:45:30AM +0200, Peter Zijlstra wrote:
>
> > > At least for x86_64 I can do a really quick take for a recordmcount pass
> > > in objtool, but I suppose you also need this for ARM64 ?
> >
> > Sure, sounds good. arm64 uses -fpatchable-function-entry with clang, so we
> > don't need recordmcount there.
>
> This is on top of my local pile:
>
> git://git.kernel.org/pub/scm/linux/kernel/git/peterz/queue.git master
>
> which notably includes the static_call series.
>
> Not boot tested, but it generates the required sections and they look
> more or less as expected, ymmv.
>
> ---
> arch/x86/Kconfig | 1 -
> scripts/Makefile.build | 3 ++
> scripts/link-vmlinux.sh | 2 +-
> tools/objtool/builtin-check.c | 9 ++---
> tools/objtool/builtin.h | 2 +-
> tools/objtool/check.c | 81 +++++++++++++++++++++++++++++++++++++++++++
> tools/objtool/check.h | 1 +
> tools/objtool/objtool.h | 1 +
> 8 files changed, 91 insertions(+), 9 deletions(-)
>
> diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
> index a291823f3f26..189575c12434 100644
> --- a/arch/x86/Kconfig
> +++ b/arch/x86/Kconfig
> @@ -174,7 +174,6 @@ config X86
> select HAVE_EXIT_THREAD
> select HAVE_FAST_GUP
> select HAVE_FENTRY if X86_64 || DYNAMIC_FTRACE
> - select HAVE_FTRACE_MCOUNT_RECORD
> select HAVE_FUNCTION_GRAPH_TRACER
> select HAVE_FUNCTION_TRACER
> select HAVE_GCC_PLUGINS

This breaks DYNAMIC_FTRACE according to kernel/trace/ftrace.c:

#ifndef CONFIG_FTRACE_MCOUNT_RECORD
# error Dynamic ftrace depends on MCOUNT_RECORD
#endif

And the build errors after that seem to confirm this. It looks like we might
need another flag to skip recordmcount.

Anyway, since objtool is run before recordmcount, I just left this unchanged
for testing and ignored the recordmcount warnings about __mcount_loc already
existing. Something is still a bit off, though; I see this at boot:

------------[ ftrace bug ]------------
ftrace failed to modify
[<ffffffff81000660>] __tracepoint_iter_initcall_level+0x0/0x40
actual: 0f:1f:44:00:00
Initializing ftrace call sites
ftrace record flags: 0
(0)
expected tramp: ffffffff81056500
------------[ cut here ]------------

Otherwise, this looks pretty good.

Sami

2020-06-26 12:23:32

by Peter Zijlstra

[permalink] [raw]
Subject: Re: [RFC][PATCH] objtool,x86_64: Replace recordmcount with objtool

On Thu, Jun 25, 2020 at 03:40:42PM -0700, Sami Tolvanen wrote:

> > Not boot tested, but it generates the required sections and they look
> > more or less as expected, ymmv.

> > diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
> > index a291823f3f26..189575c12434 100644
> > --- a/arch/x86/Kconfig
> > +++ b/arch/x86/Kconfig
> > @@ -174,7 +174,6 @@ config X86
> > select HAVE_EXIT_THREAD
> > select HAVE_FAST_GUP
> > select HAVE_FENTRY if X86_64 || DYNAMIC_FTRACE
> > - select HAVE_FTRACE_MCOUNT_RECORD
> > select HAVE_FUNCTION_GRAPH_TRACER
> > select HAVE_FUNCTION_TRACER
> > select HAVE_GCC_PLUGINS
>
> This breaks DYNAMIC_FTRACE according to kernel/trace/ftrace.c:
>
> #ifndef CONFIG_FTRACE_MCOUNT_RECORD
> # error Dynamic ftrace depends on MCOUNT_RECORD
> #endif
>
> And the build errors after that seem to confirm this. It looks like we might
> need another flag to skip recordmcount.

Hurm, Steve, how do you want to do that?

> Anyway, since objtool is run before recordmcount, I just left this unchanged
> for testing and ignored the recordmcount warnings about __mcount_loc already
> existing. Something is still a bit off, though; I see this at boot:
>
> ------------[ ftrace bug ]------------
> ftrace failed to modify
> [<ffffffff81000660>] __tracepoint_iter_initcall_level+0x0/0x40
> actual: 0f:1f:44:00:00
> Initializing ftrace call sites
> ftrace record flags: 0
> (0)
> expected tramp: ffffffff81056500
> ------------[ cut here ]------------
>
> Otherwise, this looks pretty good.

Ha! It is trying to convert the "CALL __fentry__" into a NOP and not
finding the CALL -- because objtool already made it a NOP...

Weird, I thought recordmcount would also write NOPs; it certainly has
code for that. I suppose we can use CC_USING_NOP_MCOUNT to avoid those,
but I'd rather Steve explain this before I wreck things further.

2020-06-26 12:33:08

by Peter Zijlstra

[permalink] [raw]
Subject: Re: [RFC][PATCH] objtool,x86_64: Replace recordmcount with objtool

On Fri, Jun 26, 2020 at 01:29:31PM +0200, Peter Zijlstra wrote:
> On Thu, Jun 25, 2020 at 03:40:42PM -0700, Sami Tolvanen wrote:

> > Anyway, since objtool is run before recordmcount, I just left this unchanged
> > for testing and ignored the recordmcount warnings about __mcount_loc already
> > existing. Something is still a bit off, though; I see this at boot:
> >
> > ------------[ ftrace bug ]------------
> > ftrace failed to modify
> > [<ffffffff81000660>] __tracepoint_iter_initcall_level+0x0/0x40
> > actual: 0f:1f:44:00:00
> > Initializing ftrace call sites
> > ftrace record flags: 0
> > (0)
> > expected tramp: ffffffff81056500
> > ------------[ cut here ]------------
> >
> > Otherwise, this looks pretty good.
>
> Ha! It is trying to convert the "CALL __fentry__" into a NOP and not
> finding the CALL -- because objtool already made it a NOP...
>
> Weird, I thought recordmcount would also write NOPs; it certainly has
> code for that. I suppose we can use CC_USING_NOP_MCOUNT to avoid those,
> but I'd rather Steve explain this before I wreck things further.

Something like so would ignore whatever text is there and rewrite it
with ideal_nop.

---
diff --git a/arch/x86/kernel/ftrace.c b/arch/x86/kernel/ftrace.c
index c84d28e90a58..98a6a93d7615 100644
--- a/arch/x86/kernel/ftrace.c
+++ b/arch/x86/kernel/ftrace.c
@@ -109,9 +109,11 @@ static int __ref
ftrace_modify_code_direct(unsigned long ip, const char *old_code,
const char *new_code)
{
- int ret = ftrace_verify_code(ip, old_code);
- if (ret)
- return ret;
+ if (old_code) {
+ int ret = ftrace_verify_code(ip, old_code);
+ if (ret)
+ return ret;
+ }

/* replace the text with the new text */
if (ftrace_poke_late)
@@ -124,9 +126,8 @@ ftrace_modify_code_direct(unsigned long ip, const char *old_code,
int ftrace_make_nop(struct module *mod, struct dyn_ftrace *rec, unsigned long addr)
{
unsigned long ip = rec->ip;
- const char *new, *old;
+ const char *new;

- old = ftrace_call_replace(ip, addr);
new = ftrace_nop_replace();

/*
@@ -138,7 +139,7 @@ int ftrace_make_nop(struct module *mod, struct dyn_ftrace *rec, unsigned long ad
* just modify the code directly.
*/
if (addr == MCOUNT_ADDR)
- return ftrace_modify_code_direct(ip, old, new);
+ return ftrace_modify_code_direct(ip, NULL, new);

/*
* x86 overrides ftrace_replace_code -- this function will never be used

2020-06-28 17:00:37

by Masahiro Yamada

[permalink] [raw]
Subject: Re: [PATCH 00/22] add support for Clang LTO

On Thu, Jun 25, 2020 at 5:32 AM 'Sami Tolvanen' via Clang Built Linux
<[email protected]> wrote:
>
> This patch series adds support for building x86_64 and arm64 kernels
> with Clang's Link Time Optimization (LTO).
>
> In addition to performance, the primary motivation for LTO is to allow
> Clang's Control-Flow Integrity (CFI) to be used in the kernel. Google's
> Pixel devices have shipped with LTO+CFI kernels since 2018.
>
> Most of the patches are build system changes for handling LLVM bitcode,
> which Clang produces with LTO instead of ELF object files, postponing
> ELF processing until a later stage, and ensuring initcall ordering.
>
> Note that first objtool patch in the series is already in linux-next,
> but as it's needed with LTO, I'm including it also here to make testing
> easier.


I put this series on a testing branch,
and 0-day bot started reporting some issues.

(But 0-day bot is quieter than I expected.
Perhaps 0-day bot does not turn on LLVM=1?)



I also got an error for
ARCH=arm64 allyesconfig + CONFIG_LTO_CLANG=y



$ make ARCH=arm64 LLVM=1 LLVM_IAS=1
CROSS_COMPILE=~/tools/aarch64-linaro-7.5/bin/aarch64-linux-gnu-
-j24

...

GEN .version
CHK include/generated/compile.h
UPD include/generated/compile.h
CC init/version.o
AR init/built-in.a
GEN .tmp_initcalls.lds
GEN .tmp_symversions.lds
LTO vmlinux.o
MODPOST vmlinux.symvers
MODINFO modules.builtin.modinfo
GEN modules.builtin
LD .tmp_vmlinux.kallsyms1
ld.lld: error: undefined symbol: __compiletime_assert_905
>>> referenced by irqbypass.c
>>> vmlinux.o:(jeq_imm)
make: *** [Makefile:1161: vmlinux] Error 1



--
Best Regards
Masahiro Yamada

2020-06-29 23:22:20

by Sami Tolvanen

[permalink] [raw]
Subject: Re: [PATCH 00/22] add support for Clang LTO

Hi Masahiro,

On Mon, Jun 29, 2020 at 01:56:19AM +0900, Masahiro Yamada wrote:
> On Thu, Jun 25, 2020 at 5:32 AM 'Sami Tolvanen' via Clang Built Linux
> <[email protected]> wrote:
> >
> > This patch series adds support for building x86_64 and arm64 kernels
> > with Clang's Link Time Optimization (LTO).
> >
> > In addition to performance, the primary motivation for LTO is to allow
> > Clang's Control-Flow Integrity (CFI) to be used in the kernel. Google's
> > Pixel devices have shipped with LTO+CFI kernels since 2018.
> >
> > Most of the patches are build system changes for handling LLVM bitcode,
> > which Clang produces with LTO instead of ELF object files, postponing
> > ELF processing until a later stage, and ensuring initcall ordering.
> >
> > Note that first objtool patch in the series is already in linux-next,
> > but as it's needed with LTO, I'm including it also here to make testing
> > easier.
>
>
> I put this series on a testing branch,
> and 0-day bot started reporting some issues.

Yes, I'll fix those issues in v2.

> (But 0-day bot is quieter than I expected.
> Perhaps 0-day bot does not turn on LLVM=1?)

In order for it to test an LTO build, though, it would need to enable
LTO_CLANG explicitly, in addition to LLVM=1.

> I also got an error for
> ARCH=arm64 allyesconfig + CONFIG_LTO_CLANG=y
>
>
>
> $ make ARCH=arm64 LLVM=1 LLVM_IAS=1
> CROSS_COMPILE=~/tools/aarch64-linaro-7.5/bin/aarch64-linux-gnu-
> -j24
>
> ...
>
> GEN .version
> CHK include/generated/compile.h
> UPD include/generated/compile.h
> CC init/version.o
> AR init/built-in.a
> GEN .tmp_initcalls.lds
> GEN .tmp_symversions.lds
> LTO vmlinux.o
> MODPOST vmlinux.symvers
> MODINFO modules.builtin.modinfo
> GEN modules.builtin
> LD .tmp_vmlinux.kallsyms1
> ld.lld: error: undefined symbol: __compiletime_assert_905
> >>> referenced by irqbypass.c
> >>> vmlinux.o:(jeq_imm)
> make: *** [Makefile:1161: vmlinux] Error 1

I can reproduce this with ToT LLVM; it's the BUILD_BUG_ON_MSG(..., "value
too large for the field") in drivers/net/ethernet/netronome/nfp/bpf/jit.c.
Specifically, the FIELD_FIT / __BF_FIELD_CHECK macro in ur_load_imm_any.

This compiles just fine with an earlier LLVM revision, so it could be a
relatively recent regression. I'll take a look. Thanks for catching this!

Sami
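
For context, a simplified sketch of the machinery behind that undefined
symbol, modeled on include/linux/compiler.h (details vary by kernel
version):

/* BUILD_BUG_ON_MSG() eventually expands to something like: */
#define __compiletime_assert(condition, msg, prefix, suffix)		\
	do {								\
		/* deliberately has no definition anywhere */		\
		extern void prefix ## suffix(void);			\
		if (!(condition))					\
			prefix ## suffix();				\
	} while (0)

/*
 * If the optimizer proves 'condition' true, the call is dead code and
 * the undefined symbol disappears. If it cannot -- here, the
 * FIELD_FIT() range check in ur_load_imm_any() -- the reference to
 * __compiletime_assert_905() survives into vmlinux.o and the linker
 * reports it as undefined.
 */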

2020-06-30 20:56:52

by Marco Elver

[permalink] [raw]
Subject: Re: [PATCH 00/22] add support for Clang LTO

I was asked for input on this, and after a few days digging through some
history, thought I'd comment. Hope you don't mind.

On Thu, Jun 25, 2020 at 10:57AM +0200, Peter Zijlstra wrote:
> On Thu, Jun 25, 2020 at 10:24:33AM +0200, Peter Zijlstra wrote:
> > On Thu, Jun 25, 2020 at 10:03:13AM +0200, Peter Zijlstra wrote:
> > > I'm sure Will will respond, but the basic issue is the trainwreck C11
> > > made of dependent loads.
> > >
> > > Anyway, here's a link to the last time this came up:
> > >
> > > https://lore.kernel.org/linux-arm-kernel/[email protected]/
> >
> > Another good read:
> >
> > https://lore.kernel.org/lkml/[email protected]/
[...]
> Because the machine can then speculate and load 'now' before 'seq',
> breaking the ordering.

First of all, I agree with the concerns, but not because of LTO.

To set the stage better, and summarize the fundamental problem again:
we're in the unfortunate situation that no compiler today has a way to
_efficiently_ deal with C11's memory_order_consume
[https://lwn.net/Articles/588300/]. If we did, we could just use that
and be done with it. But, sadly, that doesn't seem possible right now --
compilers just say consume==acquire. Will suggests doing the same in the
kernel: https://lkml.kernel.org/r/[email protected]

What we're most worried about right now is the existence of compiler
transformations that could break data dependencies by e.g. turning them
into control dependencies.

If this is a real worry, I don't think LTO is the magical feature that
will uncover those optimizations. If these compiler transformations are
real, they also exist in a normal build! And if we are worried about
them, we need to stop relying on dependent load ordering across the
board; or switch to -O0 for everything. Clearly, we don't want either.

Why do we think LTO is special?

With LTO, Clang just emits LLVM bitcode instead of ELF objects, and
during the linker stage intermodular optimizations across translation
unit boundaries are done that might not be possible otherwise
[https://llvm.org/docs/LinkTimeOptimization.html]. From the memory model
side of things, if we could fully convey our intent to the compiler (the
imaginary consume), there would be no problem, because all optimization
stages from bitcode generation to the final machine code generation
after LTO know about the intended semantics. (Also, keep in mind that
LTO is _not_ doing post link optimization of machine code binaries!)

But as far as we can tell, there is no evidence of the dreaded "data
dependency to control dependency" conversion with LTO that isn't there
in non-LTO builds, if it's even there at all. Has the data to control
dependency conversion been encountered in the wild? If not, is the
resulting reaction an overreaction? If so, we need to be careful blaming
LTO for something that it isn't even guilty of.

So, we are probably better off untangling LTO from the story:

1. LTO or no LTO does not matter. The LTO series should not get tangled
up with memory model issues.

2. The memory model question and problems need to be answered and
addressed separately.

Thoughts?

Thanks,
-- Marco
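
To make the consume==acquire point concrete, a small C11 sketch
(illustrative only):

#include <stdatomic.h>

struct foo { int val; };
_Atomic(struct foo *) gp;

int reader(void)
{
	/* what the code asks for: ordering via the data dependency alone */
	struct foo *p = atomic_load_explicit(&gp, memory_order_consume);

	/* what compilers actually emit today: the same code as
	 * memory_order_acquire, which is stronger (and costlier on
	 * arm64 and PowerPC) than a plain dependent load */
	return p->val;
}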

2020-06-30 21:15:59

by Peter Zijlstra

[permalink] [raw]
Subject: Re: [PATCH 00/22] add support for Clang LTO

On Tue, Jun 30, 2020 at 09:19:31PM +0200, Marco Elver wrote:
> I was asked for input on this, and after a few days digging through some
> history, thought I'd comment. Hope you don't mind.

Not at all, being the one that asked :-)

> First of all, I agree with the concerns, but not because of LTO.
>
> To set the stage better, and summarize the fundamental problem again:
> we're in the unfortunate situation that no compiler today has a way to
> _efficiently_ deal with C11's memory_order_consume
> [https://lwn.net/Articles/588300/]. If we did, we could just use that
> and be done with it. But, sadly, that doesn't seem possible right now --
> compilers just say consume==acquire.

I'm not convinced C11 memory_order_consume would actually work for us,
even if it would work. That is, given:

https://lore.kernel.org/lkml/[email protected]/

only pointers can have consume, but like I pointed out, we have code
that relies on dependent loads from integers.

> Will suggests doing the same in the
> kernel: https://lkml.kernel.org/r/[email protected]

PowerPC would need a similar thing, it too will not preserve causality
for control dependencies.

> What we're most worried about right now is the existence of compiler
> transformations that could break data dependencies by e.g. turning them
> into control dependencies.

Correct.

> If this is a real worry, I don't think LTO is the magical feature that
> will uncover those optimizations. If these compiler transformations are
> real, they also exist in a normal build!

Agreed, _however_ with the caveat that LTO could make them more common.

After all, with whole program analysis, the compiler might be able to
more easily determine that our pointer @ptr is only ever assigned the
values of &A, &B or &C, while without that visibility it would not be
able to determine this.

Once it knows @ptr has a limited number of determined values, the
conversion into control dependencies becomes much more likely.
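
As a sketch (with &A and &B as stand-ins for such known values), the
feared rewrite is:

    struct foo *p = READ_ONCE(gp);
    x = p->val;            /* address dependency orders the two loads */

becoming:

    struct foo *p = READ_ONCE(gp);
    if (p == &A)
        x = A.val;         /* address no longer derived from p's value */
    else
        x = B.val;

at which point only a control dependency is left, and the hardware is
free to speculate the second load before the first.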

> And if we are worried about them, we need to stop relying on dependent
> load ordering across the board; or switch to -O0 for everything.
> Clearly, we don't want either.

Agreed.

> Why do we think LTO is special?

As argued above, whole-program analysis would make it more likely. But I
agree the fundamental problem exists independent from LTO.

> But as far as we can tell, there is no evidence of the dreaded "data
> dependency to control dependency" conversion with LTO that isn't there
> in non-LTO builds, if it's even there at all. Has the data to control
> dependency conversion been encountered in the wild? If not, is the
> resulting reaction an overreaction? If so, we need to be careful blaming
> LTO for something that it isn't even guilty of.

It is mostly paranoia; in a large part driven by the fact that even if
such a conversion were to be done, it could go a very long time without
actually causing problems, and longer still for such problems to be
traced back to such an 'optimization'.

That is, the collective hurt from debugging too many ordering issues.

> So, we are probably better off untangling LTO from the story:
>
> 1. LTO or no LTO does not matter. The LTO series should not get tangled
> up with memory model issues.
>
> 2. The memory model question and problems need to be answered and
> addressed separately.
>
> Thoughts?

How hard would it be to create something that analyzes a build and
looks for all 'dependent load -> control dependency' transformations
headed by a volatile (and/or from asm) load and issues a warning for
them?

This would give us an indication of how valuable this transformation is
for the kernel. I'm hoping/expecting it's vanishingly rare, but what do
I know.

2020-06-30 21:20:03

by Paul E. McKenney

[permalink] [raw]
Subject: Re: [PATCH 00/22] add support for Clang LTO

On Tue, Jun 30, 2020 at 10:12:43PM +0200, Peter Zijlstra wrote:
> On Tue, Jun 30, 2020 at 09:19:31PM +0200, Marco Elver wrote:
> > I was asked for input on this, and after a few days digging through some
> > history, thought I'd comment. Hope you don't mind.
>
> Not at all, being the one that asked :-)
>
> > First of all, I agree with the concerns, but not because of LTO.
> >
> > To set the stage better, and summarize the fundamental problem again:
> > we're in the unfortunate situation that no compiler today has a way to
> > _efficiently_ deal with C11's memory_order_consume
> > [https://lwn.net/Articles/588300/]. If we did, we could just use that
> > and be done with it. But, sadly, that doesn't seem possible right now --
> > compilers just say consume==acquire.
>
> I'm not convinced C11 memory_order_consume would actually work for us,
> even if it would work. That is, given:
>
> https://lore.kernel.org/lkml/[email protected]/
>
> only pointers can have consume, but like I pointed out, we have code
> that relies on dependent loads from integers.

I agree that C11 memory_order_consume is not normally what we want,
given that it is universally promoted to memory_order_acquire.

However, dependent loads from integers are, if anything, more difficult
to defend from the compiler than are control dependencies. This applies
doubly to integers that are used to index two-element arrays, in which
case you are just asking the compiler to destroy your dependent loads
by converting them into control dependencies.
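
Sketching the two-element-array case:

    int idx = READ_ONCE(gidx);    /* known to be 0 or 1 */
    d = arr[idx];                 /* intended address dependency */

may become the equivalent of:

    d = (idx == 0) ? arr[0] : arr[1];    /* control dependency only */

and a control dependency does not order a later load after the earlier
one on weakly ordered hardware.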

> > Will suggests doing the same in the
> > kernel: https://lkml.kernel.org/r/[email protected]
>
> PowerPC would need a similar thing, it too will not preserve causality
> for control dependencies.
>
> > What we're most worried about right now is the existence of compiler
> > transformations that could break data dependencies by e.g. turning them
> > into control dependencies.
>
> Correct.
>
> > If this is a real worry, I don't think LTO is the magical feature that
> > will uncover those optimizations. If these compiler transformations are
> > real, they also exist in a normal build!
>
> Agreed, _however_ with the caveat that LTO could make them more common.
>
> After all, with whole program analysis, the compiler might be able to
> more easily determine that our pointer @ptr is only ever assigned the
> values of &A, &B or &C, while without that visibility it would not be
> able to determine this.
>
> Once it knows @ptr has a limited number of determined values, the
> conversion into control dependencies becomes much more likely.

Which would of course break dependent loads.

> > And if we are worried about them, we need to stop relying on dependent
> > load ordering across the board; or switch to -O0 for everything.
> > Clearly, we don't want either.
>
> Agreed.
>
> > Why do we think LTO is special?
>
> As argued above, whole-program analysis would make it more likely. But I
> agree the fundamental problem exists independent from LTO.
>
> > But as far as we can tell, there is no evidence of the dreaded "data
> > dependency to control dependency" conversion with LTO that isn't there
> > in non-LTO builds, if it's even there at all. Has the data to control
> > dependency conversion been encountered in the wild? If not, is the
> > resulting reaction an overreaction? If so, we need to be careful blaming
> > LTO for something that it isn't even guilty of.
>
> It is mostly paranoia; in a large part driven by the fact that even if
> such a conversion were to be done, it could go a very long time without
> actually causing problems, and longer still for such problems to be
> traced back to such an 'optimization'.
>
> That is, the collective hurt from debugging too many ordering issues.
>
> > So, we are probably better off untangling LTO from the story:
> >
> > 1. LTO or no LTO does not matter. The LTO series should not get tangled
> > up with memory model issues.
> >
> > 2. The memory model question and problems need to be answered and
> > addressed separately.
> >
> > Thoughts?
>
> How hard would it be to create something that analyzes a build and
> looks for all 'dependent load -> control dependency' transformations
> headed by a volatile (and/or from asm) load and issues a warning for
> them?
>
> This would give us an indication of how valuable this transformation is
> for the kernel. I'm hoping/expecting it's vanishingly rare, but what do
> I know.
>

This could be quite useful!

Thanx, Paul

2020-07-01 09:13:48

by Peter Zijlstra

[permalink] [raw]
Subject: Re: [PATCH 00/22] add support for Clang LTO

On Tue, Jun 30, 2020 at 01:30:16PM -0700, Paul E. McKenney wrote:
> On Tue, Jun 30, 2020 at 10:12:43PM +0200, Peter Zijlstra wrote:

> > I'm not convinced C11 memory_order_consume would actually work for us,
> > even if it would work. That is, given:
> >
> > https://lore.kernel.org/lkml/[email protected]/
> >
> > only pointers can have consume, but like I pointed out, we have code
> > that relies on dependent loads from integers.
>
> I agree that C11 memory_order_consume is not normally what we want,
> given that it is universally promoted to memory_order_acquire.
>
> However, dependent loads from integers are, if anything, more difficult
> to defend from the compiler than are control dependencies. This applies
> doubly to integers that are used to index two-element arrays, in which
> case you are just asking the compiler to destroy your dependent loads
> by converting them into control dependencies.

Yes, I'm aware. However, as you might know, I'm firmly in the 'C is a
glorified assembler' camp (as I expect most actual OS people are, out of
necessity if nothing else) and if I wanted a control dependency I
would've bloody well written one.

I think an optimizing compiler is awesome, but only in so far as that
optimization is actually helpful -- and yes, I just stepped into a giant
twilight zone there. That is, any optimization that has _any_
controversy should be controllable (like -fno-strict-overflow
-fno-strict-aliasing) and I'd very much like the same here.

In a larger context, I still think that eliminating speculative stores
is both necessary and sufficient to avoid out-of-thin-air. So I'd also
love to get some control on that.

2020-07-01 09:42:26

by Marco Elver

[permalink] [raw]
Subject: Re: [PATCH 00/22] add support for Clang LTO

On Tue, 30 Jun 2020 at 22:30, Paul E. McKenney <[email protected]> wrote:
> On Tue, Jun 30, 2020 at 10:12:43PM +0200, Peter Zijlstra wrote:
> > On Tue, Jun 30, 2020 at 09:19:31PM +0200, Marco Elver wrote:

> > > First of all, I agree with the concerns, but not because of LTO.
> > >
> > > To set the stage better, and summarize the fundamental problem again:
> > > we're in the unfortunate situation that no compiler today has a way to
> > > _efficiently_ deal with C11's memory_order_consume
> > > [https://lwn.net/Articles/588300/]. If we did, we could just use that
> > > and be done with it. But, sadly, that doesn't seem possible right now --
> > > compilers just say consume==acquire.
> >
> > I'm not convinced C11 memory_order_consume would actually work for us,
> > even if it would work. That is, given:
> >
> > https://lore.kernel.org/lkml/[email protected]/
> >
> > only pointers can have consume, but like I pointed out, we have code
> > that relies on dependent loads from integers.
>
> I agree that C11 memory_order_consume is not normally what we want,
> given that it is universally promoted to memory_order_acquire.
>
> However, dependent loads from integers are, if anything, more difficult
> to defend from the compiler than are control dependencies. This applies
> doubly to integers that are used to index two-element arrays, in which
> case you are just asking the compiler to destroy your dependent loads
> by converting them into control dependencies.
>
> > > Will suggests doing the same in the
> > > kernel: https://lkml.kernel.org/r/[email protected]
> >
> > PowerPC would need a similar thing, it too will not preserve causality
> > for control dependencies.
> >
> > > What we're most worried about right now is the existence of compiler
> > > transformations that could break data dependencies by e.g. turning them
> > > into control dependencies.
> >
> > Correct.
> >
> > > If this is a real worry, I don't think LTO is the magical feature that
> > > will uncover those optimizations. If these compiler transformations are
> > > real, they also exist in a normal build!
> >
> > Agreed, _however_ with the caveat that LTO could make them more common.
> >
> > After all, with whole program analysis, the compiler might be able to
> > more easily determine that our pointer @ptr is only ever assigned the
> > values of &A, &B or &C, while without that visibility it would not be
> > able to determine this.
> >
> > Once it knows @ptr has a limited number of determined values, the
> > conversion into control dependencies becomes much more likely.
>
> Which would of course break dependent loads.
>
> > > And if we are worried about them, we need to stop relying on dependent
> > > load ordering across the board; or switch to -O0 for everything.
> > > Clearly, we don't want either.
> >
> > Agreed.
> >
> > > Why do we think LTO is special?
> >
> > As argued above, whole-program analysis would make it more likely. But I
> > agree the fundamental problem exists independent from LTO.
> >
> > > But as far as we can tell, there is no evidence of the dreaded "data
> > > dependency to control dependency" conversion with LTO that isn't there
> > > in non-LTO builds, if it's even there at all. Has the data to control
> > > dependency conversion been encountered in the wild? If not, is the
> > > resulting reaction an overreaction? If so, we need to be careful blaming
> > > LTO for something that it isn't even guilty of.
> >
> > It is mostly paranoia; in a large part driven by the fact that even if
> > such a conversion were to be done, it could go a very long time without
> > actually causing problems, and longer still for such problems to be
> > traced back to such an 'optimization'.
> >
> > That is, the collective hurt from debugging too many ordering issues.
> >
> > > So, we are probably better off untangling LTO from the story:
> > >
> > > 1. LTO or no LTO does not matter. The LTO series should not get tangled
> > > up with memory model issues.
> > >
> > > 2. The memory model question and problems need to be answered and
> > > addressed separately.
> > >
> > > Thoughts?
> >
> > How hard would it be to create something that analyzes a build and
> > looks for all 'dependent load -> control dependency' transformations
> > headed by a volatile (and/or from asm) load and issues a warning for
> > them?

I was thinking about this, but in the context of the "auto-promote to
acquire" which you didn't like. Issuing a warning should certainly be
simpler.

I think there is no one place where we know these transformations
happen, but rather, we need to analyze the IR before transformations,
take note of all the dependent loads headed by volatile+asm, and then
run an analysis after optimizations checking the dependencies are
still there.

> > This would give us an indication of how valuable this transformation is
> > for the kernel. I'm hoping/expecting it's vanishingly rare, but what do
> > I know.
>
> This could be quite useful!

We might then even be able to say, "if you get this warning, turn on
CONFIG_ACQUIRE_READ_DEPENDENCIES" (or however the option will be
named). Or some other tricks, like automatically recompiling the TU
where this happens with the option. But again, this is not something
that should specifically block LTO, because if we have this, we'll
need to turn it on for everything.

Thanks,
-- Marco

2020-07-01 10:07:00

by Will Deacon

[permalink] [raw]
Subject: Re: [PATCH 00/22] add support for Clang LTO

On Wed, Jul 01, 2020 at 11:41:17AM +0200, Marco Elver wrote:
> On Tue, 30 Jun 2020 at 22:30, Paul E. McKenney <[email protected]> wrote:
> > On Tue, Jun 30, 2020 at 10:12:43PM +0200, Peter Zijlstra wrote:
> > > On Tue, Jun 30, 2020 at 09:19:31PM +0200, Marco Elver wrote:
> > > > So, we are probably better off untangling LTO from the story:
> > > >
> > > > 1. LTO or no LTO does not matter. The LTO series should not get tangled
> > > > up with memory model issues.
> > > >
> > > > 2. The memory model question and problems need to be answered and
> > > > addressed separately.
> > > >
> > > > Thoughts?
> > >
> > > How hard would it be to create something that analyzes a build and
> > > looks for all 'dependent load -> control dependency' transformations
> > > headed by a volatile (and/or from asm) load and issues a warning for
> > > them?
>
> I was thinking about this, but in the context of the "auto-promote to
> acquire" which you didn't like. Issuing a warning should certainly be
> simpler.
>
> I think there is no one place where we know these transformations
> happen, but rather, we need to analyze the IR before transformations,
> take note of all the dependent loads headed by volatile+asm, and then
> run an analysis after optimizations checking the dependencies are
> still there.
>
> > > This would give us an indication of how valuable this transformation is
> > > for the kernel. I'm hoping/expecting it's vanishingly rare, but what do
> > > I know.
> >
> > This could be quite useful!
>
> We might then even be able to say, "if you get this warning, turn on
> CONFIG_ACQUIRE_READ_DEPENDENCIES" (or however the option will be
> named). Or some other tricks, like automatically recompiling the TU
> where this happens with the option. But again, this is not something
> that should specifically block LTO, because if we have this, we'll
> need to turn it on for everything.

I'm not especially keen on solving this with additional config options --
all it does is further fragment the set of kernels we have to care about,
and distributions really won't be in a position to know whether this should
be enabled or not. I would prefer that the build fails, and we figure out
which compiler switch we need to stop the harmful optimisation taking place.
As Peter says, it _should_ be a rare thing to see (empirically, the kernel
seems to be getting away with it so far).

The problem, as I see it, is that the C language doesn't provide us with
a way to express dependency ordering and so we're at the mercy of the
compiler when we roll our own implementation. Paul continues to fight the
good fight at committee meetings to improve the situation, but in the
meantime we'd benefit from two things:

1. A way to disable any compiler optimisations that break our dependency
ordering in spite of READ_ONCE()

2. A way to detect at build time if these harmful optimisations are taking
place

Finally, while I agree that this problem isn't limited to LTO, my fear is
that LTO provides enough information for address dependencies headed by
a READ_ONCE() to be converted to control dependencies when some common
values of the pointer can be determined by the compiler. If we can rule
this sort of thing out, then great, but in the absence of (2) I think
throwing in an acquire is a sensible safety measure. Doesn't CFI rely
on LTO to do something similar for indirect branch targets, or have I
got that totally mixed up?

Will

2020-07-01 11:43:07

by Peter Zijlstra

[permalink] [raw]
Subject: Re: [PATCH 00/22] add support for Clang LTO

On Wed, Jul 01, 2020 at 11:41:17AM +0200, Marco Elver wrote:
> On Tue, 30 Jun 2020 at 22:30, Paul E. McKenney <[email protected]> wrote:
> > On Tue, Jun 30, 2020 at 10:12:43PM +0200, Peter Zijlstra wrote:
> > > On Tue, Jun 30, 2020 at 09:19:31PM +0200, Marco Elver wrote:

> > > > Thoughts?
> > >
> > > How hard would it be to create something that analyzes a build and
> > > looks for all 'dependent load -> control dependency' transformations
> > > headed by a volatile (and/or from asm) load and issues a warning for
> > > them?
>
> I was thinking about this, but in the context of the "auto-promote to
> acquire" which you didn't like. Issuing a warning should certainly be
> simpler.
>
> I think there is no one place where we know these transformations
> happen, but rather, we need to analyze the IR before transformations,
> take note of all the dependent loads headed by volatile+asm, and then
> run an analysis after optimizations checking the dependencies are
> still there.

Urgh, that sounds nasty. The thing is, as I've hinted at in my other
reply, I would really like a compiler switch to disable this
optimization entirely -- knowing how relevant the transformation is, is
simply a first step towards that.

In order to control the transformation, you have to actually know where
in the optimization passes it happens.

Also, if (big if in my book) we find the optimization is actually
beneficial, we can invert the warning when using the switch and warn
about lost optimization possibilities and manually re-write the code to
use control deps.

> > > This would give us an indication of how valuable this transformation is
> > > for the kernel. I'm hoping/expecting it's vanishingly rare, but what do
> > > I know.
> >
> > This could be quite useful!
>
> We might then even be able to say, "if you get this warning, turn on
> CONFIG_ACQUIRE_READ_DEPENDENCIES" (or however the option will be
> named).

I was going to suggest: if this happens, employ -fno-wreck-dependencies
:-)

2020-07-01 14:07:46

by Paul E. McKenney

[permalink] [raw]
Subject: Re: [PATCH 00/22] add support for Clang LTO

On Wed, Jul 01, 2020 at 01:40:27PM +0200, Peter Zijlstra wrote:
> On Wed, Jul 01, 2020 at 11:41:17AM +0200, Marco Elver wrote:
> > On Tue, 30 Jun 2020 at 22:30, Paul E. McKenney <[email protected]> wrote:
> > > On Tue, Jun 30, 2020 at 10:12:43PM +0200, Peter Zijlstra wrote:
> > > > On Tue, Jun 30, 2020 at 09:19:31PM +0200, Marco Elver wrote:
>
> > > > > Thoughts?
> > > >
> > > > How hard would it be to create something that analyzes a build and
> > > > looks for all 'dependent load -> control dependency' transformations
> > > > headed by a volatile (and/or from asm) load and issues a warning for
> > > > them?
> >
> > I was thinking about this, but in the context of the "auto-promote to
> > acquire" which you didn't like. Issuing a warning should certainly be
> > simpler.
> >
> > I think there is no one place where we know these transformations
> > happen, but rather, we need to analyze the IR before transformations,
> > take note of all the dependent loads headed by volatile+asm, and then
> > run an analysis after optimizations checking the dependencies are
> > still there.
>
> Urgh, that sounds nasty. The thing is, as I've hinted at in my other
> reply, I would really like a compiler switch to disable this
> optimization entirely -- knowing how relevant the transformation is, is
> simply a first step towards that.
>
> In order to control the transformation, you have to actually know where
> in the optimization passes it happens.
>
> Also, if (big if in my book) we find the optimization is actually
> beneficial, we can invert the warning when using the switch and warn
> about lost optimization possibilities and manually re-write the code to
> use control deps.

There are lots of optimization passes and any of them might decide to
destroy dependencies. :-(

> > > > This would give us an indication of how valuable this transformation is
> > > > for the kernel. I'm hoping/expecting it's vanishingly rare, but what do
> > > > I know.
> > >
> > > This could be quite useful!
> >
> > We might then even be able to say, "if you get this warning, turn on
> > CONFIG_ACQUIRE_READ_DEPENDENCIES" (or however the option will be
> > named).
>
> I was going to suggest: if this happens, employ -fno-wreck-dependencies
> :-)

The current state in the C++ committee is that marking variables
carrying dependencies is the way forward. This is of course not what
the Linux kernel community does, but it should not be hard to have a
-fall-variables-dependent or some such that causes all variables to be
treated as if they were marked. Though I was hoping for only pointers.
Are they -sure- that they -absolutely- need to carry dependencies
through integers???

Anyway, the next step is to provide this functionality in one of the
major compilers. Akshat Garg started this in GCC as a GSoC project
by duplicating "volatile" functionality with a _Dependent_ptr keyword.
Next steps would include removing "volatile" functionality not required
for dependencies. Here is a random posting, which if I remember correctly
raised some doubts as to whether "volatile" was really carried through
everywhere that it needs to for things like LTO:

https://gcc.gnu.org/legacy-ml/gcc/2019-07/msg00139.html

What happened to this effort? Akshat graduated and got an unrelated
job, you know, the usual. ;-)

Thanx, Paul

2020-07-01 14:21:34

by David Laight

[permalink] [raw]
Subject: RE: [PATCH 00/22] add support for Clang LTO

From: Peter Zijlstra
> Sent: 01 July 2020 10:11
> On Tue, Jun 30, 2020 at 01:30:16PM -0700, Paul E. McKenney wrote:
> > On Tue, Jun 30, 2020 at 10:12:43PM +0200, Peter Zijlstra wrote:
>
> > > I'm not convinced C11 memory_order_consume would actually work for us,
> > > even if it would work. That is, given:
> > >
> > > https://lore.kernel.org/lkml/[email protected]/
> > >
> > > only pointers can have consume, but like I pointed out, we have code
> > > that relies on dependent loads from integers.
> >
> > I agree that C11 memory_order_consume is not normally what we want,
> > given that it is universally promoted to memory_order_acquire.
> >
> > However, dependent loads from integers are, if anything, more difficult
> > to defend from the compiler than are control dependencies. This applies
> > doubly to integers that are used to index two-element arrays, in which
> > case you are just asking the compiler to destroy your dependent loads
> > by converting them into control dependencies.
>
> Yes, I'm aware. However, as you might know, I'm firmly in the 'C is a
> glorified assembler' camp (as I expect most actual OS people are, out of
> necessity if nothing else) and if I wanted a control dependency I
> would've bloody well written one.

I write in C because doing register tracking is hard :-)
I've got an hdlc implementation in C that is carefully adjusted
so that the worst case path is bounded.
I probably know every one of the 1000 instructions in it.

Would an asm statement that uses the same 'register' for input and
output but doesn't actually do anything help?
It won't generate any code, but the compiler ought to assume that
it might change the value - so can't do optimisations that track
the value across the call.
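
Something like this, say (a sketch; the same idea as the kernel's
OPTIMIZER_HIDE_VAR()):

    asm volatile("" : "+r" (offset));    /* emits no code, but the compiler
                                            must assume offset may have
                                            changed */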

> I think an optimizing compiler is awesome, but only in so far as that
> optimization is actually helpful -- and yes, I just stepped into a giant
> twilight zone there. That is, any optimization that has _any_
> controversy should be controllable (like -fno-strict-overflow
> -fno-strict-aliasing) and I'd very much like the same here.

I'm fed up with gcc generating code that uses SIMD instructions
for the 'tail' loop at the end of a function that is already doing
SIMD operations for the main part of the loop.
And compilers that convert a byte copy loop to 'rep movsb'.
If I'm copying 3 or 4 bytes I don't want a 40 clock overhead.

David

-
Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
Registration No: 1397386 (Wales)

2020-07-01 15:06:02

by Peter Zijlstra

[permalink] [raw]
Subject: Re: [PATCH 00/22] add support for Clang LTO

On Wed, Jul 01, 2020 at 07:06:54AM -0700, Paul E. McKenney wrote:

> The current state in the C++ committee is that marking variables
> carrying dependencies is the way forward. This is of course not what
> the Linux kernel community does, but it should not be hard to have a
> -fall-variables-dependent or some such that causes all variables to be
> treated as if they were marked. Though I was hoping for only pointers.
> Are they -sure- that they -absolutely- need to carry dependencies
> through integers???

What's 'need'? :-)

I'm thinking __ktime_get_fast_ns() is better off with a dependent load
than it is with an extra smp_rmb().

Yes we can stick an smp_rmb() in there, but I don't like it. Like I
wrote earlier, if I wanted a control dependency, I'd have written one.
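
For reference, the pattern in question is roughly this (simplified from
the timekeeping fast path):

    do {
        seq = raw_read_seqcount_latch(&tkf->seq);
        tkr = tkf->base + (seq & 0x01);    /* dependent load via an integer */
        now = ktime_to_ns(tkr->base);
        /* ... */
    } while (read_seqcount_retry(&tkf->seq, seq));

The address of tkr is computed from the value of seq, so the loads
through tkr cannot pass the seq load -- unless the compiler turns
(seq & 0x01) into a branch. The smp_rmb() variant would instead sit
between the seq load and the loads through tkr.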

2020-07-01 16:04:55

by Paul E. McKenney

[permalink] [raw]
Subject: Re: [PATCH 00/22] add support for Clang LTO

On Wed, Jul 01, 2020 at 05:05:12PM +0200, Peter Zijlstra wrote:
> On Wed, Jul 01, 2020 at 07:06:54AM -0700, Paul E. McKenney wrote:
>
> > The current state in the C++ committee is that marking variables
> > carrying dependencies is the way forward. This is of course not what
> > the Linux kernel community does, but it should not be hard to have a
> > -fall-variables-dependent or some such that causes all variables to be
> > treated as if they were marked. Though I was hoping for only pointers.
> > Are they -sure- that they -absolutely- need to carry dependencies
> > through integers???
>
> What's 'need'? :-)

Turning off all dependency-killing optimizations on all pointers is
likely a non-event. Turning off all dependency-killing optimizations
on all integers is not the road to happiness.

So whatever "need" might be, it would need to be rather earthshaking. ;-)
It is probably not -that- hard to convert to pointers, even if they
are indexing multiple arrays.

> I'm thinking __ktime_get_fast_ns() is better off with a dependent load
> than it is with an extra smp_rmb().
>
> Yes we can stick an smp_rmb() in there, but I don't like it. Like I
> wrote earlier, if I wanted a control dependency, I'd have written one.

No argument here.

But it looks like we are going to have to tell the compiler.

Thanx, Paul

2020-07-01 16:07:28

by Paul E. McKenney

[permalink] [raw]
Subject: Re: [PATCH 00/22] add support for Clang LTO

On Wed, Jul 01, 2020 at 02:20:13PM +0000, David Laight wrote:
> From: Peter Zijlstra
> > Sent: 01 July 2020 10:11
> > On Tue, Jun 30, 2020 at 01:30:16PM -0700, Paul E. McKenney wrote:
> > > On Tue, Jun 30, 2020 at 10:12:43PM +0200, Peter Zijlstra wrote:
> >
> > > > I'm not convinced C11 memory_order_consume would actually work for us,
> > > > even if it would work. That is, given:
> > > >
> > > > https://lore.kernel.org/lkml/[email protected]/
> > > >
> > > > only pointers can have consume, but like I pointed out, we have code
> > > > that relies on dependent loads from integers.
> > >
> > > I agree that C11 memory_order_consume is not normally what we want,
> > > given that it is universally promoted to memory_order_acquire.
> > >
> > > However, dependent loads from integers are, if anything, more difficult
> > > to defend from the compiler than are control dependencies. This applies
> > > doubly to integers that are used to index two-element arrays, in which
> > > case you are just asking the compiler to destroy your dependent loads
> > > by converting them into control dependencies.
> >
> > Yes, I'm aware. However, as you might know, I'm firmly in the 'C is a
> > glorified assembler' camp (as I expect most actual OS people are, out of
> > necessity if nothing else) and if I wanted a control dependency I
> > would've bloody well written one.
>
> I write in C because doing register tracking is hard :-)
> I've got an hdlc implementation in C that is carefully adjusted
> so that the worst case path is bounded.
> I probably know every one of the 1000 instructions in it.
>
> Would an asm statement that uses the same 'register' for input and
> output but doesn't actually do anything help?
> It won't generate any code, but the compiler ought to assume that
> it might change the value - so can't do optimisations that track
> the value across the call.

It might replace the volatile load, but there are optimizations that
apply to the downstream code as well.

Or are you suggesting periodically pushing the dependent variable
through this asm? That might work, but it would be easier and
more maintainable to just mark the variable.

> > I think an optimizing compiler is awesome, but only in so far as that
> > optimization is actually helpful -- and yes, I just stepped into a giant
> > twilight zone there. That is, any optimization that has _any_
> > controversy should be controllable (like -fno-strict-overflow
> > -fno-strict-aliasing) and I'd very much like the same here.
>
> I'm fed up with gcc generating code that uses SIMD instructions
> for the 'tail' loop at the end of a function that is already doing
> SIMD operations for the main part of the loop.
> And compilers that convert a byte copy loop to 'rep movsb'.
> If I'm copying 3 or 4 bytes I don't want a 40 clock overhead.

Agreed, compilers can often be all too "helpful". :-(

Thanx, Paul

2020-07-02 08:21:28

by Peter Zijlstra

[permalink] [raw]
Subject: Re: [PATCH 00/22] add support for Clang LTO

On Wed, Jul 01, 2020 at 09:03:38AM -0700, Paul E. McKenney wrote:

> But it looks like we are going to have to tell the compiler.

What does the current proposal look like? I can certainly annotate the
seqcount latch users, but who knows what other code is out there....

2020-07-02 09:38:16

by David Laight

[permalink] [raw]
Subject: RE: [PATCH 00/22] add support for Clang LTO

From: Paul E. McKenney
> Sent: 01 July 2020 17:06
...
> > Would an asm statement that uses the same 'register' for input and
> > output but doesn't actually do anything help?
> > It won't generate any code, but the compiler ought to assume that
> > it might change the value - so can't do optimisations that track
> > the value across the call.
>
> It might replace the volatile load, but there are optimizations that
> apply to the downstream code as well.
>
> Or are you suggesting periodically pushing the dependent variable
> through this asm? That might work, but it would be easier and
> more maintainable to just mark the variable.

Marking the variable requires compiler support.
Although what 'volatile register int foo;' means might be interesting.

So I was thinking that in the case mentioned earlier you do:
ptr += LAUNDER(offset & 1);
to ensure the compiler didn't convert to:
if (offset & 1) ptr++;
(Which is probably a pessimisation - the reverse is likely better.)
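
A LAUNDER() along those lines might look like this (sketch, GNU C):

    #define LAUNDER(x)                        \
    ({                                        \
        typeof(x) __x = (x);                  \
        asm volatile("" : "+r" (__x));        /* hide the value range */ \
        __x;                                  \
    })

so the compiler can no longer prove the result is 0 or 1 and has to keep
the address computation.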

David

-
Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
Registration No: 1397386 (Wales)

2020-07-02 18:00:17

by Paul E. McKenney

[permalink] [raw]
Subject: Re: [PATCH 00/22] add support for Clang LTO

On Thu, Jul 02, 2020 at 10:20:40AM +0200, Peter Zijlstra wrote:
> On Wed, Jul 01, 2020 at 09:03:38AM -0700, Paul E. McKenney wrote:
>
> > But it looks like we are going to have to tell the compiler.
>
> What does the current proposal look like? I can certainly annotate the
> seqcount latch users, but who knows what other code is out there....

For pointers, yes, within the Linux kernel it is hopeless, thus the
thought of a -fall-dependent-ptr or some such that makes the compiler
pretend that each and every pointer is marked with the _Dependent_ptr
qualifier.

New non-Linux-kernel code might want to use this qualifier explicitly,
perhaps something like the following:

_Dependent_ptr struct foo *p; // Or maybe after the "*"?

rcu_read_lock();
p = rcu_dereference(gp);
// And so on...

If a function is to take a dependent pointer as a function argument,
then the corresponding parameter needs the _Dependent_ptr marking.
Ditto for return values.

The proposal did not cover integers due to concerns about the number of
optimization passes that would need to be reviewed to make that work.
Nevertheless, using a marked integer would be safer than using an unmarked
one, and if the review can be carried out, why not? Maybe something
like this:

_Dependent_ptr int idx;

rcu_read_lock();
idx = READ_ONCE(gidx);
d = rcuarray[idx];
rcu_read_unlock();
do_something_with(d);

So use of this qualifier is quite reasonable.

The prototype for GCC is here: https://github.com/AKG001/gcc/

Thanx, Paul

2020-07-02 18:02:09

by Paul E. McKenney

[permalink] [raw]
Subject: Re: [PATCH 00/22] add support for Clang LTO

On Thu, Jul 02, 2020 at 09:37:26AM +0000, David Laight wrote:
> From: Paul E. McKenney
> > Sent: 01 July 2020 17:06
> ...
> > > Would an asm statement that uses the same 'register' for input and
> > > output but doesn't actually do anything help?
> > > It won't generate any code, but the compiler ought to assume that
> > > it might change the value - so can't do optimisations that track
> > > the value across the call.
> >
> > It might replace the volatile load, but there are optimizations that
> > apply to the downstream code as well.
> >
> > Or are you suggesting periodically pushing the dependent variable
> > through this asm? That might work, but it would be easier and
> > more maintainable to just mark the variable.
>
> Marking the variable requires compiler support.
> Although what 'volatile register int foo;' means might be interesting.
>
> So I was thinking that in the case mentioned earlier you do:
> ptr += LAUNDER(offset & 1);
> to ensure the compiler didn't convert to:
> if (offset & 1) ptr++;
> (Which is probably a pessimisation - the reverse is likely better.)

Indeed, Akshat's prototype follows the "volatile" qualifier in many
ways. https://github.com/AKG001/gcc/

Thanx, Paul

2020-07-03 13:14:10

by Peter Zijlstra

[permalink] [raw]
Subject: Re: [PATCH 00/22] add support for Clang LTO

On Thu, Jul 02, 2020 at 10:59:48AM -0700, Paul E. McKenney wrote:
> On Thu, Jul 02, 2020 at 10:20:40AM +0200, Peter Zijlstra wrote:
> > On Wed, Jul 01, 2020 at 09:03:38AM -0700, Paul E. McKenney wrote:
> >
> > > But it looks like we are going to have to tell the compiler.
> >
> > What does the current proposal look like? I can certainly annotate the
> > seqcount latch users, but who knows what other code is out there....
>
> For pointers, yes, within the Linux kernel it is hopeless, thus the
> thought of a -fall-dependent-ptr or some such that makes the compiler
> pretend that each and every pointer is marked with the _Dependent_ptr
> qualifier.
>
> New non-Linux-kernel code might want to use this qualifier explicitly,
> perhaps something like the following:
>
> _Dependent_ptr struct foo *p; // Or maybe after the "*"?

After, as you've written it, it's a pointer to a '_Dependent struct
foo'.

>
> rcu_read_lock();
> p = rcu_dereference(gp);
> // And so on...
>
> If a function is to take a dependent pointer as a function argument,
> then the corresponding parameter needs the _Dependent_ptr marking.
> Ditto for return values.
>
> The proposal did not cover integers due to concerns about the number of
> optimization passes that would need to be reviewed to make that work.
> Nevertheless, using a marked integer would be safer than using an unmarked
> one, and if the review can be carried out, why not? Maybe something
> like this:
>
> _Dependent_ptr int idx;
>
> rcu_read_lock();
> idx = READ_ONCE(gidx);
> d = rcuarray[idx];
> rcu_read_unlock();
> do_something_with(d);
>
> So use of this qualifier is quite reasonable.

The above usage might warrant a rename of the qualifier though, since
clearly there isn't anything ptr around.

> The prototype for GCC is here: https://github.com/AKG001/gcc/

Thanks! Those test cases are somewhat over qualified though:

static volatile _Atomic (TYPE) * _Dependent_ptr a; \

Also, if C goes and specifies load dependencies, in any form, is then
not the corollary that they need to specify control dependencies? How
else can they exclude the transformation?

And of course, once we're there, can we get explicit support for control
dependencies too? :-) :-)

2020-07-03 13:27:59

by Peter Zijlstra

[permalink] [raw]
Subject: Re: [PATCH 00/22] add support for Clang LTO

On Fri, Jul 03, 2020 at 03:13:30PM +0200, Peter Zijlstra wrote:
> > The prototype for GCC is here: https://github.com/AKG001/gcc/
>
> Thanks! Those test cases are somewhat over qualified though:
>
> static volatile _Atomic (TYPE) * _Dependent_ptr a; \

One question though; since it's a qualifier, and we've recently spent a
whole lot of effort to strip qualifiers in, say, READ_ONCE(): how does
this qualifier behave, and how do we want it to?

C++ has very convenient means of manipulating qualifiers, so it's not
much of a problem there, but for C it is, as we've found, really quite
cumbersome. Even with _Generic() we can't manipulate individual
qualifiers afaict.

2020-07-03 14:43:02

by Paul E. McKenney

[permalink] [raw]
Subject: Re: [PATCH 00/22] add support for Clang LTO

On Fri, Jul 03, 2020 at 03:13:30PM +0200, Peter Zijlstra wrote:
> On Thu, Jul 02, 2020 at 10:59:48AM -0700, Paul E. McKenney wrote:
> > On Thu, Jul 02, 2020 at 10:20:40AM +0200, Peter Zijlstra wrote:
> > > On Wed, Jul 01, 2020 at 09:03:38AM -0700, Paul E. McKenney wrote:
> > >
> > > > But it looks like we are going to have to tell the compiler.
> > >
> > > What does the current proposal look like? I can certainly annotate the
> > > seqcount latch users, but who knows what other code is out there....
> >
> > For pointers, yes, within the Linux kernel it is hopeless, thus the
> > thought of a -fall-dependent-ptr or some such that makes the compiler
> > pretend that each and every pointer is marked with the _Dependent_ptr
> > qualifier.
> >
> > New non-Linux-kernel code might want to use this qualifier explicitly,
> > perhaps something like the following:
> >
> > _Dependent_ptr struct foo *p; // Or maybe after the "*"?
>
> After, as you've written it, it's a pointer to a '_Dependent struct
> foo'.

Yeah, I have to look that up every time. :-/

Thank you for checking!

> > rcu_read_lock();
> > p = rcu_dereference(gp);
> > // And so on...
> >
> > If a function is to take a dependent pointer as a function argument,
> > then the corresponding parameter needs the _Dependent_ptr marking.
> > Ditto for return values.
> >
> > The proposal did not cover integers due to concerns about the number of
> > optimization passes that would need to be reviewed to make that work.
> > Nevertheless, using a marked integer would be safer than using an unmarked
> > one, and if the review can be carried out, why not? Maybe something
> > like this:
> >
> > _Dependent_ptr int idx;
> >
> > rcu_read_lock();
> > idx = READ_ONCE(gidx);
> > d = rcuarray[idx];
> > rcu_read_unlock();
> > do_something_with(d);
> >
> > So use of this qualifier is quite reasonable.
>
> The above usage might warrant a rename of the qualifier though, since
> clearly there isn't anything ptr around.

Given the large number of additional optimizations that need to be
suppressed in the non-pointer case, any discouragement based on the "_ptr"
at the end of the name is all to the good.

And if that line of reasoning is unconvincing, please look at the program
at the end of this email, which compiles without errors with -Wall and
gives the expected output. ;-)

> > The prototype for GCC is here: https://github.com/AKG001/gcc/
>
> Thanks! Those test cases are somewhat over qualified though:
>
> static volatile _Atomic (TYPE) * _Dependent_ptr a; \

Especially given that in C, _Atomic operations are implicitly volatile.
But this is likely a holdover from Akshat's implementation strategy,
which was to pattern _Dependent_ptr after the volatile keyword.

> Also, if C goes and specifies load dependencies, in any form, is then
> not the corollary that they need to specify control dependencies? How
> else can they exclude the transformation?

By requiring that any temporaries generated from variables that are
marked _Dependent_ptr also be marked _Dependent_ptr. This is of course
one divergence of _Dependent_ptr from the volatile keyword.
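
That is, something like this (sketch, per the prototype's semantics):

    _Dependent_ptr struct foo *p = rcu_dereference(gp);
    _Dependent_ptr struct foo *q = p + 1;  /* temporary keeps the qualifier */
    x = q->val;                 /* load remains ordered after the gp load */

If p + 1 silently lost the marking, the compiler would again be free to
break the dependency.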

> And of course, once we're there, can we get explicit support for control
> dependencies too? :-) :-)

Keep talking like this and I am going to make sure that you attend a
standards committee meeting. If need be, by arranging for you to be
physically dragged there. ;-)

More seriously, for control dependencies, the variable that would need
to be marked would be the program counter, which might require some
additional syntax.

Thanx, Paul

------------------------------------------------------------------------

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int foo(int *p, int i)
{
        return i[p];
}

int arr[] = { 0, 1, 4, 9, 16, 25, 36, 49, 64, 81, 100, 121, 144, };

int main(int argc, char *argv[])
{
        int i = atoi(argv[1]);

        printf("%d[arr] = %d\n", i, foo(arr, i));
        return 0;
}

2020-07-03 14:54:25

by Paul E. McKenney

[permalink] [raw]
Subject: Re: [PATCH 00/22] add support for Clang LTO

On Fri, Jul 03, 2020 at 03:25:23PM +0200, Peter Zijlstra wrote:
> On Fri, Jul 03, 2020 at 03:13:30PM +0200, Peter Zijlstra wrote:
> > > The prototype for GCC is here: https://github.com/AKG001/gcc/
> >
> > Thanks! Those test cases are somewhat over qualified though:
> >
> > static volatile _Atomic (TYPE) * _Dependent_ptr a; \
>
> One question though; since it's a qualifier, and we've recently spent a
> whole lot of effort to strip qualifiers in, say, READ_ONCE(): how does
> this qualifier behave, and how do we want it to?

Dereferencing a _Dependent_ptr pointer gives you something that is not
_Dependent_ptr, unless the declaration was like this:

_Dependent_ptr _Atomic (TYPE) * _Dependent_ptr a;

And if I recall correctly, the current state is that assigning a
_Dependent_ptr variable to a non-_Dependent_ptr variable strips this
marking (though the thought was to be able to ask for a warning).

So, yes, it would be nice to be able to explicitly strip the
_Dependent_ptr, perhaps the kill_dependency() macro, which is already
in the C standard.
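
For instance (sketch, reusing C11's kill_dependency()):

    _Dependent_ptr struct foo *p = rcu_dereference(gp);
    struct foo *q = kill_dependency(p);    /* explicitly drops the marking */

after which loads through q are no longer guaranteed to be ordered after
the load from gp.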

> C++ has very convenient means of manipulating qualifiers, so it's not
> much of a problem there, but for C it is, as we've found, really quite
> cumbersome. Even with _Generic() we can't manipulate individual
> qualifiers afaict.

Fair point, and in C++ this is a templated class, at least in the same
sense that std::atomic<> is a templated class.

But in this case, would kill_dependency do what you want?

Thanx, Paul

2020-07-06 16:27:10

by Paul E. McKenney

[permalink] [raw]
Subject: Re: [PATCH 00/22] add support for Clang LTO

On Fri, Jul 03, 2020 at 07:42:28AM -0700, Paul E. McKenney wrote:
> On Fri, Jul 03, 2020 at 03:13:30PM +0200, Peter Zijlstra wrote:
> > On Thu, Jul 02, 2020 at 10:59:48AM -0700, Paul E. McKenney wrote:
> > > On Thu, Jul 02, 2020 at 10:20:40AM +0200, Peter Zijlstra wrote:
> > > > On Wed, Jul 01, 2020 at 09:03:38AM -0700, Paul E. McKenney wrote:

[ . . . ]

> > Also, if C goes and specifies load dependencies, in any form, is then
> > not the corollary that they need to specify control dependencies? How
> > else can they exclude the transformation?
>
> By requiring that any temporaries generated from variables that are
> marked _Dependent_ptr also be marked _Dependent_ptr. This is of course
> one divergence of _Dependent_ptr from the volatile keyword.
>
> > And of course, once we're there, can we get explicit support for control
> > dependencies too? :-) :-)
>
> Keep talking like this and I am going to make sure that you attend a
> standards committee meeting. If need be, by arranging for you to be
> physically dragged there. ;-)
>
> More seriously, for control dependencies, the variable that would need
> to be marked would be the program counter, which might require some
> additional syntax.

And perhaps more constructively, we do need to prioritize address and data
dependencies over control dependencies. For one thing, there are a lot
more address/data dependencies in existing code than there are control
dependencies, and (sadly, perhaps more importantly) there are a lot more
people who are convinced that address/data dependencies are important.

For another (admittedly more theoretical) thing, the OOTA scenarios
stemming from control dependencies are a lot less annoying than those
from address/data dependencies.

And address/data dependencies are, as far as I know, invulnerable to things
like conditional-move instructions that can cause problems for control
dependencies.

Nevertheless, yes, control dependencies also need attention.

Thanx, Paul

2020-07-06 18:32:30

by Peter Zijlstra

[permalink] [raw]
Subject: Re: [PATCH 00/22] add support for Clang LTO

On Mon, Jul 06, 2020 at 09:26:33AM -0700, Paul E. McKenney wrote:

> And perhaps more constructively, we do need to prioritize address and data
> dependencies over control dependencies. For one thing, there are a lot
> more address/data dependencies in existing code than there are control
> dependencies, and (sadly, perhaps more importantly) there are a lot more
> people who are convinced that address/data dependencies are important.

If they do not consider their Linux OS running correctly :-)

> For another (admittedly more theoretical) thing, the OOTA scenarios
> stemming from control dependencies are a lot less annoying than those
> from address/data dependencies.
>
> And address/data dependencies are, as far as I know, invulnerable to things
> like conditional-move instructions that can cause problems for control
> dependencies.
>
> Nevertheless, yes, control dependencies also need attention.

Today I added one more \o/

2020-07-06 18:42:31

by Paul E. McKenney

[permalink] [raw]
Subject: Re: [PATCH 00/22] add support for Clang LTO

On Mon, Jul 06, 2020 at 08:29:26PM +0200, Peter Zijlstra wrote:
> On Mon, Jul 06, 2020 at 09:26:33AM -0700, Paul E. McKenney wrote:
>
> > And perhaps more constructively, we do need to prioritize address and data
> > dependencies over control dependencies. For one thing, there are a lot
> > more address/data dependencies in existing code than there are control
> > dependencies, and (sadly, perhaps more importantly) there are a lot more
> > people who are convinced that address/data dependencies are important.
>
> If they do not consider their Linux OS running correctly :-)

Many of them really do not care at all. In fact, some would consider
Linux failing to run as an added bonus.

> > For another (admittedly more theoretical) thing, the OOTA scenarios
> > stemming from control dependencies are a lot less annoying than those
> > from address/data dependencies.
> >
> > And address/data dependencies are, as far as I know, invulnerable to things
> > like conditional-move instructions that can cause problems for control
> > dependencies.
> >
> > Nevertheless, yes, control dependencies also need attention.
>
> Today I added one more \o/

Just make sure you continually check to make sure that compilers
don't break it, along with the others you have added. ;-)

Thanx, Paul

2020-07-06 19:41:10

by Peter Zijlstra

[permalink] [raw]
Subject: Re: [PATCH 00/22] add support for Clang LTO

On Mon, Jul 06, 2020 at 11:39:33AM -0700, Paul E. McKenney wrote:
> On Mon, Jul 06, 2020 at 08:29:26PM +0200, Peter Zijlstra wrote:
> > On Mon, Jul 06, 2020 at 09:26:33AM -0700, Paul E. McKenney wrote:

> > If they do not consider their Linux OS running correctly :-)
>
> Many of them really do not care at all. In fact, some would consider
> Linux failing to run as an added bonus.

This I think is why we have compiler people in the thread that care a
lot more.

> > > Nevertheless, yes, control dependencies also need attention.
> >
> > Today I added one more \o/
>
> Just make sure you continually check to make sure that compilers
> don't break it, along with the others you have added. ;-)

There's:

kernel/locking/mcs_spinlock.h: smp_cond_load_acquire(l, VAL); \
kernel/sched/core.c: smp_cond_load_acquire(&p->on_cpu, !VAL);
kernel/smp.c: smp_cond_load_acquire(&csd->node.u_flags, !(VAL & CSD_FLAG_LOCK));

arch/x86/kernel/alternative.c: atomic_cond_read_acquire(&desc.refs, !VAL);
kernel/locking/qrwlock.c: atomic_cond_read_acquire(&lock->cnts, !(VAL & _QW_LOCKED));
kernel/locking/qrwlock.c: atomic_cond_read_acquire(&lock->cnts, !(VAL & _QW_LOCKED));
kernel/locking/qrwlock.c: atomic_cond_read_acquire(&lock->cnts, VAL == _QW_WAITING);
kernel/locking/qspinlock.c: atomic_cond_read_acquire(&lock->val, !(VAL & _Q_LOCKED_MASK));
kernel/locking/qspinlock.c: val = atomic_cond_read_acquire(&lock->val, !(VAL & _Q_LOCKED_PENDING_MASK));

include/linux/refcount.h: smp_acquire__after_ctrl_dep();
ipc/mqueue.c: smp_acquire__after_ctrl_dep();
ipc/msg.c: smp_acquire__after_ctrl_dep();
ipc/sem.c: smp_acquire__after_ctrl_dep();
kernel/locking/rwsem.c: smp_acquire__after_ctrl_dep();
kernel/sched/core.c: smp_acquire__after_ctrl_dep();

kernel/events/ring_buffer.c:__perf_output_begin()

And I'm fairly sure I'm forgetting some... One could argue there's too
many of them to check already.

Both GCC and CLANG had better think about it.

2020-07-06 23:44:24

by Paul E. McKenney

[permalink] [raw]
Subject: Re: [PATCH 00/22] add support for Clang LTO

On Mon, Jul 06, 2020 at 09:40:12PM +0200, Peter Zijlstra wrote:
> On Mon, Jul 06, 2020 at 11:39:33AM -0700, Paul E. McKenney wrote:
> > On Mon, Jul 06, 2020 at 08:29:26PM +0200, Peter Zijlstra wrote:
> > > On Mon, Jul 06, 2020 at 09:26:33AM -0700, Paul E. McKenney wrote:
>
> > > If they do not consider their Linux OS running correctly :-)
> >
> > Many of them really do not care at all. In fact, some would consider
> > Linux failing to run as an added bonus.
>
> This I think is why we have compiler people in the thread that care a
> lot more.

Here is hoping! ;-)

> > > > Nevertheless, yes, control dependencies also need attention.
> > >
> > > Today I added one more \o/
> >
> > Just make sure you continually check to make sure that compilers
> > don't break it, along with the others you have added. ;-)
>
> There's:
>
> kernel/locking/mcs_spinlock.h: smp_cond_load_acquire(l, VAL); \
> kernel/sched/core.c: smp_cond_load_acquire(&p->on_cpu, !VAL);
> kernel/smp.c: smp_cond_load_acquire(&csd->node.u_flags, !(VAL & CSD_FLAG_LOCK));
>
> arch/x86/kernel/alternative.c: atomic_cond_read_acquire(&desc.refs, !VAL);
> kernel/locking/qrwlock.c: atomic_cond_read_acquire(&lock->cnts, !(VAL & _QW_LOCKED));
> kernel/locking/qrwlock.c: atomic_cond_read_acquire(&lock->cnts, !(VAL & _QW_LOCKED));
> kernel/locking/qrwlock.c: atomic_cond_read_acquire(&lock->cnts, VAL == _QW_WAITING);
> kernel/locking/qspinlock.c: atomic_cond_read_acquire(&lock->val, !(VAL & _Q_LOCKED_MASK));
> kernel/locking/qspinlock.c: val = atomic_cond_read_acquire(&lock->val, !(VAL & _Q_LOCKED_PENDING_MASK));
>
> include/linux/refcount.h: smp_acquire__after_ctrl_dep();
> ipc/mqueue.c: smp_acquire__after_ctrl_dep();
> ipc/msg.c: smp_acquire__after_ctrl_dep();
> ipc/sem.c: smp_acquire__after_ctrl_dep();
> kernel/locking/rwsem.c: smp_acquire__after_ctrl_dep();
> kernel/sched/core.c: smp_acquire__after_ctrl_dep();
>
> kernel/events/ring_buffer.c:__perf_output_begin()
>
> And I'm fairly sure I'm forgetting some... One could argue there's too
> many of them to check already.
>
> Both GCC and CLANG had better think about it.

That would be good!

I won't list the number of address/data dependencies given that there
are well over a thousand of them.

Thanx, Paul

2020-07-07 15:54:12

by Sami Tolvanen

[permalink] [raw]
Subject: Re: [PATCH 00/22] add support for Clang LTO

On Mon, Jun 29, 2020 at 04:20:59PM -0700, Sami Tolvanen wrote:
> Hi Masahiro,
>
> On Mon, Jun 29, 2020 at 01:56:19AM +0900, Masahiro Yamada wrote:
> > I also got an error for
> > ARCH=arm64 allyesconfig + CONFIG_LTO_CLANG=y
> >
> >
> >
> > $ make ARCH=arm64 LLVM=1 LLVM_IAS=1
> > CROSS_COMPILE=~/tools/aarch64-linaro-7.5/bin/aarch64-linux-gnu-
> > -j24
> >
> > ...
> >
> > GEN .version
> > CHK include/generated/compile.h
> > UPD include/generated/compile.h
> > CC init/version.o
> > AR init/built-in.a
> > GEN .tmp_initcalls.lds
> > GEN .tmp_symversions.lds
> > LTO vmlinux.o
> > MODPOST vmlinux.symvers
> > MODINFO modules.builtin.modinfo
> > GEN modules.builtin
> > LD .tmp_vmlinux.kallsyms1
> > ld.lld: error: undefined symbol: __compiletime_assert_905
> > >>> referenced by irqbypass.c
> > >>> vmlinux.o:(jeq_imm)
> > make: *** [Makefile:1161: vmlinux] Error 1
>
> I can reproduce this with ToT LLVM and it's BUILD_BUG_ON_MSG(..., "value
> too large for the field") in drivers/net/ethernet/netronome/nfp/bpf/jit.c.
> Specifically, the FIELD_FIT / __BF_FIELD_CHECK macro in ur_load_imm_any.
>
> This compiles just fine with an earlier LLVM revision, so it could be a
> relatively recent regression. I'll take a look. Thanks for catching this!

After spending some time debugging this with Nick, it looks like the
error is caused by a recent optimization change in LLVM, which together
with the inlining of ur_load_imm_any into jeq_imm, changes a runtime
check in FIELD_FIT that would always fail, to a compile-time check that
breaks the build. In jeq_imm, we have:

/* struct bpf_insn: _s32 imm */
u64 imm = insn->imm; /* sign extend */
...
if (imm >> 32) { /* non-zero only if insn->imm is negative */
        /* inlined from ur_load_imm_any */
        u32 __imm = imm >> 32; /* therefore, always 0xffffffff */

        /*
         * __imm has a value known at compile-time, which means
         * __builtin_constant_p(__imm) is true and we end up with
         * essentially this in __BF_FIELD_CHECK:
         */
        if (__builtin_constant_p(__imm) && __imm <= 255)
                __compiletime_assert_N();

The compile-time check comes from the following BUILD_BUG_ON_MSG:

#define __BF_FIELD_CHECK(_mask, _reg, _val, _pfx)               \
        ...                                                     \
        BUILD_BUG_ON_MSG(__builtin_constant_p(_val) ?           \
                         ~((_mask) >> __bf_shf(_mask)) & (_val) : 0, \
                         _pfx "value too large for the field"); \

While we could stop the compiler from performing this optimization by
telling it to never inline ur_load_imm_any, we feel like a better fix
might be to replace FIELD_FIT(UR_REG_IMM_MAX, imm) with a simple imm
<= UR_REG_IMM_MAX check that won't trip a compile-time assertion even
when the condition is known to fail.
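As a minimal sketch of the proposed replacement (hedged: the
surrounding jit.c code is paraphrased from this discussion, not quoted
from the tree):

	/* before: may now become a compile-time assert once LTO lets
	 * the compiler prove imm's value at this call site */
	if (FIELD_FIT(UR_REG_IMM_MAX, imm))
		/* ... use the immediate encoding ... */

	/* after: an equivalent runtime check that can never trip
	 * BUILD_BUG_ON_MSG */
	if (imm <= UR_REG_IMM_MAX)
		/* ... use the immediate encoding ... */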

Jiong, Jakub, do you see any issues here?

Sami

2020-07-07 16:08:57

by Sami Tolvanen

[permalink] [raw]
Subject: Re: [PATCH 00/22] add support for Clang LTO

On Tue, Jul 07, 2020 at 08:51:07AM -0700, Sami Tolvanen wrote:
> After spending some time debugging this with Nick, it looks like the
> error is caused by a recent optimization change in LLVM, which together
> with the inlining of ur_load_imm_any into jeq_imm, changes a runtime
> check in FIELD_FIT that would always fail, to a compile-time check that
> breaks the build. In jeq_imm, we have:
>
> /* struct bpf_insn: _s32 imm */
> u64 imm = insn->imm; /* sign extend */
> ...
> if (imm >> 32) { /* non-zero only if insn->imm is negative */
> /* inlined from ur_load_imm_any */
> u32 __imm = imm >> 32; /* therefore, always 0xffffffff */
>
> /*
> * __imm has a value known at compile-time, which means
> * __builtin_constant_p(__imm) is true and we end up with
> * essentially this in __BF_FIELD_CHECK:
> */
> if (__builtin_constant_p(__imm) && __imm <= 255)

Should be __imm > 255, of course, which means the compiler will generate
a call to __compiletime_assert.

> Jiong, Jakub, do you see any issues here?

(Jiong's email bounced, so removing from the recipient list.)

Sami

2020-07-07 16:57:56

by Jakub Kicinski

[permalink] [raw]
Subject: Re: [PATCH 00/22] add support for Clang LTO

On Tue, 7 Jul 2020 09:05:28 -0700 Sami Tolvanen wrote:
> On Tue, Jul 07, 2020 at 08:51:07AM -0700, Sami Tolvanen wrote:
> > After spending some time debugging this with Nick, it looks like the
> > error is caused by a recent optimization change in LLVM, which together
> > with the inlining of ur_load_imm_any into jeq_imm, changes a runtime
> > check in FIELD_FIT that would always fail, to a compile-time check that
> > breaks the build. In jeq_imm, we have:
> >
> > /* struct bpf_insn: _s32 imm */
> > u64 imm = insn->imm; /* sign extend */
> > ...
> > if (imm >> 32) { /* non-zero only if insn->imm is negative */
> > /* inlined from ur_load_imm_any */
> > u32 __imm = imm >> 32; /* therefore, always 0xffffffff */
> >
> > /*
> > * __imm has a value known at compile-time, which means
> > * __builtin_constant_p(__imm) is true and we end up with
> > * essentially this in __BF_FIELD_CHECK:
> > */
> > if (__builtin_constant_p(__imm) && __imm <= 255)
>
> Should be __imm > 255, of course, which means the compiler will generate
> a call to __compiletime_assert.

I think FIELD_FIT() should not pass the value into __BF_FIELD_CHECK().

So:

diff --git a/include/linux/bitfield.h b/include/linux/bitfield.h
index 48ea093ff04c..4e035aca6f7e 100644
--- a/include/linux/bitfield.h
+++ b/include/linux/bitfield.h
@@ -77,7 +77,7 @@
*/
#define FIELD_FIT(_mask, _val) \
({ \
- __BF_FIELD_CHECK(_mask, 0ULL, _val, "FIELD_FIT: "); \
+ __BF_FIELD_CHECK(_mask, 0ULL, 0ULL, "FIELD_FIT: "); \
!((((typeof(_mask))_val) << __bf_shf(_mask)) & ~(_mask)); \
})

It's perfectly legal to pass a constant which does not fit, in which
case FIELD_FIT() should just return false, not break the build.

Right?
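
A hedged sketch of the intended semantics once this diff is applied
(mask and value invented for the example; kernel context, i.e.
linux/bitfield.h and linux/bits.h):

	/* 0x1ff needs 9 bits, so it cannot fit in an 8-bit field; with
	 * the change this simply evaluates to false at runtime instead
	 * of tripping BUILD_BUG_ON_MSG at build time */
	static bool example(void)
	{
		return FIELD_FIT(GENMASK(7, 0), 0x1ff);
	}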

2020-07-07 17:18:06

by Nick Desaulniers

[permalink] [raw]
Subject: Re: [PATCH 00/22] add support for Clang LTO

On Tue, Jul 7, 2020 at 9:56 AM Jakub Kicinski <[email protected]> wrote:
>
> > On Tue, Jul 07, 2020 at 08:51:07AM -0700, Sami Tolvanen wrote:
> > > After spending some time debugging this with Nick, it looks like the
> > > error is caused by a recent optimization change in LLVM, which together
> > > with the inlining of ur_load_imm_any into jeq_imm, changes a runtime
> > > check in FIELD_FIT that would always fail, to a compile-time check that
> > > breaks the build. In jeq_imm, we have:
> > >
> > > /* struct bpf_insn: _s32 imm */
> > > u64 imm = insn->imm; /* sign extend */
> > > ...
> > > if (imm >> 32) { /* non-zero only if insn->imm is negative */
> > > /* inlined from ur_load_imm_any */
> > > u32 __imm = imm >> 32; /* therefore, always 0xffffffff */
> > >
> > > /*
> > > * __imm has a value known at compile-time, which means
> > > * __builtin_constant_p(__imm) is true and we end up with
> > > * essentially this in __BF_FIELD_CHECK:
> > > */
> > > if (__builtin_constant_p(__imm) && __imm > 255)
>
> I think FIELD_FIT() should not pass the value into __BF_FIELD_CHECK().
>
> So:
>
> diff --git a/include/linux/bitfield.h b/include/linux/bitfield.h
> index 48ea093ff04c..4e035aca6f7e 100644
> --- a/include/linux/bitfield.h
> +++ b/include/linux/bitfield.h
> @@ -77,7 +77,7 @@
> */
> #define FIELD_FIT(_mask, _val) \
> ({ \
> - __BF_FIELD_CHECK(_mask, 0ULL, _val, "FIELD_FIT: "); \
> + __BF_FIELD_CHECK(_mask, 0ULL, 0ULL, "FIELD_FIT: "); \
> !((((typeof(_mask))_val) << __bf_shf(_mask)) & ~(_mask)); \
> })
>
> It's perfectly legal to pass a constant which does not fit, in which
> case FIELD_FIT() should just return false not break the build.
>
> Right?

I see the value of the __builtin_constant_p check; this is just a very
interesting case where, rather than an integer literal appearing in the
source, the compiler is able to deduce that the parameter can only
have one value at this call site, and allows __builtin_constant_p to
evaluate to true for it.
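
A standalone sketch of that effect, mirroring the shape of jeq_imm
above (hedged: whether the call folds to a constant depends on the
compiler and optimization level; a recent Clang at -O2 is what exposed
this):

	static inline int reports_constant(unsigned int x)
	{
		/* x is not a literal in the source, but the compiler may
		 * still prove it has exactly one value after inlining */
		return __builtin_constant_p(x);
	}

	int demo(int v)
	{
		unsigned long long imm = (long long)v;	/* sign extend */

		if (imm >> 32) {
			/* on this path imm >> 32 can only be 0xffffffff */
			return reports_constant(imm >> 32);
		}
		return 0;
	}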

I had asked Sami about the comment above FIELD_FIT:
"""
76 * Return: true if @_val can fit inside @_mask, false if @_val is
too big.
"""
which doesn't hold when @_val is too big and a compile-time constant:
rather than returning false, FIELD_FIT breaks the build.

Of the 14 expansion sites of FIELD_FIT I see in mainline, it doesn't
look like any integral literals are passed, so maybe the compile-time
checks of _val are of little value for FIELD_FIT.

So I think your suggested diff is the most concise fix.
--
Thanks,
~Nick Desaulniers

2020-07-07 17:33:32

by Jakub Kicinski

[permalink] [raw]
Subject: Re: [PATCH 00/22] add support for Clang LTO

On Tue, 7 Jul 2020 10:17:25 -0700 Nick Desaulniers wrote:
> On Tue, Jul 7, 2020 at 9:56 AM Jakub Kicinski <[email protected]> wrote:
> >
> > > On Tue, Jul 07, 2020 at 08:51:07AM -0700, Sami Tolvanen wrote:
> > > > After spending some time debugging this with Nick, it looks like the
> > > > error is caused by a recent optimization change in LLVM, which together
> > > > with the inlining of ur_load_imm_any into jeq_imm, changes a runtime
> > > > check in FIELD_FIT that would always fail, to a compile-time check that
> > > > breaks the build. In jeq_imm, we have:
> > > >
> > > > /* struct bpf_insn: _s32 imm */
> > > > u64 imm = insn->imm; /* sign extend */
> > > > ...
> > > > if (imm >> 32) { /* non-zero only if insn->imm is negative */
> > > > /* inlined from ur_load_imm_any */
> > > > u32 __imm = imm >> 32; /* therefore, always 0xffffffff */
> > > >
> > > > /*
> > > > * __imm has a value known at compile-time, which means
> > > > * __builtin_constant_p(__imm) is true and we end up with
> > > > * essentially this in __BF_FIELD_CHECK:
> > > > */
> > > > if (__builtin_constant_p(__imm) && __imm > 255)
> >
> > I think FIELD_FIT() should not pass the value into __BF_FIELD_CHECK().
> >
> > So:
> >
> > diff --git a/include/linux/bitfield.h b/include/linux/bitfield.h
> > index 48ea093ff04c..4e035aca6f7e 100644
> > --- a/include/linux/bitfield.h
> > +++ b/include/linux/bitfield.h
> > @@ -77,7 +77,7 @@
> > */
> > #define FIELD_FIT(_mask, _val) \
> > ({ \
> > - __BF_FIELD_CHECK(_mask, 0ULL, _val, "FIELD_FIT: "); \
> > + __BF_FIELD_CHECK(_mask, 0ULL, 0ULL, "FIELD_FIT: "); \
> > !((((typeof(_mask))_val) << __bf_shf(_mask)) & ~(_mask)); \
> > })
> >
> > It's perfectly legal to pass a constant which does not fit, in which
> > case FIELD_FIT() should just return false not break the build.
> >
> > Right?
>
> I see the value of the __builtin_constant_p check; this is just a very
> interesting case where rather than an integer literal appearing in the
> source, the compiler is able to deduce that the parameter can only
> have one value in one case, and allows __builtin_constant_p to
> evaluate to true for it.
>
> I had definitely asked Sami about the comment above FIELD_FIT:
> """
> 76 * Return: true if @_val can fit inside @_mask, false if @_val is
> too big.
> """
> in which FIELD_FIT doesn't return false if @_val is too big and a
> compile time constant. (Rather it breaks the build).
>
> Of the 14 expansion sites of FIELD_FIT I see in mainline, it doesn't
> look like any integral literals are passed, so maybe the compile time
> checks of _val are of little value for FIELD_FIT.

Also I just double checked and all FIELD_FIT() uses check the return
value.

> So I think your suggested diff is the most concise fix.

Feel free to submit that officially as a patch if it fixes the build
for you, here's my sign-off:

Signed-off-by: Jakub Kicinski <[email protected]>

2020-07-11 16:33:24

by Paul Menzel

[permalink] [raw]
Subject: Re: [PATCH 00/22] add support for Clang LTO

Dear Sami,


Am 24.06.20 um 22:31 schrieb Sami Tolvanen:
> This patch series adds support for building x86_64 and arm64 kernels
> with Clang's Link Time Optimization (LTO).
>
> In addition to performance, the primary motivation for LTO is to allow
> Clang's Control-Flow Integrity (CFI) to be used in the kernel. Google's
> Pixel devices have shipped with LTO+CFI kernels since 2018.
>
> Most of the patches are build system changes for handling LLVM bitcode,
> which Clang produces with LTO instead of ELF object files, postponing
> ELF processing until a later stage, and ensuring initcall ordering.
>
> Note that first objtool patch in the series is already in linux-next,
> but as it's needed with LTO, I'm including it also here to make testing
> easier.

[…]

Thank you very much for sending these changes.

Do you have a branch where your current work can be pulled from? Your
branch on GitHub [1] seems 15 months old.

Out of curiosity, I applied the changes, allowed the selection for i386
(x86), and with Clang 1:11~++20200701093119+ffee8040534-1~exp1 from
Debian experimental, it failed with `Invalid absolute R_386_32
relocation: KERNEL_PAGES`:

> make -f ./scripts/Makefile.build obj=arch/x86/boot arch/x86/boot/bzImage
> make -f ./scripts/Makefile.build obj=arch/x86/boot/compressed arch/x86/boot/compressed/vmlinux
> llvm-nm vmlinux | sed -n -e 's/^\([0-9a-fA-F]*\) [ABCDGRSTVW] \(_text\|__bss_start\|_end\)$/#define VO_ _AC(0x,UL)/p' > arch/x86/boot/compressed/../voffset.h
> clang -Wp,-MMD,arch/x86/boot/compressed/.misc.o.d -nostdinc -isystem /usr/lib/llvm-11/lib/clang/11.0.0/include -I./arch/x86/include -I./arch/x86/include/generated -I./include -I./arch/x86/include/uapi -I./arch/x86/include/generated/uapi -I./include/uapi -I./include/generated/uapi -include ./include/linux/kconfig.h -include ./include/linux/compiler_types.h -D__KERNEL__ -Qunused-arguments -m32 -O2 -fno-strict-aliasing -fPIE -DDISABLE_BRANCH_PROFILING -march=i386 -mno-mmx -mno-sse -ffreestanding -fno-stack-protector -Wno-address-of-packed-member -Wno-gnu -Wno-pointer-sign -fmacro-prefix-map=./= -fno-asynchronous-unwind-tables -DKBUILD_MODFILE='"arch/x86/boot/compressed/misc"' -DKBUILD_BASENAME='"misc"' -DKBUILD_MODNAME='"misc"' -D__KBUILD_MODNAME=misc -c -o arch/x86/boot/compressed/misc.o arch/x86/boot/compressed/misc.c
> llvm-objcopy -R .comment -S vmlinux arch/x86/boot/compressed/vmlinux.bin
> arch/x86/tools/relocs vmlinux > arch/x86/boot/compressed/vmlinux.relocs;arch/x86/tools/relocs --abs-relocs vmlinux
> Invalid absolute R_386_32 relocation: KERNEL_PAGES
> make[2]: *** [arch/x86/boot/compressed/Makefile:134: arch/x86/boot/compressed/vmlinux.relocs] Error 1
> make[2]: *** Deleting file 'arch/x86/boot/compressed/vmlinux.relocs'
> make[1]: *** [arch/x86/boot/Makefile:115: arch/x86/boot/compressed/vmlinux] Error 2
> make: *** [arch/x86/Makefile:268: bzImage] Error 2


Kind regards,

Paul



[1]: https://github.com/samitolvanen/linux/tree/clang-lto

2020-07-12 09:02:18

by Sedat Dilek

[permalink] [raw]
Subject: Re: [PATCH 00/22] add support for Clang LTO

On Sat, Jul 11, 2020 at 6:32 PM Paul Menzel <[email protected]> wrote:
>
> Dear Sami,
>
>
> Am 24.06.20 um 22:31 schrieb Sami Tolvanen:
> > This patch series adds support for building x86_64 and arm64 kernels
> > with Clang's Link Time Optimization (LTO).
> >
> > In addition to performance, the primary motivation for LTO is to allow
> > Clang's Control-Flow Integrity (CFI) to be used in the kernel. Google's
> > Pixel devices have shipped with LTO+CFI kernels since 2018.
> >
> > Most of the patches are build system changes for handling LLVM bitcode,
> > which Clang produces with LTO instead of ELF object files, postponing
> > ELF processing until a later stage, and ensuring initcall ordering.
> >
> > Note that first objtool patch in the series is already in linux-next,
> > but as it's needed with LTO, I'm including it also here to make testing
> > easier.
>
> […]
>
> Thank you very much for sending these changes.
>
> Do you have a branch, where your current work can be pulled from? Your
> branch on GitHub [1] seems 15 months old.
>

Agreed, it's easier to git-pull.
I have seen [1] - not sure if this is the latest version.
Alternatively, you can check patchwork LKML by searching for $submitter [2].
(You can open patch 01/22 and download the whole patch series by
following the "series" link, see [3].)

- Sedat -

[1] https://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild.git/log/?h=lto
[2] https://lore.kernel.org/patchwork/project/lkml/list/?series=&submitter=19676
[3] https://lore.kernel.org/patchwork/series/450026/mbox/

2020-07-12 18:42:32

by Nathan Chancellor

[permalink] [raw]
Subject: Re: [PATCH 00/22] add support for Clang LTO

On Sun, Jul 12, 2020 at 10:59:17AM +0200, Sedat Dilek wrote:
> On Sat, Jul 11, 2020 at 6:32 PM Paul Menzel <[email protected]> wrote:
> >
> > Dear Sami,
> >
> >
> > Am 24.06.20 um 22:31 schrieb Sami Tolvanen:
> > > This patch series adds support for building x86_64 and arm64 kernels
> > > with Clang's Link Time Optimization (LTO).
> > >
> > > In addition to performance, the primary motivation for LTO is to allow
> > > Clang's Control-Flow Integrity (CFI) to be used in the kernel. Google's
> > > Pixel devices have shipped with LTO+CFI kernels since 2018.
> > >
> > > Most of the patches are build system changes for handling LLVM bitcode,
> > > which Clang produces with LTO instead of ELF object files, postponing
> > > ELF processing until a later stage, and ensuring initcall ordering.
> > >
> > > Note that first objtool patch in the series is already in linux-next,
> > > but as it's needed with LTO, I'm including it also here to make testing
> > > easier.
> >
> > […]
> >
> > Thank you very much for sending these changes.
> >
> > Do you have a branch, where your current work can be pulled from? Your
> > branch on GitHub [1] seems 15 months old.
> >
>
> Agreed it's easier to git-pull.
> I have seen [1] - not sure if this is the latest version.
> Alternatively, you can check patchwork LKML by searching for $submitter.
> ( You can open patch 01/22 and download the whole patch-series by
> following the link "series", see [3]. )
>
> - Sedat -
>
> [1] https://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild.git/log/?h=lto
> [2] https://lore.kernel.org/patchwork/project/lkml/list/?series=&submitter=19676
> [3] https://lore.kernel.org/patchwork/series/450026/mbox/
>

Sami tagged this series on his GitHub:

https://github.com/samitolvanen/linux/releases/tag/lto-v1

git pull https://github.com/samitolvanen/linux lto-v1

Otherwise, he is updating the clang-cfi branch that includes both the
LTO and CFI patchsets. You can pull that and just turn on
CONFIG_LTO_CLANG.

Lastly, for the future, I would recommend grabbing b4 to easily apply
patches (specifically full series) from lore.kernel.org.

https://git.kernel.org/pub/scm/utils/b4/b4.git/
https://git.kernel.org/pub/scm/utils/b4/b4.git/tree/README.rst

You could grab this series and apply it easily by either downloading the
mbox file and following the instructions it gives for applying the mbox
file:

$ b4 am [email protected]

or I prefer piping so that I don't have to clean up later:

$ b4 am -o - [email protected] | git am

Cheers,
Nathan

2020-07-12 23:35:26

by Sami Tolvanen

[permalink] [raw]
Subject: Re: [PATCH 00/22] add support for Clang LTO

On Sat, Jul 11, 2020 at 9:32 AM Paul Menzel <[email protected]> wrote:
> Thank you very much for sending these changes.
>
> Do you have a branch, where your current work can be pulled from? Your
> branch on GitHub [1] seems 15 months old.

The clang-lto branch is rebased regularly on top of Linus' tree.
GitHub just looks at the commit date of the last commit in the tree,
which isn't all that informative.

> Out of curiosity, I applied the changes, allowed the selection for i386
> (x86), and with Clang 1:11~++20200701093119+ffee8040534-1~exp1 from
> Debian experimental, it failed with `Invalid absolute R_386_32
> relocation: KERNEL_PAGES`:

I haven't looked at getting this to work on i386, which is why we only
select ARCH_SUPPORTS_LTO for x86_64. I would expect there to be a few
issues to address.

> > arch/x86/tools/relocs vmlinux > arch/x86/boot/compressed/vmlinux.relocs;arch/x86/tools/relocs --abs-relocs vmlinux
> > Invalid absolute R_386_32 relocation: KERNEL_PAGES

KERNEL_PAGES looks like a constant, so it's probably safe to ignore
the absolute relocation in tools/relocs.c.

Sami

2020-07-14 09:46:25

by Sedat Dilek

[permalink] [raw]
Subject: Re: [PATCH 00/22] add support for Clang LTO

On Sun, Jul 12, 2020 at 8:40 PM Nathan Chancellor
<[email protected]> wrote:
>
> On Sun, Jul 12, 2020 at 10:59:17AM +0200, Sedat Dilek wrote:
> > On Sat, Jul 11, 2020 at 6:32 PM Paul Menzel <[email protected]> wrote:
> > >
> > > Dear Sami,
> > >
> > >
> > > Am 24.06.20 um 22:31 schrieb Sami Tolvanen:
> > > > This patch series adds support for building x86_64 and arm64 kernels
> > > > with Clang's Link Time Optimization (LTO).
> > > >
> > > > In addition to performance, the primary motivation for LTO is to allow
> > > > Clang's Control-Flow Integrity (CFI) to be used in the kernel. Google's
> > > > Pixel devices have shipped with LTO+CFI kernels since 2018.
> > > >
> > > > Most of the patches are build system changes for handling LLVM bitcode,
> > > > which Clang produces with LTO instead of ELF object files, postponing
> > > > ELF processing until a later stage, and ensuring initcall ordering.
> > > >
> > > > Note that first objtool patch in the series is already in linux-next,
> > > > but as it's needed with LTO, I'm including it also here to make testing
> > > > easier.
> > >
> > > […]
> > >
> > > Thank you very much for sending these changes.
> > >
> > > Do you have a branch, where your current work can be pulled from? Your
> > > branch on GitHub [1] seems 15 months old.
> > >
> >
> > Agreed it's easier to git-pull.
> > I have seen [1] - not sure if this is the latest version.
> > Alternatively, you can check patchwork LKML by searching for $submitter.
> > ( You can open patch 01/22 and download the whole patch-series by
> > following the link "series", see [3]. )
> >
> > - Sedat -
> >
> > [1] https://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild.git/log/?h=lto
> > [2] https://lore.kernel.org/patchwork/project/lkml/list/?series=&submitter=19676
> > [3] https://lore.kernel.org/patchwork/series/450026/mbox/
> >
>
> Sami tagged this series on his GitHub:
>
> https://github.com/samitolvanen/linux/releases/tag/lto-v1
>
> git pull https://github.com/samitolvanen/linux lto-v1
>
> Otherwise, he is updating the clang-cfi branch that includes both the
> LTO and CFI patchsets. You can pull that and just turn on
> CONFIG_LTO_CLANG.
>
> Lastly, for the future, I would recommend grabbing b4 to easily apply
> patches (specifically full series) from lore.kernel.org.
>
> https://git.kernel.org/pub/scm/utils/b4/b4.git/
> https://git.kernel.org/pub/scm/utils/b4/b4.git/tree/README.rst
>
> You could grab this series and apply it easily by either downloading the
> mbox file and following the instructions it gives for applying the mbox
> file:
>
> $ b4 am [email protected]
>
> or I prefer piping so that I don't have to clean up later:
>
> $ b4 am -o - [email protected] | git am
>

It is always a pleasure to read your replies; they enrich my know-how
beyond Linux-kernel hacking :-).

Thanks for the tip about the "b4" tool.
Might it be worth adding this to our ClangBuiltLinux wiki "Command line
tips and tricks" [1]?

- Sedat -

[1] https://github.com/ClangBuiltLinux/linux/wiki/Command-line-tips-and-tricks

2020-07-14 12:19:30

by Paul Menzel

[permalink] [raw]
Subject: Re: [PATCH 00/22] add support for Clang LTO

Dear Sami,


Am 13.07.20 um 01:34 schrieb Sami Tolvanen:
> On Sat, Jul 11, 2020 at 9:32 AM Paul Menzel <[email protected]> wrote:
>> Thank you very much for sending these changes.
>>
>> Do you have a branch, where your current work can be pulled from? Your
>> branch on GitHub [1] seems 15 months old.
>
> The clang-lto branch is rebased regularly on top of Linus' tree.
> GitHub just looks at the commit date of the last commit in the tree,
> which isn't all that informative.

Thank you for clearing this up, and sorry for not checking myself.

>> Out of curiosity, I applied the changes, allowed the selection for i386
>> (x86), and with Clang 1:11~++20200701093119+ffee8040534-1~exp1 from
>> Debian experimental, it failed with `Invalid absolute R_386_32
>> relocation: KERNEL_PAGES`:
>
> I haven't looked at getting this to work on i386, which is why we only
> select ARCH_SUPPORTS_LTO for x86_64. I would expect there to be a few
> issues to address.
>
>>> arch/x86/tools/relocs vmlinux > arch/x86/boot/compressed/vmlinux.relocs;arch/x86/tools/relocs --abs-relocs vmlinux
>>> Invalid absolute R_386_32 relocation: KERNEL_PAGES
>
> KERNEL_PAGES looks like a constant, so it's probably safe to ignore
> the absolute relocation in tools/relocs.c.

Thank you for pointing me in the right direction. I am happy to report
that with the diff below (no idea which list the string should be added
to), Linux 5.8-rc5 with the LLVM/Clang/LTO patches on top builds and
boots on the ASRock E350M1.

```
diff --git a/arch/x86/tools/relocs.c b/arch/x86/tools/relocs.c
index 8f3bf34840cef..e91af127ed3c0 100644
--- a/arch/x86/tools/relocs.c
+++ b/arch/x86/tools/relocs.c
@@ -79,6 +79,7 @@ static const char * const sym_regex_kernel[S_NSYMTYPES] = {
"__end_rodata_hpage_align|"
#endif
"__vvar_page|"
+ "KERNEL_PAGES|"
"_end)$"
};
```


Kind regards,

Paul

2020-07-14 12:36:50

by Sedat Dilek

[permalink] [raw]
Subject: Re: [PATCH 00/22] add support for Clang LTO

On Tue, Jul 14, 2020 at 2:16 PM Paul Menzel <[email protected]> wrote:
>
> Dear Sami,
>
>
> Am 13.07.20 um 01:34 schrieb Sami Tolvanen:
> > On Sat, Jul 11, 2020 at 9:32 AM Paul Menzel <[email protected]> wrote:
> >> Thank you very much for sending these changes.
> >>
> >> Do you have a branch, where your current work can be pulled from? Your
> >> branch on GitHub [1] seems 15 months old.
> >
> > The clang-lto branch is rebased regularly on top of Linus' tree.
> > GitHub just looks at the commit date of the last commit in the tree,
> > which isn't all that informative.
>
> Thank you for clearing this up, and sorry for not checking myself.
>
> >> Out of curiosity, I applied the changes, allowed the selection for i386
> >> (x86), and with Clang 1:11~++20200701093119+ffee8040534-1~exp1 from
> >> Debian experimental, it failed with `Invalid absolute R_386_32
> >> relocation: KERNEL_PAGES`:
> >
> > I haven't looked at getting this to work on i386, which is why we only
> > select ARCH_SUPPORTS_LTO for x86_64. I would expect there to be a few
> > issues to address.
> >
> >>> arch/x86/tools/relocs vmlinux > arch/x86/boot/compressed/vmlinux.relocs;arch/x86/tools/relocs --abs-relocs vmlinux
> >>> Invalid absolute R_386_32 relocation: KERNEL_PAGES
> >
> > KERNEL_PAGES looks like a constant, so it's probably safe to ignore
> > the absolute relocation in tools/relocs.c.
>
> Thank you for pointing me to the right direction. I am happy to report,
> that with the diff below (no idea to what list to add the string), Linux
> 5.8-rc5 with the LLVM/Clang/LTO patches on top, builds and boots on the
> ASRock E350M1.
>
> ```
> diff --git a/arch/x86/tools/relocs.c b/arch/x86/tools/relocs.c
> index 8f3bf34840cef..e91af127ed3c0 100644
> --- a/arch/x86/tools/relocs.c
> +++ b/arch/x86/tools/relocs.c
> @@ -79,6 +79,7 @@ static const char * const sym_regex_kernel[S_NSYMTYPES] = {
> "__end_rodata_hpage_align|"
> #endif
> "__vvar_page|"
> + "KERNEL_PAGES|"
> "_end)$"
> };
> ```
>

What llvm-toolchain and version did you use?

Can you post your linux-config?

Thanks.

- Sedat -

2020-07-14 13:42:17

by Paul Menzel

[permalink] [raw]
Subject: Re: [PATCH 00/22] add support for Clang LTO

Dear Sedat,


Am 14.07.20 um 14:35 schrieb Sedat Dilek:
> On Tue, Jul 14, 2020 at 2:16 PM Paul Menzel <[email protected]> wrote:

>> Am 13.07.20 um 01:34 schrieb Sami Tolvanen:
>>> On Sat, Jul 11, 2020 at 9:32 AM Paul Menzel <[email protected]> wrote:
>>>> Thank you very much for sending these changes.
>>>>
>>>> Do you have a branch, where your current work can be pulled from? Your
>>>> branch on GitHub [1] seems 15 months old.
>>>
>>> The clang-lto branch is rebased regularly on top of Linus' tree.
>>> GitHub just looks at the commit date of the last commit in the tree,
>>> which isn't all that informative.
>>
>> Thank you for clearing this up, and sorry for not checking myself.
>>
>>>> Out of curiosity, I applied the changes, allowed the selection for i386
>>>> (x86), and with Clang 1:11~++20200701093119+ffee8040534-1~exp1 from
>>>> Debian experimental, it failed with `Invalid absolute R_386_32
>>>> relocation: KERNEL_PAGES`:
>>>
>>> I haven't looked at getting this to work on i386, which is why we only
>>> select ARCH_SUPPORTS_LTO for x86_64. I would expect there to be a few
>>> issues to address.
>>>
>>>>> arch/x86/tools/relocs vmlinux > arch/x86/boot/compressed/vmlinux.relocs;arch/x86/tools/relocs --abs-relocs vmlinux
>>>>> Invalid absolute R_386_32 relocation: KERNEL_PAGES
>>>
>>> KERNEL_PAGES looks like a constant, so it's probably safe to ignore
>>> the absolute relocation in tools/relocs.c.
>>
>> Thank you for pointing me to the right direction. I am happy to report,
>> that with the diff below (no idea to what list to add the string), Linux
>> 5.8-rc5 with the LLVM/Clang/LTO patches on top, builds and boots on the
>> ASRock E350M1.
>>
>> ```
>> diff --git a/arch/x86/tools/relocs.c b/arch/x86/tools/relocs.c
>> index 8f3bf34840cef..e91af127ed3c0 100644
>> --- a/arch/x86/tools/relocs.c
>> +++ b/arch/x86/tools/relocs.c
>> @@ -79,6 +79,7 @@ static const char * const sym_regex_kernel[S_NSYMTYPES] = {
>> "__end_rodata_hpage_align|"
>> #endif
>> "__vvar_page|"
>> + "KERNEL_PAGES|"
>> "_end)$"
>> };
>> ```
>
> What llvm-toolchain and version did you use?

See above. ;-) Clang 1:11~++20200701093119+ffee8040534-1~exp1 from
Debian experimental and the build command below.

make bindeb-pkg -j32 ARCH=i386 LLVM=1

> Can you post your linux-config?

Sure. Attached.

$ grep _LTO config-5.8.0-rc5-00107-g5b634a79fc2fa
CONFIG_LTO=y
CONFIG_ARCH_SUPPORTS_LTO_CLANG=y
# CONFIG_LTO_NONE is not set
CONFIG_LTO_CLANG=y


Kind regards,

Paul


Attachments:
config-5.8.0-rc5-00107-g5b634a79fc2fa (139.26 kB)

2020-07-14 17:55:37

by Nick Desaulniers

[permalink] [raw]
Subject: Re: [PATCH 00/22] add support for Clang LTO

On Tue, Jul 14, 2020 at 2:44 AM Sedat Dilek <[email protected]> wrote:
>
> On Sun, Jul 12, 2020 at 8:40 PM Nathan Chancellor
> <[email protected]> wrote:
> >
> > Lastly, for the future, I would recommend grabbing b4 to easily apply
> > patches (specifically full series) from lore.kernel.org.
> >
> > https://git.kernel.org/pub/scm/utils/b4/b4.git/
> > https://git.kernel.org/pub/scm/utils/b4/b4.git/tree/README.rst
> >
> > You could grab this series and apply it easily by either downloading the
> > mbox file and following the instructions it gives for applying the mbox
> > file:
> >
> > $ b4 am [email protected]
> >
> > or I prefer piping so that I don't have to clean up later:
> >
> > $ b4 am -o - [email protected] | git am
> >
>
> It is always a pleasure to read your replies and enrich my know-how
> beyond Linux-kernel hacking :-).
>
> Thanks for the tip with "b4" tool.
> Might add this to our ClangBuiltLinux wiki "Command line tips and tricks"?
>
> - Sedat -
>
> [1] https://github.com/ClangBuiltLinux/linux/wiki/Command-line-tips-and-tricks

Good idea, done.

--
Thanks,
~Nick Desaulniers

2020-07-17 17:31:13

by Sami Tolvanen

[permalink] [raw]
Subject: Re: [RFC][PATCH] objtool,x86_64: Replace recordmcount with objtool

On Fri, Jun 26, 2020 at 4:29 AM Peter Zijlstra <[email protected]> wrote:
>
> On Thu, Jun 25, 2020 at 03:40:42PM -0700, Sami Tolvanen wrote:
>
> > > Not boot tested, but it generates the required sections and they look
> > > more or less as expected, ymmv.
>
> > > diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
> > > index a291823f3f26..189575c12434 100644
> > > --- a/arch/x86/Kconfig
> > > +++ b/arch/x86/Kconfig
> > > @@ -174,7 +174,6 @@ config X86
> > > select HAVE_EXIT_THREAD
> > > select HAVE_FAST_GUP
> > > select HAVE_FENTRY if X86_64 || DYNAMIC_FTRACE
> > > - select HAVE_FTRACE_MCOUNT_RECORD
> > > select HAVE_FUNCTION_GRAPH_TRACER
> > > select HAVE_FUNCTION_TRACER
> > > select HAVE_GCC_PLUGINS
> >
> > This breaks DYNAMIC_FTRACE according to kernel/trace/ftrace.c:
> >
> > #ifndef CONFIG_FTRACE_MCOUNT_RECORD
> > # error Dynamic ftrace depends on MCOUNT_RECORD
> > #endif
> >
> > And the build errors after that seem to confirm this. It looks like we might
> > need another flag to skip recordmcount.
>
> Hurm, Steve, how you want to do that?

Steven, did you have any thoughts about this? Moving recordmcount to
an objtool pass that knows about call sites feels like a much cleaner
solution than annotating kernel code to avoid unwanted relocations.

Sami

2020-07-17 17:37:32

by Steven Rostedt

[permalink] [raw]
Subject: Re: [RFC][PATCH] objtool,x86_64: Replace recordmcount with objtool

On Fri, 17 Jul 2020 10:28:13 -0700
Sami Tolvanen <[email protected]> wrote:

> On Fri, Jun 26, 2020 at 4:29 AM Peter Zijlstra <[email protected]> wrote:
> >
> > On Thu, Jun 25, 2020 at 03:40:42PM -0700, Sami Tolvanen wrote:
> >
> > > > Not boot tested, but it generates the required sections and they look
> > > > more or less as expected, ymmv.
> >
> > > > diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
> > > > index a291823f3f26..189575c12434 100644
> > > > --- a/arch/x86/Kconfig
> > > > +++ b/arch/x86/Kconfig
> > > > @@ -174,7 +174,6 @@ config X86
> > > > select HAVE_EXIT_THREAD
> > > > select HAVE_FAST_GUP
> > > > select HAVE_FENTRY if X86_64 || DYNAMIC_FTRACE
> > > > - select HAVE_FTRACE_MCOUNT_RECORD
> > > > select HAVE_FUNCTION_GRAPH_TRACER
> > > > select HAVE_FUNCTION_TRACER
> > > > select HAVE_GCC_PLUGINS
> > >
> > > This breaks DYNAMIC_FTRACE according to kernel/trace/ftrace.c:
> > >
> > > #ifndef CONFIG_FTRACE_MCOUNT_RECORD
> > > # error Dynamic ftrace depends on MCOUNT_RECORD
> > > #endif
> > >
> > > And the build errors after that seem to confirm this. It looks like we might
> > > need another flag to skip recordmcount.
> >
> > Hurm, Steve, how you want to do that?
>
> Steven, did you have any thoughts about this? Moving recordmcount to
> an objtool pass that knows about call sites feels like a much cleaner
> solution than annotating kernel code to avoid unwanted relocations.
>

Bah, I started to reply to this then went to look for details, got
distracted, forgot about it, my laptop crashed (due to a zoom call),
and I lost the email I was writing (haven't looked in the drafts
folder, but my idea about this has changed since anyway).

So the problem is that we process mcount references in other areas and
that confuses the ftrace modification portion?

Someone just submitted a patch for arm64 for this:

https://lore.kernel.org/r/[email protected]

Is that what you want?

-- Steve

2020-07-17 17:48:46

by Sami Tolvanen

[permalink] [raw]
Subject: Re: [RFC][PATCH] objtool,x86_64: Replace recordmcount with objtool

On Fri, Jul 17, 2020 at 10:36 AM Steven Rostedt <[email protected]> wrote:
>
> On Fri, 17 Jul 2020 10:28:13 -0700
> Sami Tolvanen <[email protected]> wrote:
>
> > On Fri, Jun 26, 2020 at 4:29 AM Peter Zijlstra <[email protected]> wrote:
> > >
> > > On Thu, Jun 25, 2020 at 03:40:42PM -0700, Sami Tolvanen wrote:
> > >
> > > > > Not boot tested, but it generates the required sections and they look
> > > > > more or less as expected, ymmv.
> > >
> > > > > diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
> > > > > index a291823f3f26..189575c12434 100644
> > > > > --- a/arch/x86/Kconfig
> > > > > +++ b/arch/x86/Kconfig
> > > > > @@ -174,7 +174,6 @@ config X86
> > > > > select HAVE_EXIT_THREAD
> > > > > select HAVE_FAST_GUP
> > > > > select HAVE_FENTRY if X86_64 || DYNAMIC_FTRACE
> > > > > - select HAVE_FTRACE_MCOUNT_RECORD
> > > > > select HAVE_FUNCTION_GRAPH_TRACER
> > > > > select HAVE_FUNCTION_TRACER
> > > > > select HAVE_GCC_PLUGINS
> > > >
> > > > This breaks DYNAMIC_FTRACE according to kernel/trace/ftrace.c:
> > > >
> > > > #ifndef CONFIG_FTRACE_MCOUNT_RECORD
> > > > # error Dynamic ftrace depends on MCOUNT_RECORD
> > > > #endif
> > > >
> > > > And the build errors after that seem to confirm this. It looks like we might
> > > > need another flag to skip recordmcount.
> > >
> > > Hurm, Steve, how you want to do that?
> >
> > Steven, did you have any thoughts about this? Moving recordmcount to
> > an objtool pass that knows about call sites feels like a much cleaner
> > solution than annotating kernel code to avoid unwanted relocations.
> >
>
> Bah, I started to reply to this then went to look for details, got
> distracted, forgot about it, my laptop crashed (due to a zoom call),
> and I lost the email I was writing (haven't looked in the drafts
> folder, but my idea about this has changed since anyway).
>
> So the problem is that we process mcount references in other areas and
> that confuses the ftrace modification portion?

Correct.

> Someone just submitted a patch for arm64 for this:
>
> https://lore.kernel.org/r/[email protected]
>
> Is that what you want?

That looks like the same issue, but we need to fix this on x86 instead.

Sami

2020-07-17 18:06:19

by Steven Rostedt

[permalink] [raw]
Subject: Re: [RFC][PATCH] objtool,x86_64: Replace recordmcount with objtool

On Fri, 17 Jul 2020 10:47:51 -0700
Sami Tolvanen <[email protected]> wrote:

> > Someone just submitted a patch for arm64 for this:
> >
> > https://lore.kernel.org/r/[email protected]
> >
> > Is that what you want?
>
> That looks like the same issue, but we need to fix this on x86 instead.

Does x86 have a way to differentiate between the two that record mcount
can check?

-- Steve

2020-07-17 20:26:58

by Bjorn Helgaas

[permalink] [raw]
Subject: Re: [PATCH 11/22] pci: lto: fix PREL32 relocations

OK by me, but please update the subject to match convention:

PCI: Fix PREL32 relocations for LTO

and include a hint in the commit log about what LTO is. At least
expand the initialism once. Googling for "LTO" isn't very useful.

With Clang's Link Time Optimization (LTO), the compiler ... ?

On Wed, Jun 24, 2020 at 01:31:49PM -0700, Sami Tolvanen wrote:
> With LTO, the compiler can rename static functions to avoid global
> naming collisions. As PCI fixup functions are typically static,
> renaming can break references to them in inline assembly. This
> change adds a global stub to DECLARE_PCI_FIXUP_SECTION to fix the
> issue when PREL32 relocations are used.
>
> Signed-off-by: Sami Tolvanen <[email protected]>

Acked-by: Bjorn Helgaas <[email protected]>

> ---
> include/linux/pci.h | 15 ++++++++++-----
> 1 file changed, 10 insertions(+), 5 deletions(-)
>
> diff --git a/include/linux/pci.h b/include/linux/pci.h
> index c79d83304e52..1e65e16f165a 100644
> --- a/include/linux/pci.h
> +++ b/include/linux/pci.h
> @@ -1909,19 +1909,24 @@ enum pci_fixup_pass {
> };
>
> #ifdef CONFIG_HAVE_ARCH_PREL32_RELOCATIONS
> -#define __DECLARE_PCI_FIXUP_SECTION(sec, name, vendor, device, class, \
> - class_shift, hook) \
> - __ADDRESSABLE(hook) \
> +#define ___DECLARE_PCI_FIXUP_SECTION(sec, name, vendor, device, class, \
> + class_shift, hook, stub) \
> + void stub(struct pci_dev *dev) { hook(dev); } \
> asm(".section " #sec ", \"a\" \n" \
> ".balign 16 \n" \
> ".short " #vendor ", " #device " \n" \
> ".long " #class ", " #class_shift " \n" \
> - ".long " #hook " - . \n" \
> + ".long " #stub " - . \n" \
> ".previous \n");
> +
> +#define __DECLARE_PCI_FIXUP_SECTION(sec, name, vendor, device, class, \
> + class_shift, hook, stub) \
> + ___DECLARE_PCI_FIXUP_SECTION(sec, name, vendor, device, class, \
> + class_shift, hook, stub)
> #define DECLARE_PCI_FIXUP_SECTION(sec, name, vendor, device, class, \
> class_shift, hook) \
> __DECLARE_PCI_FIXUP_SECTION(sec, name, vendor, device, class, \
> - class_shift, hook)
> + class_shift, hook, __UNIQUE_ID(hook))
> #else
> /* Anonymous variables would be nice... */
> #define DECLARE_PCI_FIXUP_SECTION(section, name, vendor, device, class, \
> --
> 2.27.0.212.ge8ba1cc988-goog
>
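
For readers following along, a hedged sketch of what the macro chain
now expands to (the quirk name, IDs, and __UNIQUE_ID suffix are
illustrative):

	/* a typical fixup in a driver, static and therefore renamable
	 * by LTO: */
	static void quirk_example(struct pci_dev *dev) { /* ... */ }
	DECLARE_PCI_FIXUP_EARLY(0x8086, 0x1234, quirk_example);

	/* with this patch the macro also emits a global stub, which LTO
	 * must keep by name, and points the PREL32 entry at the stub
	 * instead of the static hook: */
	void __UNIQUE_ID_quirk_example42(struct pci_dev *dev)
	{
		quirk_example(dev);
	}
	/* section entry: .long __UNIQUE_ID_quirk_example42 - . */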

2020-07-20 16:55:23

by Sami Tolvanen

[permalink] [raw]
Subject: Re: [RFC][PATCH] objtool,x86_64: Replace recordmcount with objtool

On Fri, Jul 17, 2020 at 11:05 AM Steven Rostedt <[email protected]> wrote:
>
> On Fri, 17 Jul 2020 10:47:51 -0700
> Sami Tolvanen <[email protected]> wrote:
>
> > > Someone just submitted a patch for arm64 for this:
> > >
> > > https://lore.kernel.org/r/[email protected]
> > >
> > > Is that what you want?
> >
> > That looks like the same issue, but we need to fix this on x86 instead.
>
> Does x86 have a way to differentiate between the two that record mcount
> can check?

I'm not sure if looking at the relocation alone is sufficient on x86;
we might also have to decode the instruction, which is what objtool
does. Did you have any thoughts on Peter's patch, or my initial
suggestion, which adds a __nomcount attribute to affected functions?

Sami

2020-07-22 17:56:29

by Steven Rostedt

[permalink] [raw]
Subject: Re: [RFC][PATCH] objtool,x86_64: Replace recordmcount with objtool

On Fri, 26 Jun 2020 13:29:31 +0200
Peter Zijlstra <[email protected]> wrote:

> On Thu, Jun 25, 2020 at 03:40:42PM -0700, Sami Tolvanen wrote:
>
> > > Not boot tested, but it generates the required sections and they look
> > > more or less as expected, ymmv.
>
> > > diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
> > > index a291823f3f26..189575c12434 100644
> > > --- a/arch/x86/Kconfig
> > > +++ b/arch/x86/Kconfig
> > > @@ -174,7 +174,6 @@ config X86
> > > select HAVE_EXIT_THREAD
> > > select HAVE_FAST_GUP
> > > select HAVE_FENTRY if X86_64 || DYNAMIC_FTRACE
> > > - select HAVE_FTRACE_MCOUNT_RECORD
> > > select HAVE_FUNCTION_GRAPH_TRACER
> > > select HAVE_FUNCTION_TRACER
> > > select HAVE_GCC_PLUGINS
> >
> > This breaks DYNAMIC_FTRACE according to kernel/trace/ftrace.c:
> >
> > #ifndef CONFIG_FTRACE_MCOUNT_RECORD
> > # error Dynamic ftrace depends on MCOUNT_RECORD
> > #endif
> >
> > And the build errors after that seem to confirm this. It looks like we might
> > need another flag to skip recordmcount.
>
> Hurm, Steve, how you want to do that?

That was added when we removed that dangerous daemon that did the
updates, to make sure it didn't come back.

We can probably just get rid of it.


>
> > Anyway, since objtool is run before recordmcount, I just left this unchanged
> > for testing and ignored the recordmcount warnings about __mcount_loc already
> > existing. Something is a bit off still though, I see this at boot:
> >
> > ------------[ ftrace bug ]------------
> > ftrace failed to modify
> > [<ffffffff81000660>] __tracepoint_iter_initcall_level+0x0/0x40
> > actual: 0f:1f:44:00:00
> > Initializing ftrace call sites
> > ftrace record flags: 0
> > (0)
> > expected tramp: ffffffff81056500
> > ------------[ cut here ]------------
> >
> > Otherwise, this looks pretty good.
>
> Ha! it is trying to convert the "CALL __fentry__" into a NOP and not
> finding the CALL -- because objtool already made it a NOP...
>
> Weird, I thought recordmcount would also write NOPs, it certainly has
> code for that. I suppose we can use CC_USING_NOP_MCOUNT to avoid those,
> but I'd rather Steve explain this before I wreck things further.

The reason for not having recordmcount insert all the nops is that
x86 has more than one optimal nop, which is determined by the machine it
runs on and not at compile time. So we figured we'd just update it then.

We can change it to be a nop on boot, and just modify it if it's not
the optimal nop already.
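
To make that concrete at the byte level, a sketch of the x86-64 patch
site (the 5-byte nop matches the "actual: 0f:1f:44:00:00" bytes in the
ftrace bug output quoted earlier in this thread):

	/*
	 * function entry with -mfentry:
	 *
	 *   as compiled:    e8 xx xx xx xx    call __fentry__ (rel32)
	 *   after nop-ing:  0f 1f 44 00 00    nopl 0x0(%rax,%rax,1)
	 *
	 * whoever writes the nop (the compiler via -mnop-mcount, objtool,
	 * or ftrace at boot), the __mcount_loc entry must still point at
	 * the first byte so ftrace can later restore the call.
	 */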

That said, Andi Kleen added an option to gcc called -mnop-mcount which
will have gcc both create the mcount section and convert the calls
into nops. When doing so, it defines CC_USING_NOP_MCOUNT which will
tell ftrace to expect the calls to already be converted.

-- Steve

2020-07-22 17:59:27

by Steven Rostedt

[permalink] [raw]
Subject: Re: [RFC][PATCH] objtool,x86_64: Replace recordmcount with objtool

On Mon, 20 Jul 2020 09:52:37 -0700
Sami Tolvanen <[email protected]> wrote:

> > Does x86 have a way to differentiate between the two that record mcount
> > can check?
>
> I'm not sure if looking at the relocation alone is sufficient on x86,
> we might also have to decode the instruction, which is what objtool
> does. Did you have any thoughts on Peter's patch, or my initial
> suggestion, which adds a __nomcount attribute to affected functions?

There's a lot of code in this thread. Can you give me the message-id of
Peter's patch in question?

Thanks,

-- Steve

2020-07-22 18:10:10

by Sami Tolvanen

[permalink] [raw]
Subject: Re: [RFC][PATCH] objtool,x86_64: Replace recordmcount with objtool

On Wed, Jul 22, 2020 at 10:58 AM Steven Rostedt <[email protected]> wrote:
>
> On Mon, 20 Jul 2020 09:52:37 -0700
> Sami Tolvanen <[email protected]> wrote:
>
> > > Does x86 have a way to differentiate between the two that record mcount
> > > can check?
> >
> > I'm not sure if looking at the relocation alone is sufficient on x86,
> > we might also have to decode the instruction, which is what objtool
> > does. Did you have any thoughts on Peter's patch, or my initial
> > suggestion, which adds a __nomcount attribute to affected functions?
>
> There's a lot of code in this thread. Can you give me the message-id of
> Peter's patch in question.

Sure, I was referring to the objtool patch in this message:

https://lore.kernel.org/lkml/[email protected]/

Sami

2020-07-22 18:18:34

by Sami Tolvanen

[permalink] [raw]
Subject: Re: [PATCH 11/22] pci: lto: fix PREL32 relocations

Hi Bjorn,

On Fri, Jul 17, 2020 at 1:26 PM Bjorn Helgaas <[email protected]> wrote:
>
> OK by me, but please update the subject to match convention:
>
> PCI: Fix PREL32 relocations for LTO
>
> and include a hint in the commit log about what LTO is. At least
> expand the initialism once. Googling for "LTO" isn't very useful.
>
> With Clang's Link Time Optimization (LTO), the compiler ... ?

Sure, I'll change this in the next version. Thanks for taking a look!

Sami

2020-07-22 18:45:01

by Peter Zijlstra

[permalink] [raw]
Subject: Re: [RFC][PATCH] objtool,x86_64: Replace recordmcount with objtool

On Wed, Jul 22, 2020 at 01:55:42PM -0400, Steven Rostedt wrote:

> > Ha! it is trying to convert the "CALL __fentry__" into a NOP and not
> > finding the CALL -- because objtool already made it a NOP...
> >
> > Weird, I thought recordmcount would also write NOPs, it certainly has
> > code for that. I suppose we can use CC_USING_NOP_MCOUNT to avoid those,
> > but I'd rather Steve explain this before I wreck things further.
>
> The reason for not having recordmcount insert all the nops, is because
> x86 has more than one optimal nop which is determined by the machine it
> runs on, and not at compile time. So we figured just updated it then.
>
> We can change it to be a nop on boot, and just modify it if it's not
> the optimal nop already.

Right, I thought that's what we'd be doing already, anyway:

> That said, Andi Kleen added an option to gcc called -mnop-mcount which
> will have gcc do both create the mcount section and convert the calls
> into nops. When doing so, it defines CC_USING_NOP_MCOUNT which will
> tell ftrace to expect the calls to already be converted.

That seems like the much easier solution; then we can forget about
recordmcount / objtool entirely for this.

2020-07-22 19:10:23

by Steven Rostedt

[permalink] [raw]
Subject: Re: [RFC][PATCH] objtool,x86_64: Replace recordmcount with objtool

On Wed, 22 Jul 2020 20:41:37 +0200
Peter Zijlstra <[email protected]> wrote:

> > That said, Andi Kleen added an option to gcc called -mnop-mcount which
> > will have gcc do both create the mcount section and convert the calls
> > into nops. When doing so, it defines CC_USING_NOP_MCOUNT which will
> > tell ftrace to expect the calls to already be converted.
>
> That seems like the much easier solution, then we can forget about
> recordmcount / objtool entirely for this.

Of course that was only for some gcc compilers, and I'm not sure if
clang can do this.

Or do you just see all compilers doing this in the future, and not
worrying about record-mcount or bothering with objtool at all?

-- Steve

2020-07-22 20:04:03

by Sami Tolvanen

[permalink] [raw]
Subject: Re: [RFC][PATCH] objtool,x86_64: Replace recordmcount with objtool

On Wed, Jul 22, 2020 at 12:09 PM Steven Rostedt <[email protected]> wrote:
>
> On Wed, 22 Jul 2020 20:41:37 +0200
> Peter Zijlstra <[email protected]> wrote:
>
> > > That said, Andi Kleen added an option to gcc called -mnop-mcount which
> > > will have gcc do both create the mcount section and convert the calls
> > > into nops. When doing so, it defines CC_USING_NOP_MCOUNT which will
> > > tell ftrace to expect the calls to already be converted.
> >
> > That seems like the much easier solution, then we can forget about
> > recordmcount / objtool entirely for this.
>
> Of course that was only for some gcc compilers, and I'm not sure if
> clang can do this.
>
> Or do you just see all compilers doing this in the future, and not
> worrying about record-mcount at all, and bothering with objtool?

Clang appears to only support -mrecord-mcount and -mnop-mcount for
s390, so we still need recordmcount / objtool for x86.

Sami

2020-07-22 23:58:57

by Peter Zijlstra

[permalink] [raw]
Subject: Re: [RFC][PATCH] objtool,x86_64: Replace recordmcount with objtool

On Wed, Jul 22, 2020 at 03:09:43PM -0400, Steven Rostedt wrote:
> On Wed, 22 Jul 2020 20:41:37 +0200
> Peter Zijlstra <[email protected]> wrote:
>
> > > That said, Andi Kleen added an option to gcc called -mnop-mcount which
> > > will have gcc do both create the mcount section and convert the calls
> > > into nops. When doing so, it defines CC_USING_NOP_MCOUNT which will
> > > tell ftrace to expect the calls to already be converted.
> >
> > That seems like the much easier solution, then we can forget about
> > recordmcount / objtool entirely for this.
>
> Of course that was only for some gcc compilers, and I'm not sure if
> clang can do this.
>
> Or do you just see all compilers doing this in the future, and not
> worrying about record-mcount at all, and bothering with objtool?

I got the GCC version wrong :/ Both -mnop-mcount and -mrecord-mcount
landed in GCC-5, whereas our minimum GCC is now 4.9.

Anyway, what do you prefer? I suppose I can make objtool whatever we
need; that patch is trivial. Simply recording the sites and not
rewriting them should be simple enough.

2020-07-23 00:06:41

by Steven Rostedt

[permalink] [raw]
Subject: Re: [RFC][PATCH] objtool,x86_64: Replace recordmcount with objtool

On Thu, 23 Jul 2020 01:56:20 +0200
Peter Zijlstra <[email protected]> wrote:

> Anyway, what do you prefer, I suppose I can make objtool whatever we
> need, that patch is trivial. Simply recording the sites and not
> rewriting them should be simple enough.

Either way. If objtool turns it into nops, just make it so we can
have -DCC_USING_NOP_MCOUNT set, and the kernel will be unaware.

Or if you just add the locations, then that would work too.

-- Steve

2020-08-06 22:12:56

by Sami Tolvanen

[permalink] [raw]
Subject: Re: [RFC][PATCH] objtool,x86_64: Replace recordmcount with objtool

On Wed, Jul 22, 2020 at 08:06:08PM -0400, Steven Rostedt wrote:
> On Thu, 23 Jul 2020 01:56:20 +0200
> Peter Zijlstra <[email protected]> wrote:
>
> > Anyway, what do you prefer, I suppose I can make objtool whatever we
> > need, that patch is trivial. Simply recording the sites and not
> > rewriting them should be simple enough.
>
> Either way. If objtool turns it into nops, just make it where we can
> enable -DCC_USING_NOP_MCOUNT set, and the kernel will be unaware.
>
> Or if you just add the locations, then that would work too.

I took Peter's earlier patch, rebased it on top of the current mainline
tree for easier testing, and tweaked the makefiles to only use objtool
--mcount when CONFIG_STACK_VALIDATION is enabled and the compiler
supports -mfentry. This works for me with both gcc and clang. Thoughts?

Sami


---
Makefile | 38 ++++++++++++----
arch/x86/Kconfig | 1 +
kernel/trace/Kconfig | 5 +++
scripts/Makefile.build | 9 ++--
tools/objtool/builtin-check.c | 3 +-
tools/objtool/builtin.h | 2 +-
tools/objtool/check.c | 83 +++++++++++++++++++++++++++++++++++
tools/objtool/check.h | 1 +
tools/objtool/objtool.h | 1 +
9 files changed, 129 insertions(+), 14 deletions(-)

diff --git a/Makefile b/Makefile
index 5cfc3481207f..2d23b6b6c4c9 100644
--- a/Makefile
+++ b/Makefile
@@ -864,17 +864,34 @@ ifdef CONFIG_HAVE_FENTRY
ifeq ($(call cc-option-yn, -mfentry),y)
CC_FLAGS_FTRACE += -mfentry
CC_FLAGS_USING += -DCC_USING_FENTRY
+ export CC_USING_FENTRY := 1
endif
endif
export CC_FLAGS_FTRACE
-KBUILD_CFLAGS += $(CC_FLAGS_FTRACE) $(CC_FLAGS_USING)
-KBUILD_AFLAGS += $(CC_FLAGS_USING)
ifdef CONFIG_DYNAMIC_FTRACE
- ifdef CONFIG_HAVE_C_RECORDMCOUNT
- BUILD_C_RECORDMCOUNT := y
- export BUILD_C_RECORDMCOUNT
- endif
+ ifndef CC_USING_RECORD_MCOUNT
+ ifndef CC_USING_PATCHABLE_FUNCTION_ENTRY
+ # use objtool or recordmcount to generate mcount tables
+ ifdef CONFIG_HAVE_OBJTOOL_MCOUNT
+ ifdef CC_USING_FENTRY
+ USE_OBJTOOL_MCOUNT := y
+ CC_FLAGS_USING += -DCC_USING_NOP_MCOUNT
+ export USE_OBJTOOL_MCOUNT
+ endif
+ endif
+ ifndef USE_OBJTOOL_MCOUNT
+ USE_RECORDMCOUNT := y
+ export USE_RECORDMCOUNT
+ ifdef CONFIG_HAVE_C_RECORDMCOUNT
+ BUILD_C_RECORDMCOUNT := y
+ export BUILD_C_RECORDMCOUNT
+ endif
+ endif
+ endif
+ endif
endif
+KBUILD_CFLAGS += $(CC_FLAGS_FTRACE) $(CC_FLAGS_USING)
+KBUILD_AFLAGS += $(CC_FLAGS_USING)
endif

# We trigger additional mismatches with less inlining
@@ -1211,11 +1228,16 @@ uapi-asm-generic:
PHONY += prepare-objtool prepare-resolve_btfids
prepare-objtool: $(objtool_target)
ifeq ($(SKIP_STACK_VALIDATION),1)
+objtool-lib-prompt := "please install libelf-dev, libelf-devel or elfutils-libelf-devel"
+ifdef USE_OBJTOOL_MCOUNT
+ @echo "error: Cannot generate __mcount_loc for CONFIG_DYNAMIC_FTRACE=y, $(objtool-lib-prompt)" >&2
+ @false
+endif
ifdef CONFIG_UNWINDER_ORC
- @echo "error: Cannot generate ORC metadata for CONFIG_UNWINDER_ORC=y, please install libelf-dev, libelf-devel or elfutils-libelf-devel" >&2
+ @echo "error: Cannot generate ORC metadata for CONFIG_UNWINDER_ORC=y, $(objtool-lib-prompt)" >&2
@false
else
- @echo "warning: Cannot use CONFIG_STACK_VALIDATION=y, please install libelf-dev, libelf-devel or elfutils-libelf-devel" >&2
+ @echo "warning: Cannot use CONFIG_STACK_VALIDATION=y, $(objtool-lib-prompt)" >&2
endif
endif

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 9a2849527dd7..149c94a44cf0 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -163,6 +163,7 @@ config X86
select HAVE_CMPXCHG_LOCAL
select HAVE_CONTEXT_TRACKING if X86_64
select HAVE_C_RECORDMCOUNT
+ select HAVE_OBJTOOL_MCOUNT if STACK_VALIDATION
select HAVE_DEBUG_KMEMLEAK
select HAVE_DMA_CONTIGUOUS
select HAVE_DYNAMIC_FTRACE
diff --git a/kernel/trace/Kconfig b/kernel/trace/Kconfig
index a4020c0b4508..b510af5b216c 100644
--- a/kernel/trace/Kconfig
+++ b/kernel/trace/Kconfig
@@ -56,6 +56,11 @@ config HAVE_C_RECORDMCOUNT
help
C version of recordmcount available?

+config HAVE_OBJTOOL_MCOUNT
+ bool
+ help
+ Arch supports objtool --mcount
+
config TRACER_MAX_TRACE
bool

diff --git a/scripts/Makefile.build b/scripts/Makefile.build
index 2e8810b7e5ed..f66f8c0ef294 100644
--- a/scripts/Makefile.build
+++ b/scripts/Makefile.build
@@ -175,8 +175,7 @@ cmd_modversions_c = \
fi
endif

-ifdef CONFIG_FTRACE_MCOUNT_RECORD
-ifndef CC_USING_RECORD_MCOUNT
+ifdef USE_RECORDMCOUNT
# compiler will not generate __mcount_loc use recordmcount or recordmcount.pl
ifdef BUILD_C_RECORDMCOUNT
ifeq ("$(origin RECORDMCOUNT_WARN)", "command line")
@@ -203,8 +202,7 @@ recordmcount_source := $(srctree)/scripts/recordmcount.pl
endif # BUILD_C_RECORDMCOUNT
cmd_record_mcount = $(if $(findstring $(strip $(CC_FLAGS_FTRACE)),$(_c_flags)), \
$(sub_cmd_record_mcount))
-endif # CC_USING_RECORD_MCOUNT
-endif # CONFIG_FTRACE_MCOUNT_RECORD
+endif # USE_RECORDMCOUNT

ifdef CONFIG_STACK_VALIDATION
ifneq ($(SKIP_STACK_VALIDATION),1)
@@ -227,6 +225,9 @@ endif
ifdef CONFIG_X86_SMAP
objtool_args += --uaccess
endif
+ifdef USE_OBJTOOL_MCOUNT
+ objtool_args += --mcount
+endif

# 'OBJECT_FILES_NON_STANDARD := y': skip objtool checking for a directory
# 'OBJECT_FILES_NON_STANDARD_foo.o := 'y': skip objtool checking for a file
diff --git a/tools/objtool/builtin-check.c b/tools/objtool/builtin-check.c
index 7a44174967b5..71595cf4946d 100644
--- a/tools/objtool/builtin-check.c
+++ b/tools/objtool/builtin-check.c
@@ -18,7 +18,7 @@
#include "builtin.h"
#include "objtool.h"

-bool no_fp, no_unreachable, retpoline, module, backtrace, uaccess, stats, validate_dup, vmlinux;
+bool no_fp, no_unreachable, retpoline, module, backtrace, uaccess, stats, validate_dup, vmlinux, mcount;

static const char * const check_usage[] = {
"objtool check [<options>] file.o",
@@ -35,6 +35,7 @@ const struct option check_options[] = {
OPT_BOOLEAN('s', "stats", &stats, "print statistics"),
OPT_BOOLEAN('d', "duplicate", &validate_dup, "duplicate validation for vmlinux.o"),
OPT_BOOLEAN('l', "vmlinux", &vmlinux, "vmlinux.o validation"),
+ OPT_BOOLEAN('M', "mcount", &mcount, "generate __mcount_loc"),
OPT_END(),
};

diff --git a/tools/objtool/builtin.h b/tools/objtool/builtin.h
index 85c979caa367..94565a72b701 100644
--- a/tools/objtool/builtin.h
+++ b/tools/objtool/builtin.h
@@ -8,7 +8,7 @@
#include <subcmd/parse-options.h>

extern const struct option check_options[];
-extern bool no_fp, no_unreachable, retpoline, module, backtrace, uaccess, stats, validate_dup, vmlinux;
+extern bool no_fp, no_unreachable, retpoline, module, backtrace, uaccess, stats, validate_dup, vmlinux, mcount;

extern int cmd_check(int argc, const char **argv);
extern int cmd_orc(int argc, const char **argv);
diff --git a/tools/objtool/check.c b/tools/objtool/check.c
index e034a8f24f46..6e0b478dc065 100644
--- a/tools/objtool/check.c
+++ b/tools/objtool/check.c
@@ -433,6 +433,65 @@ static int add_dead_ends(struct objtool_file *file)
return 0;
}

+static int create_mcount_loc_sections(struct objtool_file *file)
+{
+ struct section *sec, *reloc_sec;
+ struct reloc *reloc;
+ unsigned long *loc;
+ struct instruction *insn;
+ int idx;
+
+ sec = find_section_by_name(file->elf, "__mcount_loc");
+ if (sec) {
+ INIT_LIST_HEAD(&file->mcount_loc_list);
+ WARN("file already has __mcount_loc section, skipping");
+ return 0;
+ }
+
+ if (list_empty(&file->mcount_loc_list))
+ return 0;
+
+ idx = 0;
+ list_for_each_entry(insn, &file->mcount_loc_list, mcount_loc_node)
+ idx++;
+
+ sec = elf_create_section(file->elf, "__mcount_loc", sizeof(unsigned long), idx);
+ if (!sec)
+ return -1;
+
+ reloc_sec = elf_create_reloc_section(file->elf, sec, SHT_RELA);
+ if (!reloc_sec)
+ return -1;
+
+ idx = 0;
+ list_for_each_entry(insn, &file->mcount_loc_list, mcount_loc_node) {
+
+ loc = (unsigned long *)sec->data->d_buf + idx;
+ memset(loc, 0, sizeof(unsigned long));
+
+ reloc = malloc(sizeof(*reloc));
+ if (!reloc) {
+ perror("malloc");
+ return -1;
+ }
+ memset(reloc, 0, sizeof(*reloc));
+
+ reloc->sym = insn->sec->sym;
+ reloc->addend = insn->offset;
+ reloc->type = R_X86_64_64;
+ reloc->offset = idx * sizeof(unsigned long);
+ reloc->sec = reloc_sec;
+ elf_add_reloc(file->elf, reloc);
+
+ idx++;
+ }
+
+ if (elf_rebuild_reloc_section(file->elf, reloc_sec))
+ return -1;
+
+ return 0;
+}
+
/*
* Warnings shouldn't be reported for ignored functions.
*/
@@ -784,6 +843,22 @@ static int add_call_destinations(struct objtool_file *file)
insn->type = INSN_NOP;
}

+ if (mcount && !strcmp(insn->call_dest->name, "__fentry__")) {
+ if (reloc) {
+ reloc->type = R_NONE;
+ elf_write_reloc(file->elf, reloc);
+ }
+
+ elf_write_insn(file->elf, insn->sec,
+ insn->offset, insn->len,
+ arch_nop_insn(insn->len));
+
+ insn->type = INSN_NOP;
+
+ list_add_tail(&insn->mcount_loc_node,
+ &file->mcount_loc_list);
+ }
+
/*
* Whatever stack impact regular CALLs have, should be undone
* by the RETURN of the called function.
@@ -2791,6 +2866,7 @@ int check(const char *_objname, bool orc)

INIT_LIST_HEAD(&file.insn_list);
hash_init(file.insn_hash);
+ INIT_LIST_HEAD(&file.mcount_loc_list);
file.c_file = !vmlinux && find_section_by_name(file.elf, ".comment");
file.ignore_unreachables = no_unreachable;
file.hints = false;
@@ -2838,6 +2914,13 @@ int check(const char *_objname, bool orc)
warnings += ret;
}

+ if (mcount) {
+ ret = create_mcount_loc_sections(&file);
+ if (ret < 0)
+ goto out;
+ warnings += ret;
+ }
+
if (orc) {
ret = create_orc(&file);
if (ret < 0)
diff --git a/tools/objtool/check.h b/tools/objtool/check.h
index 061aa96e15d3..b62afd3d970b 100644
--- a/tools/objtool/check.h
+++ b/tools/objtool/check.h
@@ -22,6 +22,7 @@ struct insn_state {
struct instruction {
struct list_head list;
struct hlist_node hash;
+ struct list_head mcount_loc_node;
struct section *sec;
unsigned long offset;
unsigned int len;
diff --git a/tools/objtool/objtool.h b/tools/objtool/objtool.h
index 528028a66816..427806079540 100644
--- a/tools/objtool/objtool.h
+++ b/tools/objtool/objtool.h
@@ -16,6 +16,7 @@ struct objtool_file {
struct elf *elf;
struct list_head insn_list;
DECLARE_HASHTABLE(insn_hash, 20);
+ struct list_head mcount_loc_list;
bool ignore_unreachables, c_file, hints, rodata;
};
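
With the patch applied, the new pass can also be exercised by hand on a
single object file (file name illustrative; in the build the flag is
passed via objtool_args in scripts/Makefile.build above):

$ ./tools/objtool/objtool check --mcount kernel/fork.o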