2018-05-18 09:18:08

by Jiri Slaby

Subject: [PATCH v6 00/28] New macros for assembler symbols

This series introduces new macros for annotating assembler symbols, as
discussed in [1]. The macros themselves are introduced in the first
patch of the series. The rest of the patches start using these new
macros in x86, converting *all* uses of the old macros to the new ones
by the last patch. Whenever the last user of an old macro is converted,
that macro is immediately made forbidden for x86.

When this settles down, conversion of other architectures can be done
too.

For an introduction, documentation, usage, and examples, please see
Documentation/asm-annotations.rst from the first patch of the series.

[1] https://lkml.org/lkml/2017/3/1/742
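
As a quick illustration of the scheme (a simplified sketch with made-up
symbol names; the documentation patch is the authoritative reference),
every *_START annotation is paired with a matching *_END, and data gets
a dedicated macro instead of ENTRY:

  SYM_FUNC_START(c_like_function)
          ...
          ret
  SYM_FUNC_END(c_like_function)

  SYM_CODE_START(non_c_like_code)
          ...
  SYM_CODE_END(non_c_like_code)

  SYM_DATA(some_variable, .long 0)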


Jiri Slaby (28):
linkage: new macros for assembler symbols
x86/asm/suspend: drop ENTRY from local data
x86/asm/suspend: use SYM_DATA for data
x86/asm: annotate relocate_kernel
x86/asm/entry: annotate THUNKs
x86/asm: annotate local pseudo-functions
x86/asm/crypto: annotate local functions
x86/boot/compressed: annotate local functions
x86/asm: annotate aliases
x86/asm/entry: annotate interrupt symbols properly
x86/asm/head: annotate data appropriately
x86/boot/compressed: annotate data appropriately
um: annotate data appropriately
xen/pvh: annotate data appropriately
x86/asm/purgatory: start using annotations
x86/asm: do not annotate functions by GLOBAL
x86/asm: use SYM_INNER_LABEL instead of GLOBAL
x86/asm/realmode: use SYM_DATA_* instead of GLOBAL
x86/asm: kill the last GLOBAL user and remove the macro
x86/asm: make some functions local
x86/asm/ftrace: mark function_hook as function
x86_64/asm: add ENDs to some functions and relabel with SYM_CODE_*
x86_64/asm: change all ENTRY+END to SYM_CODE_*
x86_64/asm: change all ENTRY+ENDPROC to SYM_FUNC_*
x86_32/asm: add ENDs to some functions and relabel with SYM_CODE_*
x86_32/asm: change all ENTRY+END to SYM_CODE_*
x86_32/asm: change all ENTRY+ENDPROC to SYM_FUNC_*
x86/asm: replace WEAK uses by SYM_INNER_LABEL

Documentation/asm-annotations.rst | 217 ++++++++++++++++++
arch/x86/boot/compressed/efi_stub_32.S | 4 +-
arch/x86/boot/compressed/efi_thunk_64.S | 33 +--
arch/x86/boot/compressed/head_32.S | 15 +-
arch/x86/boot/compressed/head_64.S | 60 ++---
arch/x86/boot/compressed/mem_encrypt.S | 14 +-
arch/x86/boot/copy.S | 16 +-
arch/x86/boot/pmjump.S | 8 +-
arch/x86/crypto/aes-i586-asm_32.S | 8 +-
arch/x86/crypto/aes-x86_64-asm_64.S | 4 +-
arch/x86/crypto/aes_ctrby8_avx-x86_64.S | 12 +-
arch/x86/crypto/aesni-intel_asm.S | 114 +++++-----
arch/x86/crypto/aesni-intel_avx-x86_64.S | 24 +-
arch/x86/crypto/blowfish-x86_64-asm_64.S | 16 +-
arch/x86/crypto/camellia-aesni-avx-asm_64.S | 44 ++--
arch/x86/crypto/camellia-aesni-avx2-asm_64.S | 44 ++--
arch/x86/crypto/camellia-x86_64-asm_64.S | 16 +-
arch/x86/crypto/cast5-avx-x86_64-asm_64.S | 24 +-
arch/x86/crypto/cast6-avx-x86_64-asm_64.S | 32 +--
arch/x86/crypto/chacha20-avx2-x86_64.S | 4 +-
arch/x86/crypto/chacha20-ssse3-x86_64.S | 8 +-
arch/x86/crypto/crc32-pclmul_asm.S | 4 +-
arch/x86/crypto/crc32c-pcl-intel-asm_64.S | 4 +-
arch/x86/crypto/crct10dif-pcl-asm_64.S | 4 +-
arch/x86/crypto/des3_ede-asm_64.S | 8 +-
arch/x86/crypto/ghash-clmulni-intel_asm.S | 12 +-
arch/x86/crypto/poly1305-avx2-x86_64.S | 4 +-
arch/x86/crypto/poly1305-sse2-x86_64.S | 8 +-
arch/x86/crypto/salsa20-i586-asm_32.S | 4 +-
arch/x86/crypto/salsa20-x86_64-asm_64.S | 4 +-
arch/x86/crypto/serpent-avx-x86_64-asm_64.S | 32 +--
arch/x86/crypto/serpent-avx2-asm_64.S | 32 +--
arch/x86/crypto/serpent-sse2-i586-asm_32.S | 8 +-
arch/x86/crypto/serpent-sse2-x86_64-asm_64.S | 8 +-
arch/x86/crypto/sha1-mb/sha1_mb_mgr_flush_avx2.S | 8 +-
arch/x86/crypto/sha1-mb/sha1_mb_mgr_submit_avx2.S | 4 +-
arch/x86/crypto/sha1-mb/sha1_x8_avx2.S | 4 +-
arch/x86/crypto/sha1_avx2_x86_64_asm.S | 4 +-
arch/x86/crypto/sha1_ni_asm.S | 4 +-
arch/x86/crypto/sha1_ssse3_asm.S | 4 +-
arch/x86/crypto/sha256-avx-asm.S | 4 +-
arch/x86/crypto/sha256-avx2-asm.S | 4 +-
.../crypto/sha256-mb/sha256_mb_mgr_flush_avx2.S | 8 +-
.../crypto/sha256-mb/sha256_mb_mgr_submit_avx2.S | 4 +-
arch/x86/crypto/sha256-mb/sha256_x8_avx2.S | 4 +-
arch/x86/crypto/sha256-ssse3-asm.S | 4 +-
arch/x86/crypto/sha256_ni_asm.S | 4 +-
arch/x86/crypto/sha512-avx-asm.S | 4 +-
arch/x86/crypto/sha512-avx2-asm.S | 4 +-
.../crypto/sha512-mb/sha512_mb_mgr_flush_avx2.S | 8 +-
.../crypto/sha512-mb/sha512_mb_mgr_submit_avx2.S | 4 +-
arch/x86/crypto/sha512-mb/sha512_x4_avx2.S | 4 +-
arch/x86/crypto/sha512-ssse3-asm.S | 4 +-
arch/x86/crypto/twofish-avx-x86_64-asm_64.S | 32 +--
arch/x86/crypto/twofish-i586-asm_32.S | 8 +-
arch/x86/crypto/twofish-x86_64-asm_64-3way.S | 8 +-
arch/x86/crypto/twofish-x86_64-asm_64.S | 8 +-
arch/x86/entry/entry_32.S | 155 ++++++-------
arch/x86/entry/entry_64.S | 107 ++++-----
arch/x86/entry/entry_64_compat.S | 16 +-
arch/x86/entry/thunk_32.S | 4 +-
arch/x86/entry/thunk_64.S | 8 +-
arch/x86/entry/vdso/vdso32/system_call.S | 2 +-
arch/x86/include/asm/linkage.h | 4 -
arch/x86/kernel/acpi/wakeup_32.S | 11 +-
arch/x86/kernel/acpi/wakeup_64.S | 25 ++-
arch/x86/kernel/ftrace_32.S | 23 +-
arch/x86/kernel/ftrace_64.S | 42 ++--
arch/x86/kernel/head_32.S | 60 ++---
arch/x86/kernel/head_64.S | 106 ++++-----
arch/x86/kernel/relocate_kernel_32.S | 13 +-
arch/x86/kernel/relocate_kernel_64.S | 13 +-
arch/x86/kernel/verify_cpu.S | 4 +-
arch/x86/lib/atomic64_386_32.S | 4 +-
arch/x86/lib/atomic64_cx8_32.S | 32 +--
arch/x86/lib/checksum_32.S | 16 +-
arch/x86/lib/clear_page_64.S | 12 +-
arch/x86/lib/cmpxchg16b_emu.S | 4 +-
arch/x86/lib/cmpxchg8b_emu.S | 4 +-
arch/x86/lib/copy_page_64.S | 8 +-
arch/x86/lib/copy_user_64.S | 16 +-
arch/x86/lib/csum-copy_64.S | 4 +-
arch/x86/lib/getuser.S | 24 +-
arch/x86/lib/hweight.S | 8 +-
arch/x86/lib/iomap_copy_64.S | 4 +-
arch/x86/lib/memcpy_64.S | 20 +-
arch/x86/lib/memmove_64.S | 8 +-
arch/x86/lib/memset_64.S | 16 +-
arch/x86/lib/msr-reg.S | 8 +-
arch/x86/lib/putuser.S | 20 +-
arch/x86/lib/retpoline.S | 4 +-
arch/x86/lib/rwsem.S | 24 +-
arch/x86/math-emu/div_Xsig.S | 4 +-
arch/x86/math-emu/div_small.S | 4 +-
arch/x86/math-emu/mul_Xsig.S | 12 +-
arch/x86/math-emu/polynom_Xsig.S | 4 +-
arch/x86/math-emu/reg_norm.S | 8 +-
arch/x86/math-emu/reg_round.S | 4 +-
arch/x86/math-emu/reg_u_add.S | 4 +-
arch/x86/math-emu/reg_u_div.S | 4 +-
arch/x86/math-emu/reg_u_mul.S | 4 +-
arch/x86/math-emu/reg_u_sub.S | 4 +-
arch/x86/math-emu/round_Xsig.S | 8 +-
arch/x86/math-emu/shr_Xsig.S | 4 +-
arch/x86/math-emu/wm_shrx.S | 8 +-
arch/x86/math-emu/wm_sqrt.S | 4 +-
arch/x86/mm/mem_encrypt_boot.S | 8 +-
arch/x86/platform/efi/efi_stub_32.S | 4 +-
arch/x86/platform/efi/efi_stub_64.S | 4 +-
arch/x86/platform/efi/efi_thunk_64.S | 16 +-
arch/x86/platform/olpc/xo1-wakeup.S | 3 +-
arch/x86/power/hibernate_asm_32.S | 6 +-
arch/x86/power/hibernate_asm_64.S | 14 +-
arch/x86/purgatory/entry64.S | 21 +-
arch/x86/purgatory/setup-x86_64.S | 14 +-
arch/x86/purgatory/stack.S | 7 +-
arch/x86/realmode/rm/header.S | 8 +-
arch/x86/realmode/rm/reboot.S | 13 +-
arch/x86/realmode/rm/stack.S | 14 +-
arch/x86/realmode/rm/trampoline_32.S | 16 +-
arch/x86/realmode/rm/trampoline_64.S | 29 +--
arch/x86/realmode/rm/trampoline_common.S | 4 +-
arch/x86/realmode/rm/wakeup_asm.S | 15 +-
arch/x86/realmode/rmpiggy.S | 10 +-
arch/x86/um/vdso/vdso.S | 6 +-
arch/x86/xen/xen-asm.S | 20 +-
arch/x86/xen/xen-asm_32.S | 7 +-
arch/x86/xen/xen-asm_64.S | 34 +--
arch/x86/xen/xen-head.S | 8 +-
arch/x86/xen/xen-pvh.S | 15 +-
include/linux/linkage.h | 247 ++++++++++++++++++++-
131 files changed, 1459 insertions(+), 982 deletions(-)
create mode 100644 Documentation/asm-annotations.rst

--
2.16.3



2018-05-18 09:18:21

by Jiri Slaby

Subject: [PATCH v6 16/28] x86/asm: do not annotate functions by GLOBAL

GLOBAL is a custom x86 macro and is going to die very soon. It was
meant for global symbols, but here it was used for functions. Instead,
use the new macros SYM_FUNC_START* and SYM_CODE_START* (depending on the
type of the function), which are dedicated to global functions. Since
both require a closing SYM_*_END, add those here too.

startup_64, which does not use GLOBAL but sets .globl explicitly, is
converted as well.

in_pm32 should not be global at all, as it is only used locally, so
switch it to SYM_FUNC_START_LOCAL_NOALIGN.

The lack of alignment of these symbols is preserved by using the
_NOALIGN variants.

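For illustration, the conversion pattern is (memcpy taken from the hunk
below):

  -GLOBAL(memcpy)
  +SYM_FUNC_START_NOALIGN(memcpy)
          ...
  -ENDPROC(memcpy)
  +SYM_FUNC_END(memcpy)

while the locally-used in_pm32 gets SYM_FUNC_START_LOCAL_NOALIGN instead.
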
Signed-off-by: Jiri Slaby <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: <[email protected]>
---
arch/x86/boot/copy.S | 16 ++++++++--------
arch/x86/boot/pmjump.S | 8 ++++----
arch/x86/kernel/head_64.S | 5 +++--
3 files changed, 15 insertions(+), 14 deletions(-)

diff --git a/arch/x86/boot/copy.S b/arch/x86/boot/copy.S
index 15d9f74b0008..73aa8307a10f 100644
--- a/arch/x86/boot/copy.S
+++ b/arch/x86/boot/copy.S
@@ -17,7 +17,7 @@
.code16
.text

-GLOBAL(memcpy)
+SYM_FUNC_START_NOALIGN(memcpy)
pushw %si
pushw %di
movw %ax, %di
@@ -31,9 +31,9 @@ GLOBAL(memcpy)
popw %di
popw %si
retl
-ENDPROC(memcpy)
+SYM_FUNC_END(memcpy)

-GLOBAL(memset)
+SYM_FUNC_START_NOALIGN(memset)
pushw %di
movw %ax, %di
movzbl %dl, %eax
@@ -46,22 +46,22 @@ GLOBAL(memset)
rep; stosb
popw %di
retl
-ENDPROC(memset)
+SYM_FUNC_END(memset)

-GLOBAL(copy_from_fs)
+SYM_FUNC_START_NOALIGN(copy_from_fs)
pushw %ds
pushw %fs
popw %ds
calll memcpy
popw %ds
retl
-ENDPROC(copy_from_fs)
+SYM_FUNC_END(copy_from_fs)

-GLOBAL(copy_to_fs)
+SYM_FUNC_START_NOALIGN(copy_to_fs)
pushw %es
pushw %fs
popw %es
calll memcpy
popw %es
retl
-ENDPROC(copy_to_fs)
+SYM_FUNC_END(copy_to_fs)
diff --git a/arch/x86/boot/pmjump.S b/arch/x86/boot/pmjump.S
index 3e0edc6d2a20..b90e42eb1a62 100644
--- a/arch/x86/boot/pmjump.S
+++ b/arch/x86/boot/pmjump.S
@@ -23,7 +23,7 @@
/*
* void protected_mode_jump(u32 entrypoint, u32 bootparams);
*/
-GLOBAL(protected_mode_jump)
+SYM_FUNC_START_NOALIGN(protected_mode_jump)
movl %edx, %esi # Pointer to boot_params table

xorl %ebx, %ebx
@@ -44,11 +44,11 @@ GLOBAL(protected_mode_jump)
.byte 0x66, 0xea # ljmpl opcode
2: .long in_pm32 # offset
.word __BOOT_CS # segment
-ENDPROC(protected_mode_jump)
+SYM_FUNC_END(protected_mode_jump)

.code32
.section ".text32","ax"
-GLOBAL(in_pm32)
+SYM_FUNC_START_LOCAL_NOALIGN(in_pm32)
# Set up data segments for flat 32-bit mode
movl %ecx, %ds
movl %ecx, %es
@@ -74,4 +74,4 @@ GLOBAL(in_pm32)
lldt %cx

jmpl *%eax # Jump to the 32-bit entrypoint
-ENDPROC(in_pm32)
+SYM_FUNC_END(in_pm32)
diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S
index 80b620c824eb..48e71043b99c 100644
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -50,8 +50,7 @@ L3_START_KERNEL = pud_index(__START_KERNEL_map)
.text
__HEAD
.code64
- .globl startup_64
-startup_64:
+SYM_CODE_START_NOALIGN(startup_64)
UNWIND_HINT_EMPTY
/*
* At this point the CPU runs in 64bit mode CS.L = 1 CS.D = 0,
@@ -91,6 +90,8 @@ startup_64:
/* Form the CR3 value being sure to include the CR3 modifier */
addq $(early_top_pgt - __START_KERNEL_map), %rax
jmp 1f
+SYM_CODE_END(startup_64)
+
ENTRY(secondary_startup_64)
UNWIND_HINT_EMPTY
/*
--
2.16.3


2018-05-18 09:18:29

by Jiri Slaby

Subject: [PATCH v6 28/28] x86/asm: replace WEAK uses by SYM_INNER_LABEL

Use the new SYM_INNER_LABEL macro for WEAK entries in the middle of x86
assembly functions.

Also make sure WEAK is no longer defined for x86, as these were its last
users.
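
The conversion looks as follows (ftrace_stub taken from the hunks
below); the weak symbol becomes an inner label of the already annotated
enclosing function, with its linkage given explicitly:

  -WEAK(ftrace_stub)
  +SYM_INNER_LABEL(ftrace_stub, SYM_L_WEAK)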

Signed-off-by: Jiri Slaby <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: [email protected]
---
arch/x86/kernel/ftrace_32.S | 2 +-
arch/x86/kernel/ftrace_64.S | 2 +-
arch/x86/kernel/head_32.S | 2 +-
include/linux/linkage.h | 2 +-
4 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kernel/ftrace_32.S b/arch/x86/kernel/ftrace_32.S
index f519c22f6f9e..c98512e59c9b 100644
--- a/arch/x86/kernel/ftrace_32.S
+++ b/arch/x86/kernel/ftrace_32.S
@@ -98,7 +98,7 @@ ftrace_graph_call:
#endif

/* This is weak to keep gas from relaxing the jumps */
-WEAK(ftrace_stub)
+SYM_INNER_LABEL(ftrace_stub, SYM_L_WEAK)
ret
SYM_CODE_END(ftrace_caller)

diff --git a/arch/x86/kernel/ftrace_64.S b/arch/x86/kernel/ftrace_64.S
index 68b8c4b3e543..1902ee43eac8 100644
--- a/arch/x86/kernel/ftrace_64.S
+++ b/arch/x86/kernel/ftrace_64.S
@@ -186,7 +186,7 @@ SYM_INNER_LABEL(ftrace_graph_call, SYM_L_GLOBAL)
#endif

/* This is weak to keep gas from relaxing the jumps */
-WEAK(ftrace_stub)
+SYM_INNER_LABEL(ftrace_stub, SYM_L_WEAK)
retq
SYM_FUNC_END(ftrace_caller)

diff --git a/arch/x86/kernel/head_32.S b/arch/x86/kernel/head_32.S
index 7840ab96bee3..5e9715bafbbf 100644
--- a/arch/x86/kernel/head_32.S
+++ b/arch/x86/kernel/head_32.S
@@ -156,7 +156,7 @@ SYM_CODE_START(startup_32)
jmp *%eax

.Lbad_subarch:
-WEAK(xen_entry)
+SYM_INNER_LABEL(xen_entry, SYM_L_WEAK)
/* Unknown implementation; there's really
nothing we can do at this point. */
ud2a
diff --git a/include/linux/linkage.h b/include/linux/linkage.h
index 25cba413ee71..e4cbdc034127 100644
--- a/include/linux/linkage.h
+++ b/include/linux/linkage.h
@@ -114,13 +114,13 @@
#endif /* CONFIG_X86 */
#endif /* LINKER_SCRIPT */

+#ifndef CONFIG_X86
#ifndef WEAK
/* deprecated, use SYM_FUNC_START_WEAK* */
#define WEAK(name) \
SYM_FUNC_START_WEAK_NOALIGN(name)
#endif

-#ifndef CONFIG_X86
#ifndef END
/* deprecated, use SYM_FUNC_END, SYM_DATA_END, or SYM_END */
#define END(name) \
--
2.16.3


2018-05-18 09:18:40

by Jiri Slaby

Subject: [PATCH v6 03/28] x86/asm/suspend: use SYM_DATA for data

Some global data in the suspend code was marked with `ENTRY'. ENTRY is
intended for functions and is supposed to be paired with ENDPROC. ENTRY
also aligns symbols, which creates unnecessary holes between the data
items here. Since we are dropping the historical markings, make proper
use of the newly added SYM_DATA in this code.
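
The conversion pattern is simply (saved_magic taken from the hunks
below):

  -ENTRY(saved_magic)  .long 0
  +SYM_DATA(saved_magic, .long 0)

which keeps the symbol global but drops the function-style alignment.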

Signed-off-by: Jiri Slaby <[email protected]>
Cc: "Rafael J. Wysocki" <[email protected]>
Cc: Len Brown <[email protected]>
Cc: Pavel Machek <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: [email protected]
Cc: [email protected]
---
arch/x86/kernel/acpi/wakeup_32.S | 2 +-
arch/x86/kernel/acpi/wakeup_64.S | 2 +-
arch/x86/kernel/head_32.S | 6 ++----
arch/x86/kernel/head_64.S | 5 ++---
4 files changed, 6 insertions(+), 9 deletions(-)

diff --git a/arch/x86/kernel/acpi/wakeup_32.S b/arch/x86/kernel/acpi/wakeup_32.S
index 4203d4f0c68d..feac1e5ecba0 100644
--- a/arch/x86/kernel/acpi/wakeup_32.S
+++ b/arch/x86/kernel/acpi/wakeup_32.S
@@ -89,7 +89,7 @@ ret_point:

.data
ALIGN
-ENTRY(saved_magic) .long 0
+SYM_DATA(saved_magic, .long 0)
saved_eip: .long 0

# saved registers
diff --git a/arch/x86/kernel/acpi/wakeup_64.S b/arch/x86/kernel/acpi/wakeup_64.S
index 510fa12aab73..551758f48eb7 100644
--- a/arch/x86/kernel/acpi/wakeup_64.S
+++ b/arch/x86/kernel/acpi/wakeup_64.S
@@ -133,4 +133,4 @@ saved_rbx: .quad 0
saved_rip: .quad 0
saved_rsp: .quad 0

-ENTRY(saved_magic) .quad 0
+SYM_DATA(saved_magic, .quad 0)
diff --git a/arch/x86/kernel/head_32.S b/arch/x86/kernel/head_32.S
index b59e4fb40fd9..80965fd75fea 100644
--- a/arch/x86/kernel/head_32.S
+++ b/arch/x86/kernel/head_32.S
@@ -507,10 +507,8 @@ GLOBAL(early_recursion_flag)

__REFDATA
.align 4
-ENTRY(initial_code)
- .long i386_start_kernel
-ENTRY(setup_once_ref)
- .long setup_once
+SYM_DATA(initial_code, .long i386_start_kernel)
+SYM_DATA(setup_once_ref, .long setup_once)

/*
* BSS section
diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S
index 8344dd2f310a..17543533642d 100644
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -463,9 +463,8 @@ early_gdt_descr:
early_gdt_descr_base:
.quad INIT_PER_CPU_VAR(gdt_page)

-ENTRY(phys_base)
- /* This must match the first entry in level2_kernel_pgt */
- .quad 0x0000000000000000
+/* This must match the first entry in level2_kernel_pgt */
+SYM_DATA(phys_base, .quad 0x0000000000000000)
EXPORT_SYMBOL(phys_base)

#include "../../x86/xen/xen-head.S"
--
2.16.3


2018-05-18 09:18:40

by Jiri Slaby

Subject: [PATCH v6 27/28] x86_32/asm: change all ENTRY+ENDPROC to SYM_FUNC_*

These are all functions that are invoked from elsewhere, so annotate
them as global using the new SYM_FUNC_START, and replace their ENDPROCs
with SYM_FUNC_END.

With the last users gone, ENTRY/ENDPROC can finally be forced to be
undefined on X86.
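
Schematically, every conversion in this patch has the form
(csum_partial taken from the hunk below):

  -ENTRY(csum_partial)
  +SYM_FUNC_START(csum_partial)
          ...
  -ENDPROC(csum_partial)
  +SYM_FUNC_END(csum_partial)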

Signed-off-by: Jiri Slaby <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: [email protected]
Cc: Herbert Xu <[email protected]>
Cc: "David S. Miller" <[email protected]>
Cc: Bill Metzenthen <[email protected]>
Cc: Matt Fleming <[email protected]>
Cc: Ard Biesheuvel <[email protected]>
Cc: [email protected]
Cc: [email protected]
---
arch/x86/boot/compressed/efi_stub_32.S | 4 ++--
arch/x86/boot/compressed/head_32.S | 12 +++++------
arch/x86/crypto/salsa20-i586-asm_32.S | 4 ++--
arch/x86/crypto/serpent-sse2-i586-asm_32.S | 8 ++++----
arch/x86/crypto/twofish-i586-asm_32.S | 8 ++++----
arch/x86/entry/entry_32.S | 24 +++++++++++-----------
arch/x86/kernel/head_32.S | 16 +++++++--------
arch/x86/lib/atomic64_386_32.S | 4 ++--
arch/x86/lib/atomic64_cx8_32.S | 32 +++++++++++++++---------------
arch/x86/lib/checksum_32.S | 8 ++++----
arch/x86/math-emu/div_Xsig.S | 4 ++--
arch/x86/math-emu/div_small.S | 4 ++--
arch/x86/math-emu/mul_Xsig.S | 12 +++++------
arch/x86/math-emu/polynom_Xsig.S | 4 ++--
arch/x86/math-emu/reg_norm.S | 8 ++++----
arch/x86/math-emu/reg_round.S | 4 ++--
arch/x86/math-emu/reg_u_add.S | 4 ++--
arch/x86/math-emu/reg_u_div.S | 4 ++--
arch/x86/math-emu/reg_u_mul.S | 4 ++--
arch/x86/math-emu/reg_u_sub.S | 4 ++--
arch/x86/math-emu/round_Xsig.S | 8 ++++----
arch/x86/math-emu/shr_Xsig.S | 4 ++--
arch/x86/math-emu/wm_shrx.S | 8 ++++----
arch/x86/math-emu/wm_sqrt.S | 4 ++--
arch/x86/platform/efi/efi_stub_32.S | 4 ++--
include/linux/linkage.h | 8 +++-----
26 files changed, 103 insertions(+), 105 deletions(-)

diff --git a/arch/x86/boot/compressed/efi_stub_32.S b/arch/x86/boot/compressed/efi_stub_32.S
index 257e341fd2c8..ed6c351d34ed 100644
--- a/arch/x86/boot/compressed/efi_stub_32.S
+++ b/arch/x86/boot/compressed/efi_stub_32.S
@@ -24,7 +24,7 @@
*/

.text
-ENTRY(efi_call_phys)
+SYM_FUNC_START(efi_call_phys)
/*
* 0. The function can only be called in Linux kernel. So CS has been
* set to 0x0010, DS and SS have been set to 0x0018. In EFI, I found
@@ -77,7 +77,7 @@ ENTRY(efi_call_phys)
movl saved_return_addr(%edx), %ecx
pushl %ecx
ret
-ENDPROC(efi_call_phys)
+SYM_FUNC_END(efi_call_phys)
.previous

.data
diff --git a/arch/x86/boot/compressed/head_32.S b/arch/x86/boot/compressed/head_32.S
index 7e8ab0bb6968..3fa36496af12 100644
--- a/arch/x86/boot/compressed/head_32.S
+++ b/arch/x86/boot/compressed/head_32.S
@@ -61,7 +61,7 @@
.hidden _egot

__HEAD
-ENTRY(startup_32)
+SYM_FUNC_START(startup_32)
cld
/*
* Test KEEP_SEGMENTS flag to see if the bootloader is asking
@@ -142,14 +142,14 @@ ENTRY(startup_32)
*/
leal relocated(%ebx), %eax
jmp *%eax
-ENDPROC(startup_32)
+SYM_FUNC_END(startup_32)

#ifdef CONFIG_EFI_STUB
/*
* We don't need the return address, so set up the stack so efi_main() can find
* its arguments.
*/
-ENTRY(efi_pe_entry)
+SYM_FUNC_START(efi_pe_entry)
add $0x4, %esp

call 1f
@@ -174,9 +174,9 @@ ENTRY(efi_pe_entry)
pushl %eax
pushl %ecx
jmp 2f /* Skip efi_config initialization */
-ENDPROC(efi_pe_entry)
+SYM_FUNC_END(efi_pe_entry)

-ENTRY(efi32_stub_entry)
+SYM_FUNC_START(efi32_stub_entry)
add $0x4, %esp
popl %ecx
popl %edx
@@ -205,7 +205,7 @@ fail:
movl BP_code32_start(%esi), %eax
leal startup_32(%eax), %eax
jmp *%eax
-ENDPROC(efi32_stub_entry)
+SYM_FUNC_END(efi32_stub_entry)
#endif

.text
diff --git a/arch/x86/crypto/salsa20-i586-asm_32.S b/arch/x86/crypto/salsa20-i586-asm_32.S
index 6014b7b9e52a..edeb4c3e7389 100644
--- a/arch/x86/crypto/salsa20-i586-asm_32.S
+++ b/arch/x86/crypto/salsa20-i586-asm_32.S
@@ -8,7 +8,7 @@
.text

# enter salsa20_encrypt_bytes
-ENTRY(salsa20_encrypt_bytes)
+SYM_FUNC_START(salsa20_encrypt_bytes)
mov %esp,%eax
and $31,%eax
add $256,%eax
@@ -935,4 +935,4 @@ ENTRY(salsa20_encrypt_bytes)
add $64,%esi
# goto bytesatleast1
jmp ._bytesatleast1
-ENDPROC(salsa20_encrypt_bytes)
+SYM_FUNC_END(salsa20_encrypt_bytes)
diff --git a/arch/x86/crypto/serpent-sse2-i586-asm_32.S b/arch/x86/crypto/serpent-sse2-i586-asm_32.S
index d348f1553a79..f3cebd3c6739 100644
--- a/arch/x86/crypto/serpent-sse2-i586-asm_32.S
+++ b/arch/x86/crypto/serpent-sse2-i586-asm_32.S
@@ -512,7 +512,7 @@
pxor t0, x3; \
movdqu x3, (3*4*4)(out);

-ENTRY(__serpent_enc_blk_4way)
+SYM_FUNC_START(__serpent_enc_blk_4way)
/* input:
* arg_ctx(%esp): ctx, CTX
* arg_dst(%esp): dst
@@ -574,9 +574,9 @@ ENTRY(__serpent_enc_blk_4way)
xor_blocks(%eax, RA, RB, RC, RD, RT0, RT1, RE);

ret;
-ENDPROC(__serpent_enc_blk_4way)
+SYM_FUNC_END(__serpent_enc_blk_4way)

-ENTRY(serpent_dec_blk_4way)
+SYM_FUNC_START(serpent_dec_blk_4way)
/* input:
* arg_ctx(%esp): ctx, CTX
* arg_dst(%esp): dst
@@ -628,4 +628,4 @@ ENTRY(serpent_dec_blk_4way)
write_blocks(%eax, RC, RD, RB, RE, RT0, RT1, RA);

ret;
-ENDPROC(serpent_dec_blk_4way)
+SYM_FUNC_END(serpent_dec_blk_4way)
diff --git a/arch/x86/crypto/twofish-i586-asm_32.S b/arch/x86/crypto/twofish-i586-asm_32.S
index 694ea4587ba7..8ecb5234b2b3 100644
--- a/arch/x86/crypto/twofish-i586-asm_32.S
+++ b/arch/x86/crypto/twofish-i586-asm_32.S
@@ -220,7 +220,7 @@
xor %esi, d ## D;\
ror $1, d ## D;

-ENTRY(twofish_enc_blk)
+SYM_FUNC_START(twofish_enc_blk)
push %ebp /* save registers according to calling convention*/
push %ebx
push %esi
@@ -274,9 +274,9 @@ ENTRY(twofish_enc_blk)
pop %ebp
mov $1, %eax
ret
-ENDPROC(twofish_enc_blk)
+SYM_FUNC_END(twofish_enc_blk)

-ENTRY(twofish_dec_blk)
+SYM_FUNC_START(twofish_dec_blk)
push %ebp /* save registers according to calling convention*/
push %ebx
push %esi
@@ -331,4 +331,4 @@ ENTRY(twofish_dec_blk)
pop %ebp
mov $1, %eax
ret
-ENDPROC(twofish_dec_blk)
+SYM_FUNC_END(twofish_dec_blk)
diff --git a/arch/x86/entry/entry_32.S b/arch/x86/entry/entry_32.S
index ec2ea6379582..f82608268523 100644
--- a/arch/x86/entry/entry_32.S
+++ b/arch/x86/entry/entry_32.S
@@ -273,7 +273,7 @@ SYM_CODE_END(__switch_to_asm)
* asmlinkage function so its argument has to be pushed on the stack. This
* wrapper creates a proper "end of stack" frame header before the call.
*/
-ENTRY(schedule_tail_wrapper)
+SYM_FUNC_START(schedule_tail_wrapper)
FRAME_BEGIN

pushl %eax
@@ -282,7 +282,7 @@ ENTRY(schedule_tail_wrapper)

FRAME_END
ret
-ENDPROC(schedule_tail_wrapper)
+SYM_FUNC_END(schedule_tail_wrapper)
/*
* A newly forked process directly context switches into this address.
*
@@ -414,7 +414,7 @@ SYM_CODE_END(xen_sysenter_target)
* ebp user stack
* 0(%ebp) arg6
*/
-ENTRY(entry_SYSENTER_32)
+SYM_FUNC_START(entry_SYSENTER_32)
movl TSS_sysenter_sp0(%esp), %esp
.Lsysenter_past_esp:
pushl $__USER_DS /* pt_regs->ss */
@@ -504,7 +504,7 @@ ENTRY(entry_SYSENTER_32)
popfl
jmp .Lsysenter_flags_fixed
SYM_ENTRY(__end_SYSENTER_singlestep_region, SYM_L_GLOBAL, SYM_A_NONE)
-ENDPROC(entry_SYSENTER_32)
+SYM_FUNC_END(entry_SYSENTER_32)

/*
* 32-bit legacy system call entry.
@@ -534,7 +534,7 @@ ENDPROC(entry_SYSENTER_32)
* edi arg5
* ebp arg6
*/
-ENTRY(entry_INT80_32)
+SYM_FUNC_START(entry_INT80_32)
ASM_CLAC
pushl %eax /* pt_regs->orig_ax */
SAVE_ALL pt_regs_ax=$-ENOSYS /* save rest */
@@ -620,7 +620,7 @@ SYM_CODE_END(iret_exc)
lss (%esp), %esp /* switch to espfix segment */
jmp .Lrestore_nocheck
#endif
-ENDPROC(entry_INT80_32)
+SYM_FUNC_END(entry_INT80_32)

.macro FIXUP_ESPFIX_STACK
/*
@@ -688,7 +688,7 @@ SYM_CODE_START_LOCAL(common_interrupt)
SYM_CODE_END(common_interrupt)

#define BUILD_INTERRUPT3(name, nr, fn) \
-ENTRY(name) \
+SYM_FUNC_START(name) \
ASM_CLAC; \
pushl $~(nr); \
SAVE_ALL; \
@@ -697,7 +697,7 @@ ENTRY(name) \
movl %esp, %eax; \
call fn; \
jmp ret_from_intr; \
-ENDPROC(name)
+SYM_FUNC_END(name)

#define BUILD_INTERRUPT(name, nr) \
BUILD_INTERRUPT3(name, nr, smp_##name); \
@@ -816,7 +816,7 @@ SYM_CODE_START(spurious_interrupt_bug)
SYM_CODE_END(spurious_interrupt_bug)

#ifdef CONFIG_XEN
-ENTRY(xen_hypervisor_callback)
+SYM_FUNC_START(xen_hypervisor_callback)
pushl $-1 /* orig_ax = -1 => not a system call */
SAVE_ALL
ENCODE_FRAME_POINTER
@@ -844,7 +844,7 @@ SYM_INNER_LABEL_ALIGN(xen_do_upcall, SYM_L_GLOBAL)
call xen_maybe_preempt_hcall
#endif
jmp ret_from_intr
-ENDPROC(xen_hypervisor_callback)
+SYM_FUNC_END(xen_hypervisor_callback)

/*
* Hypervisor uses this for application faults while it executes.
@@ -858,7 +858,7 @@ ENDPROC(xen_hypervisor_callback)
* to pop the stack frame we end up in an infinite loop of failsafe callbacks.
* We distinguish between categories by maintaining a status value in EAX.
*/
-ENTRY(xen_failsafe_callback)
+SYM_FUNC_START(xen_failsafe_callback)
pushl %eax
movl $1, %eax
1: mov 4(%esp), %ds
@@ -895,7 +895,7 @@ ENTRY(xen_failsafe_callback)
_ASM_EXTABLE(2b, 7b)
_ASM_EXTABLE(3b, 8b)
_ASM_EXTABLE(4b, 9b)
-ENDPROC(xen_failsafe_callback)
+SYM_FUNC_END(xen_failsafe_callback)

BUILD_INTERRUPT3(xen_hvm_callback_vector, HYPERVISOR_CALLBACK_VECTOR,
xen_evtchn_do_upcall)
diff --git a/arch/x86/kernel/head_32.S b/arch/x86/kernel/head_32.S
index ba9df7cc545d..7840ab96bee3 100644
--- a/arch/x86/kernel/head_32.S
+++ b/arch/x86/kernel/head_32.S
@@ -180,12 +180,12 @@ SYM_CODE_END(startup_32)
* up already except stack. We just set up stack here. Then call
* start_secondary().
*/
-ENTRY(start_cpu0)
+SYM_FUNC_START(start_cpu0)
movl initial_stack, %ecx
movl %ecx, %esp
call *(initial_code)
1: jmp 1b
-ENDPROC(start_cpu0)
+SYM_FUNC_END(start_cpu0)
#endif

/*
@@ -196,7 +196,7 @@ ENDPROC(start_cpu0)
* If cpu hotplug is not supported then this code can go in init section
* which will be freed later
*/
-ENTRY(startup_32_smp)
+SYM_FUNC_START(startup_32_smp)
cld
movl $(__BOOT_DS),%eax
movl %eax,%ds
@@ -363,7 +363,7 @@ ENTRY(startup_32_smp)

call *(initial_code)
1: jmp 1b
-ENDPROC(startup_32_smp)
+SYM_FUNC_END(startup_32_smp)

#include "verify_cpu.S"

@@ -393,7 +393,7 @@ setup_once:
andl $0,setup_once_ref /* Once is enough, thanks */
ret

-ENTRY(early_idt_handler_array)
+SYM_FUNC_START(early_idt_handler_array)
# 36(%esp) %eflags
# 32(%esp) %cs
# 28(%esp) %eip
@@ -408,7 +408,7 @@ ENTRY(early_idt_handler_array)
i = i + 1
.fill early_idt_handler_array + i*EARLY_IDT_HANDLER_SIZE - ., 1, 0xcc
.endr
-ENDPROC(early_idt_handler_array)
+SYM_FUNC_END(early_idt_handler_array)

SYM_CODE_START_LOCAL(early_idt_handler_common)
/*
@@ -464,7 +464,7 @@ SYM_CODE_START_LOCAL(early_idt_handler_common)
SYM_CODE_END(early_idt_handler_common)

/* This is the default interrupt "handler" :-) */
-ENTRY(early_ignore_irq)
+SYM_FUNC_START(early_ignore_irq)
cld
#ifdef CONFIG_PRINTK
pushl %eax
@@ -499,7 +499,7 @@ ENTRY(early_ignore_irq)
hlt_loop:
hlt
jmp hlt_loop
-ENDPROC(early_ignore_irq)
+SYM_FUNC_END(early_ignore_irq)

__INITDATA
.align 4
diff --git a/arch/x86/lib/atomic64_386_32.S b/arch/x86/lib/atomic64_386_32.S
index 9b0ca8fe80fc..9ed71edd9dfe 100644
--- a/arch/x86/lib/atomic64_386_32.S
+++ b/arch/x86/lib/atomic64_386_32.S
@@ -24,10 +24,10 @@

#define BEGIN(op) \
.macro endp; \
-ENDPROC(atomic64_##op##_386); \
+SYM_FUNC_END(atomic64_##op##_386); \
.purgem endp; \
.endm; \
-ENTRY(atomic64_##op##_386); \
+SYM_FUNC_START(atomic64_##op##_386); \
LOCK v;

#define ENDP endp
diff --git a/arch/x86/lib/atomic64_cx8_32.S b/arch/x86/lib/atomic64_cx8_32.S
index db3ae85440ff..f02f70890121 100644
--- a/arch/x86/lib/atomic64_cx8_32.S
+++ b/arch/x86/lib/atomic64_cx8_32.S
@@ -20,12 +20,12 @@
cmpxchg8b (\reg)
.endm

-ENTRY(atomic64_read_cx8)
+SYM_FUNC_START(atomic64_read_cx8)
read64 %ecx
ret
-ENDPROC(atomic64_read_cx8)
+SYM_FUNC_END(atomic64_read_cx8)

-ENTRY(atomic64_set_cx8)
+SYM_FUNC_START(atomic64_set_cx8)
1:
/* we don't need LOCK_PREFIX since aligned 64-bit writes
* are atomic on 586 and newer */
@@ -33,19 +33,19 @@ ENTRY(atomic64_set_cx8)
jne 1b

ret
-ENDPROC(atomic64_set_cx8)
+SYM_FUNC_END(atomic64_set_cx8)

-ENTRY(atomic64_xchg_cx8)
+SYM_FUNC_START(atomic64_xchg_cx8)
1:
LOCK_PREFIX
cmpxchg8b (%esi)
jne 1b

ret
-ENDPROC(atomic64_xchg_cx8)
+SYM_FUNC_END(atomic64_xchg_cx8)

.macro addsub_return func ins insc
-ENTRY(atomic64_\func\()_return_cx8)
+SYM_FUNC_START(atomic64_\func\()_return_cx8)
pushl %ebp
pushl %ebx
pushl %esi
@@ -73,14 +73,14 @@ ENTRY(atomic64_\func\()_return_cx8)
popl %ebx
popl %ebp
ret
-ENDPROC(atomic64_\func\()_return_cx8)
+SYM_FUNC_END(atomic64_\func\()_return_cx8)
.endm

addsub_return add add adc
addsub_return sub sub sbb

.macro incdec_return func ins insc
-ENTRY(atomic64_\func\()_return_cx8)
+SYM_FUNC_START(atomic64_\func\()_return_cx8)
pushl %ebx

read64 %esi
@@ -98,13 +98,13 @@ ENTRY(atomic64_\func\()_return_cx8)
movl %ecx, %edx
popl %ebx
ret
-ENDPROC(atomic64_\func\()_return_cx8)
+SYM_FUNC_END(atomic64_\func\()_return_cx8)
.endm

incdec_return inc add adc
incdec_return dec sub sbb

-ENTRY(atomic64_dec_if_positive_cx8)
+SYM_FUNC_START(atomic64_dec_if_positive_cx8)
pushl %ebx

read64 %esi
@@ -123,9 +123,9 @@ ENTRY(atomic64_dec_if_positive_cx8)
movl %ecx, %edx
popl %ebx
ret
-ENDPROC(atomic64_dec_if_positive_cx8)
+SYM_FUNC_END(atomic64_dec_if_positive_cx8)

-ENTRY(atomic64_add_unless_cx8)
+SYM_FUNC_START(atomic64_add_unless_cx8)
pushl %ebp
pushl %ebx
/* these just push these two parameters on the stack */
@@ -159,9 +159,9 @@ ENTRY(atomic64_add_unless_cx8)
jne 2b
xorl %eax, %eax
jmp 3b
-ENDPROC(atomic64_add_unless_cx8)
+SYM_FUNC_END(atomic64_add_unless_cx8)

-ENTRY(atomic64_inc_not_zero_cx8)
+SYM_FUNC_START(atomic64_inc_not_zero_cx8)
pushl %ebx

read64 %esi
@@ -181,4 +181,4 @@ ENTRY(atomic64_inc_not_zero_cx8)
3:
popl %ebx
ret
-ENDPROC(atomic64_inc_not_zero_cx8)
+SYM_FUNC_END(atomic64_inc_not_zero_cx8)
diff --git a/arch/x86/lib/checksum_32.S b/arch/x86/lib/checksum_32.S
index 28a148de1843..7bd2d075d46a 100644
--- a/arch/x86/lib/checksum_32.S
+++ b/arch/x86/lib/checksum_32.S
@@ -50,7 +50,7 @@ unsigned int csum_partial(const unsigned char * buff, int len, unsigned int sum)
* Fortunately, it is easy to convert 2-byte alignment to 4-byte
* alignment for the unrolled loop.
*/
-ENTRY(csum_partial)
+SYM_FUNC_START(csum_partial)
pushl %esi
pushl %ebx
movl 20(%esp),%eax # Function arg: unsigned int sum
@@ -132,13 +132,13 @@ ENTRY(csum_partial)
popl %ebx
popl %esi
ret
-ENDPROC(csum_partial)
+SYM_FUNC_END(csum_partial)

#else

/* Version for PentiumII/PPro */

-ENTRY(csum_partial)
+SYM_FUNC_START(csum_partial)
pushl %esi
pushl %ebx
movl 20(%esp),%eax # Function arg: unsigned int sum
@@ -250,7 +250,7 @@ ENTRY(csum_partial)
popl %ebx
popl %esi
ret
-ENDPROC(csum_partial)
+SYM_FUNC_END(csum_partial)

#endif
EXPORT_SYMBOL(csum_partial)
diff --git a/arch/x86/math-emu/div_Xsig.S b/arch/x86/math-emu/div_Xsig.S
index ee08449d20fd..951da2ad54bb 100644
--- a/arch/x86/math-emu/div_Xsig.S
+++ b/arch/x86/math-emu/div_Xsig.S
@@ -75,7 +75,7 @@ FPU_result_1:


.text
-ENTRY(div_Xsig)
+SYM_FUNC_START(div_Xsig)
pushl %ebp
movl %esp,%ebp
#ifndef NON_REENTRANT_FPU
@@ -364,4 +364,4 @@ L_bugged_2:
pop %ebx
jmp L_exit
#endif /* PARANOID */
-ENDPROC(div_Xsig)
+SYM_FUNC_END(div_Xsig)
diff --git a/arch/x86/math-emu/div_small.S b/arch/x86/math-emu/div_small.S
index 8f5025c80ee0..d047d1816abe 100644
--- a/arch/x86/math-emu/div_small.S
+++ b/arch/x86/math-emu/div_small.S
@@ -19,7 +19,7 @@
#include "fpu_emu.h"

.text
-ENTRY(FPU_div_small)
+SYM_FUNC_START(FPU_div_small)
pushl %ebp
movl %esp,%ebp

@@ -45,4 +45,4 @@ ENTRY(FPU_div_small)

leave
ret
-ENDPROC(FPU_div_small)
+SYM_FUNC_END(FPU_div_small)
diff --git a/arch/x86/math-emu/mul_Xsig.S b/arch/x86/math-emu/mul_Xsig.S
index 3e489122a2b0..4afc7b1fa6e9 100644
--- a/arch/x86/math-emu/mul_Xsig.S
+++ b/arch/x86/math-emu/mul_Xsig.S
@@ -25,7 +25,7 @@
#include "fpu_emu.h"

.text
-ENTRY(mul32_Xsig)
+SYM_FUNC_START(mul32_Xsig)
pushl %ebp
movl %esp,%ebp
subl $16,%esp
@@ -63,10 +63,10 @@ ENTRY(mul32_Xsig)
popl %esi
leave
ret
-ENDPROC(mul32_Xsig)
+SYM_FUNC_END(mul32_Xsig)


-ENTRY(mul64_Xsig)
+SYM_FUNC_START(mul64_Xsig)
pushl %ebp
movl %esp,%ebp
subl $16,%esp
@@ -116,11 +116,11 @@ ENTRY(mul64_Xsig)
popl %esi
leave
ret
-ENDPROC(mul64_Xsig)
+SYM_FUNC_END(mul64_Xsig)



-ENTRY(mul_Xsig_Xsig)
+SYM_FUNC_START(mul_Xsig_Xsig)
pushl %ebp
movl %esp,%ebp
subl $16,%esp
@@ -176,4 +176,4 @@ ENTRY(mul_Xsig_Xsig)
popl %esi
leave
ret
-ENDPROC(mul_Xsig_Xsig)
+SYM_FUNC_END(mul_Xsig_Xsig)
diff --git a/arch/x86/math-emu/polynom_Xsig.S b/arch/x86/math-emu/polynom_Xsig.S
index 604f0b2d17e8..702315eecb86 100644
--- a/arch/x86/math-emu/polynom_Xsig.S
+++ b/arch/x86/math-emu/polynom_Xsig.S
@@ -37,7 +37,7 @@
#define OVERFLOWED -16(%ebp) /* addition overflow flag */

.text
-ENTRY(polynomial_Xsig)
+SYM_FUNC_START(polynomial_Xsig)
pushl %ebp
movl %esp,%ebp
subl $32,%esp
@@ -134,4 +134,4 @@ L_accum_done:
popl %esi
leave
ret
-ENDPROC(polynomial_Xsig)
+SYM_FUNC_END(polynomial_Xsig)
diff --git a/arch/x86/math-emu/reg_norm.S b/arch/x86/math-emu/reg_norm.S
index 7f6b4392a15d..cad1d60b1e84 100644
--- a/arch/x86/math-emu/reg_norm.S
+++ b/arch/x86/math-emu/reg_norm.S
@@ -22,7 +22,7 @@


.text
-ENTRY(FPU_normalize)
+SYM_FUNC_START(FPU_normalize)
pushl %ebp
movl %esp,%ebp
pushl %ebx
@@ -95,12 +95,12 @@ L_overflow:
call arith_overflow
pop %ebx
jmp L_exit
-ENDPROC(FPU_normalize)
+SYM_FUNC_END(FPU_normalize)



/* Normalise without reporting underflow or overflow */
-ENTRY(FPU_normalize_nuo)
+SYM_FUNC_START(FPU_normalize_nuo)
pushl %ebp
movl %esp,%ebp
pushl %ebx
@@ -147,4 +147,4 @@ L_exit_nuo_zero:
popl %ebx
leave
ret
-ENDPROC(FPU_normalize_nuo)
+SYM_FUNC_END(FPU_normalize_nuo)
diff --git a/arch/x86/math-emu/reg_round.S b/arch/x86/math-emu/reg_round.S
index 04563421ee7d..11a1f798451b 100644
--- a/arch/x86/math-emu/reg_round.S
+++ b/arch/x86/math-emu/reg_round.S
@@ -109,7 +109,7 @@ FPU_denormal:
.globl fpu_Arith_exit

/* Entry point when called from C */
-ENTRY(FPU_round)
+SYM_FUNC_START(FPU_round)
pushl %ebp
movl %esp,%ebp
pushl %esi
@@ -708,4 +708,4 @@ L_exception_exit:
jmp fpu_reg_round_special_exit
#endif /* PARANOID */

-ENDPROC(FPU_round)
+SYM_FUNC_END(FPU_round)
diff --git a/arch/x86/math-emu/reg_u_add.S b/arch/x86/math-emu/reg_u_add.S
index 50fe9f8c893c..9c9e2c810afe 100644
--- a/arch/x86/math-emu/reg_u_add.S
+++ b/arch/x86/math-emu/reg_u_add.S
@@ -32,7 +32,7 @@
#include "control_w.h"

.text
-ENTRY(FPU_u_add)
+SYM_FUNC_START(FPU_u_add)
pushl %ebp
movl %esp,%ebp
pushl %esi
@@ -166,4 +166,4 @@ L_exit:
leave
ret
#endif /* PARANOID */
-ENDPROC(FPU_u_add)
+SYM_FUNC_END(FPU_u_add)
diff --git a/arch/x86/math-emu/reg_u_div.S b/arch/x86/math-emu/reg_u_div.S
index 94d545e118e4..e2fb5c2644c5 100644
--- a/arch/x86/math-emu/reg_u_div.S
+++ b/arch/x86/math-emu/reg_u_div.S
@@ -75,7 +75,7 @@ FPU_ovfl_flag:
#define DEST PARAM3

.text
-ENTRY(FPU_u_div)
+SYM_FUNC_START(FPU_u_div)
pushl %ebp
movl %esp,%ebp
#ifndef NON_REENTRANT_FPU
@@ -471,4 +471,4 @@ L_exit:
ret
#endif /* PARANOID */

-ENDPROC(FPU_u_div)
+SYM_FUNC_END(FPU_u_div)
diff --git a/arch/x86/math-emu/reg_u_mul.S b/arch/x86/math-emu/reg_u_mul.S
index 21cde47fb3e5..0c779c87ac5b 100644
--- a/arch/x86/math-emu/reg_u_mul.S
+++ b/arch/x86/math-emu/reg_u_mul.S
@@ -45,7 +45,7 @@ FPU_accum_1:


.text
-ENTRY(FPU_u_mul)
+SYM_FUNC_START(FPU_u_mul)
pushl %ebp
movl %esp,%ebp
#ifndef NON_REENTRANT_FPU
@@ -147,4 +147,4 @@ L_exit:
ret
#endif /* PARANOID */

-ENDPROC(FPU_u_mul)
+SYM_FUNC_END(FPU_u_mul)
diff --git a/arch/x86/math-emu/reg_u_sub.S b/arch/x86/math-emu/reg_u_sub.S
index f05dea7dec38..e9bb7c248649 100644
--- a/arch/x86/math-emu/reg_u_sub.S
+++ b/arch/x86/math-emu/reg_u_sub.S
@@ -33,7 +33,7 @@
#include "control_w.h"

.text
-ENTRY(FPU_u_sub)
+SYM_FUNC_START(FPU_u_sub)
pushl %ebp
movl %esp,%ebp
pushl %esi
@@ -271,4 +271,4 @@ L_exit:
popl %esi
leave
ret
-ENDPROC(FPU_u_sub)
+SYM_FUNC_END(FPU_u_sub)
diff --git a/arch/x86/math-emu/round_Xsig.S b/arch/x86/math-emu/round_Xsig.S
index 226a51e991f1..d9d7de8dbd7b 100644
--- a/arch/x86/math-emu/round_Xsig.S
+++ b/arch/x86/math-emu/round_Xsig.S
@@ -23,7 +23,7 @@


.text
-ENTRY(round_Xsig)
+SYM_FUNC_START(round_Xsig)
pushl %ebp
movl %esp,%ebp
pushl %ebx /* Reserve some space */
@@ -79,11 +79,11 @@ L_exit:
popl %ebx
leave
ret
-ENDPROC(round_Xsig)
+SYM_FUNC_END(round_Xsig)



-ENTRY(norm_Xsig)
+SYM_FUNC_START(norm_Xsig)
pushl %ebp
movl %esp,%ebp
pushl %ebx /* Reserve some space */
@@ -139,4 +139,4 @@ L_n_exit:
popl %ebx
leave
ret
-ENDPROC(norm_Xsig)
+SYM_FUNC_END(norm_Xsig)
diff --git a/arch/x86/math-emu/shr_Xsig.S b/arch/x86/math-emu/shr_Xsig.S
index 96f4779aa9c1..726af985f758 100644
--- a/arch/x86/math-emu/shr_Xsig.S
+++ b/arch/x86/math-emu/shr_Xsig.S
@@ -22,7 +22,7 @@
#include "fpu_emu.h"

.text
-ENTRY(shr_Xsig)
+SYM_FUNC_START(shr_Xsig)
push %ebp
movl %esp,%ebp
pushl %esi
@@ -86,4 +86,4 @@ L_more_than_95:
popl %esi
leave
ret
-ENDPROC(shr_Xsig)
+SYM_FUNC_END(shr_Xsig)
diff --git a/arch/x86/math-emu/wm_shrx.S b/arch/x86/math-emu/wm_shrx.S
index d588874eb6fb..4fc89174caf0 100644
--- a/arch/x86/math-emu/wm_shrx.S
+++ b/arch/x86/math-emu/wm_shrx.S
@@ -33,7 +33,7 @@
| Results returned in the 64 bit arg and eax. |
+---------------------------------------------------------------------------*/

-ENTRY(FPU_shrx)
+SYM_FUNC_START(FPU_shrx)
push %ebp
movl %esp,%ebp
pushl %esi
@@ -93,7 +93,7 @@ L_more_than_95:
popl %esi
leave
ret
-ENDPROC(FPU_shrx)
+SYM_FUNC_END(FPU_shrx)


/*---------------------------------------------------------------------------+
@@ -112,7 +112,7 @@ ENDPROC(FPU_shrx)
| part which has been shifted out of the arg. |
| Results returned in the 64 bit arg and eax. |
+---------------------------------------------------------------------------*/
-ENTRY(FPU_shrxs)
+SYM_FUNC_START(FPU_shrxs)
push %ebp
movl %esp,%ebp
pushl %esi
@@ -204,4 +204,4 @@ Ls_more_than_95:
popl %esi
leave
ret
-ENDPROC(FPU_shrxs)
+SYM_FUNC_END(FPU_shrxs)
diff --git a/arch/x86/math-emu/wm_sqrt.S b/arch/x86/math-emu/wm_sqrt.S
index f031c0e19356..3b2b58164ec1 100644
--- a/arch/x86/math-emu/wm_sqrt.S
+++ b/arch/x86/math-emu/wm_sqrt.S
@@ -75,7 +75,7 @@ FPU_fsqrt_arg_0:


.text
-ENTRY(wm_sqrt)
+SYM_FUNC_START(wm_sqrt)
pushl %ebp
movl %esp,%ebp
#ifndef NON_REENTRANT_FPU
@@ -469,4 +469,4 @@ sqrt_more_prec_large:
/* Our estimate is too large */
movl $0x7fffff00,%eax
jmp sqrt_round_result
-ENDPROC(wm_sqrt)
+SYM_FUNC_END(wm_sqrt)
diff --git a/arch/x86/platform/efi/efi_stub_32.S b/arch/x86/platform/efi/efi_stub_32.S
index ab2e91e76894..eed8b5b441f8 100644
--- a/arch/x86/platform/efi/efi_stub_32.S
+++ b/arch/x86/platform/efi/efi_stub_32.S
@@ -22,7 +22,7 @@
*/

.text
-ENTRY(efi_call_phys)
+SYM_FUNC_START(efi_call_phys)
/*
* 0. The function can only be called in Linux kernel. So CS has been
* set to 0x0010, DS and SS have been set to 0x0018. In EFI, I found
@@ -114,7 +114,7 @@ ENTRY(efi_call_phys)
movl (%edx), %ecx
pushl %ecx
ret
-ENDPROC(efi_call_phys)
+SYM_FUNC_END(efi_call_phys)
.previous

.data
diff --git a/include/linux/linkage.h b/include/linux/linkage.h
index 1b06f6b45198..25cba413ee71 100644
--- a/include/linux/linkage.h
+++ b/include/linux/linkage.h
@@ -105,13 +105,13 @@

/* === DEPRECATED annotations === */

-#ifndef CONFIG_X86_64
+#ifndef CONFIG_X86
#ifndef ENTRY
/* deprecated, use SYM_FUNC_START */
#define ENTRY(name) \
SYM_FUNC_START(name)
#endif
-#endif /* CONFIG_X86_64 */
+#endif /* CONFIG_X86 */
#endif /* LINKER_SCRIPT */

#ifndef WEAK
@@ -126,9 +126,7 @@
#define END(name) \
.size name, .-name
#endif
-#endif /* CONFIG_X86 */

-#ifndef CONFIG_X86_64
/* If symbol 'name' is treated as a subroutine (gets called, and returns)
* then please use ENDPROC to mark 'name' as STT_FUNC for the benefit of
* static analysis tools such as stack depth analyzer.
@@ -138,7 +136,7 @@
#define ENDPROC(name) \
SYM_FUNC_END(name)
#endif
-#endif /* CONFIG_X86_64 */
+#endif /* CONFIG_X86 */

/* === generic annotations === */

--
2.16.3


2018-05-18 09:19:03

by Jiri Slaby

Subject: [PATCH v6 25/28] x86_32/asm: add ENDs to some functions and relabel with SYM_CODE_*

All of these are functions that are invoked from elsewhere, but they
are not typical C functions, so annotate them using the new
SYM_CODE_START. None of them was balanced with any END, so mark their
ends with SYM_CODE_END now.
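
Schematically (xen_sysenter_target taken from the hunk below), a
previously unterminated ENTRY becomes a properly closed pair:

  -ENTRY(xen_sysenter_target)
  +SYM_CODE_START(xen_sysenter_target)
          ...
  +SYM_CODE_END(xen_sysenter_target)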

Signed-off-by: Jiri Slaby <[email protected]>
Reviewed-by: Boris Ostrovsky <[email protected]> [xen bits]
Reviewed-by: Rafael J. Wysocki <[email protected]> [hibernate]
Cc: Thomas Gleixner <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: [email protected]
Cc: "Rafael J. Wysocki" <[email protected]>
Cc: Len Brown <[email protected]>
Cc: Pavel Machek <[email protected]>
Cc: Boris Ostrovsky <[email protected]>
Cc: Juergen Gross <[email protected]>
Cc: [email protected]
Cc: [email protected]
---
arch/x86/entry/entry_32.S | 3 ++-
arch/x86/kernel/acpi/wakeup_32.S | 7 ++++---
arch/x86/kernel/ftrace_32.S | 3 ++-
arch/x86/kernel/head_32.S | 3 ++-
arch/x86/power/hibernate_asm_32.S | 6 ++++--
arch/x86/realmode/rm/trampoline_32.S | 6 ++++--
arch/x86/xen/xen-asm_32.S | 7 ++++---
7 files changed, 22 insertions(+), 13 deletions(-)

diff --git a/arch/x86/entry/entry_32.S b/arch/x86/entry/entry_32.S
index f701541ecf10..75d9670bffd8 100644
--- a/arch/x86/entry/entry_32.S
+++ b/arch/x86/entry/entry_32.S
@@ -376,9 +376,10 @@ SYM_ENTRY(__begin_SYSENTER_singlestep_region, SYM_L_GLOBAL, SYM_A_NONE)
* Xen doesn't set %esp to be precisely what the normal SYSENTER
* entry point expects, so fix it up before using the normal path.
*/
-ENTRY(xen_sysenter_target)
+SYM_CODE_START(xen_sysenter_target)
addl $5*4, %esp /* remove xen-provided frame */
jmp .Lsysenter_past_esp
+SYM_CODE_END(xen_sysenter_target)
#endif

/*
diff --git a/arch/x86/kernel/acpi/wakeup_32.S b/arch/x86/kernel/acpi/wakeup_32.S
index feac1e5ecba0..71a05a6cc36a 100644
--- a/arch/x86/kernel/acpi/wakeup_32.S
+++ b/arch/x86/kernel/acpi/wakeup_32.S
@@ -8,8 +8,7 @@
.code32
ALIGN

-ENTRY(wakeup_pmode_return)
-wakeup_pmode_return:
+SYM_CODE_START(wakeup_pmode_return)
movw $__KERNEL_DS, %ax
movw %ax, %ss
movw %ax, %fs
@@ -38,6 +37,7 @@ wakeup_pmode_return:
# jump to place where we left off
movl saved_eip, %eax
jmp *%eax
+SYM_CODE_END(wakeup_pmode_return)

bogus_magic:
jmp bogus_magic
@@ -71,7 +71,7 @@ restore_registers:
popfl
ret

-ENTRY(do_suspend_lowlevel)
+SYM_CODE_START(do_suspend_lowlevel)
call save_processor_state
call save_registers
pushl $3
@@ -86,6 +86,7 @@ ret_point:
call restore_registers
call restore_processor_state
ret
+SYM_CODE_END(do_suspend_lowlevel)

.data
ALIGN
diff --git a/arch/x86/kernel/ftrace_32.S b/arch/x86/kernel/ftrace_32.S
index b855dc10daeb..f4dca7df8ad6 100644
--- a/arch/x86/kernel/ftrace_32.S
+++ b/arch/x86/kernel/ftrace_32.S
@@ -102,7 +102,7 @@ WEAK(ftrace_stub)
ret
END(ftrace_caller)

-ENTRY(ftrace_regs_caller)
+SYM_CODE_START(ftrace_regs_caller)
/*
* i386 does not save SS and ESP when coming from kernel.
* Instead, to get sp, &regs->sp is used (see ptrace.h).
@@ -170,6 +170,7 @@ SYM_INNER_LABEL(ftrace_regs_call, SYM_L_GLOBAL)
lea 3*4(%esp), %esp /* Skip orig_ax, ip and cs */

jmp .Lftrace_ret
+SYM_CODE_END(ftrace_regs_caller)
#else /* ! CONFIG_DYNAMIC_FTRACE */

ENTRY(function_hook)
diff --git a/arch/x86/kernel/head_32.S b/arch/x86/kernel/head_32.S
index 1a6a6b4e4b4c..ba9df7cc545d 100644
--- a/arch/x86/kernel/head_32.S
+++ b/arch/x86/kernel/head_32.S
@@ -64,7 +64,7 @@ RESERVE_BRK(pagetables, INIT_MAP_SIZE)
* can.
*/
__HEAD
-ENTRY(startup_32)
+SYM_CODE_START(startup_32)
movl pa(initial_stack),%ecx

/* test KEEP_SEGMENTS flag to see if the bootloader is asking
@@ -172,6 +172,7 @@ num_subarch_entries = (. - subarch_entries) / 4
#else
jmp .Ldefault_entry
#endif /* CONFIG_PARAVIRT */
+SYM_CODE_END(startup_32)

#ifdef CONFIG_HOTPLUG_CPU
/*
diff --git a/arch/x86/power/hibernate_asm_32.S b/arch/x86/power/hibernate_asm_32.S
index 6e56815e13a0..3cd15e34aa87 100644
--- a/arch/x86/power/hibernate_asm_32.S
+++ b/arch/x86/power/hibernate_asm_32.S
@@ -15,7 +15,7 @@

.text

-ENTRY(swsusp_arch_suspend)
+SYM_CODE_START(swsusp_arch_suspend)
movl %esp, saved_context_esp
movl %ebx, saved_context_ebx
movl %ebp, saved_context_ebp
@@ -26,8 +26,9 @@ ENTRY(swsusp_arch_suspend)

call swsusp_save
ret
+SYM_CODE_END(swsusp_arch_suspend)

-ENTRY(restore_image)
+SYM_CODE_START(restore_image)
movl mmu_cr4_features, %ecx
movl resume_pg_dir, %eax
subl $__PAGE_OFFSET, %eax
@@ -83,3 +84,4 @@ done:
xorl %eax, %eax

ret
+SYM_CODE_END(restore_image)
diff --git a/arch/x86/realmode/rm/trampoline_32.S b/arch/x86/realmode/rm/trampoline_32.S
index e96efcd60bf7..a3b047a44c5c 100644
--- a/arch/x86/realmode/rm/trampoline_32.S
+++ b/arch/x86/realmode/rm/trampoline_32.S
@@ -29,7 +29,7 @@
.code16

.balign PAGE_SIZE
-ENTRY(trampoline_start)
+SYM_CODE_START(trampoline_start)
wbinvd # Needed for NUMA-Q should be harmless for others

LJMPW_RM(1f)
@@ -57,11 +57,13 @@ ENTRY(trampoline_start)
lmsw %dx # into protected mode

ljmpl $__BOOT_CS, $pa_startup_32
+SYM_CODE_END(trampoline_start)

.section ".text32","ax"
.code32
-ENTRY(startup_32) # note: also used from wakeup_asm.S
+SYM_CODE_START(startup_32) # note: also used from wakeup_asm.S
jmp *%eax
+SYM_CODE_END(startup_32)

.bss
.balign 8
diff --git a/arch/x86/xen/xen-asm_32.S b/arch/x86/xen/xen-asm_32.S
index c15db060a242..8b8f8355b938 100644
--- a/arch/x86/xen/xen-asm_32.S
+++ b/arch/x86/xen/xen-asm_32.S
@@ -56,7 +56,7 @@
_ASM_EXTABLE(1b,2b)
.endm

-ENTRY(xen_iret)
+SYM_CODE_START(xen_iret)
/* test eflags for special cases */
testl $(X86_EFLAGS_VM | XEN_EFLAGS_NMI), 8(%esp)
jnz hyper_iret
@@ -122,6 +122,7 @@ xen_iret_end_crit:
hyper_iret:
/* put this out of line since its very rarely used */
jmp hypercall_page + __HYPERVISOR_iret * 32
+SYM_CODE_END(xen_iret)

.globl xen_iret_start_crit, xen_iret_end_crit

@@ -165,7 +166,7 @@ hyper_iret:
* SAVE_ALL state before going on, since it's usermode state which we
* eventually need to restore.
*/
-ENTRY(xen_iret_crit_fixup)
+SYM_CODE_START(xen_iret_crit_fixup)
/*
* Paranoia: Make sure we're really coming from kernel space.
* One could imagine a case where userspace jumps into the
@@ -204,4 +205,4 @@ ENTRY(xen_iret_crit_fixup)

lea 4(%edi), %esp /* point esp to new frame */
2: jmp xen_do_upcall
-
+SYM_CODE_END(xen_iret_crit_fixup)
--
2.16.3


2018-05-18 09:19:17

by Jiri Slaby

Subject: [PATCH v6 26/28] x86_32/asm: change all ENTRY+END to SYM_CODE_*

Change all assembly code which is marked using END (and not ENDPROC),
switching it to the appropriate new markings SYM_CODE_START and
SYM_CODE_END.

And since this removes the last user of END on X86, make sure END is no
longer defined there.
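
The pattern is (taking __switch_to_asm from the hunk below):

  -ENTRY(__switch_to_asm)
  +SYM_CODE_START(__switch_to_asm)
          ...
  -END(__switch_to_asm)
  +SYM_CODE_END(__switch_to_asm)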

Signed-off-by: Jiri Slaby <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: [email protected]
---
arch/x86/entry/entry_32.S | 104 ++++++++++++++++++++++----------------------
arch/x86/kernel/ftrace_32.S | 12 ++---
include/linux/linkage.h | 2 +
3 files changed, 60 insertions(+), 58 deletions(-)

diff --git a/arch/x86/entry/entry_32.S b/arch/x86/entry/entry_32.S
index 75d9670bffd8..ec2ea6379582 100644
--- a/arch/x86/entry/entry_32.S
+++ b/arch/x86/entry/entry_32.S
@@ -227,7 +227,7 @@
* %eax: prev task
* %edx: next task
*/
-ENTRY(__switch_to_asm)
+SYM_CODE_START(__switch_to_asm)
/*
* Save callee-saved registers
* This must match the order in struct inactive_task_frame
@@ -264,7 +264,7 @@ ENTRY(__switch_to_asm)
popl %ebp

jmp __switch_to
-END(__switch_to_asm)
+SYM_CODE_END(__switch_to_asm)

/*
* The unwinder expects the last frame on the stack to always be at the same
@@ -290,7 +290,7 @@ ENDPROC(schedule_tail_wrapper)
* ebx: kernel thread func (NULL for user thread)
* edi: kernel thread arg
*/
-ENTRY(ret_from_fork)
+SYM_CODE_START(ret_from_fork)
call schedule_tail_wrapper

testl %ebx, %ebx
@@ -313,7 +313,7 @@ ENTRY(ret_from_fork)
*/
movl $0, PT_EAX(%esp)
jmp 2b
-END(ret_from_fork)
+SYM_CODE_END(ret_from_fork)

/*
* Return to user mode is not as complex as all this looks,
@@ -349,7 +349,7 @@ SYM_INNER_LABEL_ALIGN(resume_userspace, SYM_L_LOCAL)
SYM_CODE_END(ret_from_exception)

#ifdef CONFIG_PREEMPT
-ENTRY(resume_kernel)
+SYM_CODE_START(resume_kernel)
DISABLE_INTERRUPTS(CLBR_ANY)
.Lneed_resched:
cmpl $0, PER_CPU_VAR(__preempt_count)
@@ -358,7 +358,7 @@ ENTRY(resume_kernel)
jz restore_all
call preempt_schedule_irq
jmp .Lneed_resched
-END(resume_kernel)
+SYM_CODE_END(resume_kernel)
#endif

SYM_ENTRY(__begin_SYSENTER_singlestep_region, SYM_L_GLOBAL, SYM_A_NONE)
@@ -661,7 +661,7 @@ ENDPROC(entry_INT80_32)
* We pack 1 stub into every 8-byte block.
*/
.align 8
-ENTRY(irq_entries_start)
+SYM_CODE_START(irq_entries_start)
vector=FIRST_EXTERNAL_VECTOR
.rept (FIRST_SYSTEM_VECTOR - FIRST_EXTERNAL_VECTOR)
pushl $(~vector+0x80) /* Note: always in signed byte range */
@@ -669,7 +669,7 @@ ENTRY(irq_entries_start)
jmp common_interrupt
.align 8
.endr
-END(irq_entries_start)
+SYM_CODE_END(irq_entries_start)

/*
* the CPU automatically disables interrupts when executing an IRQ vector,
@@ -705,14 +705,14 @@ ENDPROC(name)
/* The include is where all of the SMP etc. interrupts come from */
#include <asm/entry_arch.h>

-ENTRY(coprocessor_error)
+SYM_CODE_START(coprocessor_error)
ASM_CLAC
pushl $0
pushl $do_coprocessor_error
jmp common_exception
-END(coprocessor_error)
+SYM_CODE_END(coprocessor_error)

-ENTRY(simd_coprocessor_error)
+SYM_CODE_START(simd_coprocessor_error)
ASM_CLAC
pushl $0
#ifdef CONFIG_X86_INVD_BUG
@@ -724,96 +724,96 @@ ENTRY(simd_coprocessor_error)
pushl $do_simd_coprocessor_error
#endif
jmp common_exception
-END(simd_coprocessor_error)
+SYM_CODE_END(simd_coprocessor_error)

-ENTRY(device_not_available)
+SYM_CODE_START(device_not_available)
ASM_CLAC
pushl $-1 # mark this as an int
pushl $do_device_not_available
jmp common_exception
-END(device_not_available)
+SYM_CODE_END(device_not_available)

#ifdef CONFIG_PARAVIRT
-ENTRY(native_iret)
+SYM_CODE_START(native_iret)
iret
_ASM_EXTABLE(native_iret, iret_exc)
-END(native_iret)
+SYM_CODE_END(native_iret)
#endif

-ENTRY(overflow)
+SYM_CODE_START(overflow)
ASM_CLAC
pushl $0
pushl $do_overflow
jmp common_exception
-END(overflow)
+SYM_CODE_END(overflow)

-ENTRY(bounds)
+SYM_CODE_START(bounds)
ASM_CLAC
pushl $0
pushl $do_bounds
jmp common_exception
-END(bounds)
+SYM_CODE_END(bounds)

-ENTRY(invalid_op)
+SYM_CODE_START(invalid_op)
ASM_CLAC
pushl $0
pushl $do_invalid_op
jmp common_exception
-END(invalid_op)
+SYM_CODE_END(invalid_op)

-ENTRY(coprocessor_segment_overrun)
+SYM_CODE_START(coprocessor_segment_overrun)
ASM_CLAC
pushl $0
pushl $do_coprocessor_segment_overrun
jmp common_exception
-END(coprocessor_segment_overrun)
+SYM_CODE_END(coprocessor_segment_overrun)

-ENTRY(invalid_TSS)
+SYM_CODE_START(invalid_TSS)
ASM_CLAC
pushl $do_invalid_TSS
jmp common_exception
-END(invalid_TSS)
+SYM_CODE_END(invalid_TSS)

-ENTRY(segment_not_present)
+SYM_CODE_START(segment_not_present)
ASM_CLAC
pushl $do_segment_not_present
jmp common_exception
-END(segment_not_present)
+SYM_CODE_END(segment_not_present)

-ENTRY(stack_segment)
+SYM_CODE_START(stack_segment)
ASM_CLAC
pushl $do_stack_segment
jmp common_exception
-END(stack_segment)
+SYM_CODE_END(stack_segment)

-ENTRY(alignment_check)
+SYM_CODE_START(alignment_check)
ASM_CLAC
pushl $do_alignment_check
jmp common_exception
-END(alignment_check)
+SYM_CODE_END(alignment_check)

-ENTRY(divide_error)
+SYM_CODE_START(divide_error)
ASM_CLAC
pushl $0 # no error code
pushl $do_divide_error
jmp common_exception
-END(divide_error)
+SYM_CODE_END(divide_error)

#ifdef CONFIG_X86_MCE
-ENTRY(machine_check)
+SYM_CODE_START(machine_check)
ASM_CLAC
pushl $0
pushl machine_check_vector
jmp common_exception
-END(machine_check)
+SYM_CODE_END(machine_check)
#endif

-ENTRY(spurious_interrupt_bug)
+SYM_CODE_START(spurious_interrupt_bug)
ASM_CLAC
pushl $0
pushl $do_spurious_interrupt_bug
jmp common_exception
-END(spurious_interrupt_bug)
+SYM_CODE_END(spurious_interrupt_bug)

#ifdef CONFIG_XEN
ENTRY(xen_hypervisor_callback)
@@ -915,12 +915,12 @@ BUILD_INTERRUPT3(hv_stimer0_callback_vector, HYPERV_STIMER0_VECTOR,

#endif /* CONFIG_HYPERV */

-ENTRY(page_fault)
+SYM_CODE_START(page_fault)
ASM_CLAC
pushl $do_page_fault
ALIGN
jmp common_exception
-END(page_fault)
+SYM_CODE_END(page_fault)

SYM_CODE_START_LOCAL_NOALIGN(common_exception)
/* the function address is in %gs's slot on the stack */
@@ -954,7 +954,7 @@ SYM_CODE_START_LOCAL_NOALIGN(common_exception)
jmp ret_from_exception
SYM_CODE_END(common_exception)

-ENTRY(debug)
+SYM_CODE_START(debug)
/*
* #DB can happen at the first instruction of
* entry_SYSENTER_32 or in Xen's SYSENTER prologue. If this
@@ -990,7 +990,7 @@ ENTRY(debug)
call do_debug
movl %ebx, %esp
jmp ret_from_exception
-END(debug)
+SYM_CODE_END(debug)

/*
* NMI is doubly nasty. It can happen on the first instruction of
@@ -999,7 +999,7 @@ END(debug)
* switched stacks. We handle both conditions by simply checking whether we
* interrupted kernel code running on the SYSENTER stack.
*/
-ENTRY(nmi)
+SYM_CODE_START(nmi)
ASM_CLAC
#ifdef CONFIG_X86_ESPFIX32
pushl %eax
@@ -1059,9 +1059,9 @@ ENTRY(nmi)
lss 12+4(%esp), %esp # back to espfix stack
jmp .Lirq_return
#endif
-END(nmi)
+SYM_CODE_END(nmi)

-ENTRY(int3)
+SYM_CODE_START(int3)
ASM_CLAC
pushl $-1 # mark this as an int
SAVE_ALL
@@ -1071,22 +1071,22 @@ ENTRY(int3)
movl %esp, %eax # pt_regs pointer
call do_int3
jmp ret_from_exception
-END(int3)
+SYM_CODE_END(int3)

-ENTRY(general_protection)
+SYM_CODE_START(general_protection)
pushl $do_general_protection
jmp common_exception
-END(general_protection)
+SYM_CODE_END(general_protection)

#ifdef CONFIG_KVM_GUEST
-ENTRY(async_page_fault)
+SYM_CODE_START(async_page_fault)
ASM_CLAC
pushl $do_async_page_fault
jmp common_exception
-END(async_page_fault)
+SYM_CODE_END(async_page_fault)
#endif

-ENTRY(rewind_stack_do_exit)
+SYM_CODE_START(rewind_stack_do_exit)
/* Prevent any naive code from trying to unwind to our caller. */
xorl %ebp, %ebp

@@ -1095,4 +1095,4 @@ ENTRY(rewind_stack_do_exit)

call do_exit
1: jmp 1b
-END(rewind_stack_do_exit)
+SYM_CODE_END(rewind_stack_do_exit)
diff --git a/arch/x86/kernel/ftrace_32.S b/arch/x86/kernel/ftrace_32.S
index f4dca7df8ad6..f519c22f6f9e 100644
--- a/arch/x86/kernel/ftrace_32.S
+++ b/arch/x86/kernel/ftrace_32.S
@@ -35,7 +35,7 @@ SYM_FUNC_START(function_hook)
ret
SYM_FUNC_END(function_hook)

-ENTRY(ftrace_caller)
+SYM_CODE_START(ftrace_caller)

#ifdef USING_FRAME_POINTER
# ifdef CC_USING_FENTRY
@@ -100,7 +100,7 @@ ftrace_graph_call:
/* This is weak to keep gas from relaxing the jumps */
WEAK(ftrace_stub)
ret
-END(ftrace_caller)
+SYM_CODE_END(ftrace_caller)

SYM_CODE_START(ftrace_regs_caller)
/*
@@ -173,7 +173,7 @@ SYM_INNER_LABEL(ftrace_regs_call, SYM_L_GLOBAL)
SYM_CODE_END(ftrace_regs_caller)
#else /* ! CONFIG_DYNAMIC_FTRACE */

-ENTRY(function_hook)
+SYM_CODE_START(function_hook)
cmpl $__PAGE_OFFSET, %esp
jb ftrace_stub /* Paging not enabled yet? */

@@ -206,11 +206,11 @@ ftrace_stub:
popl %ecx
popl %eax
jmp ftrace_stub
-END(function_hook)
+SYM_CODE_END(function_hook)
#endif /* CONFIG_DYNAMIC_FTRACE */

#ifdef CONFIG_FUNCTION_GRAPH_TRACER
-ENTRY(ftrace_graph_caller)
+SYM_CODE_START(ftrace_graph_caller)
pushl %eax
pushl %ecx
pushl %edx
@@ -229,7 +229,7 @@ ENTRY(ftrace_graph_caller)
popl %ecx
popl %eax
ret
-END(ftrace_graph_caller)
+SYM_CODE_END(ftrace_graph_caller)

.globl return_to_handler
return_to_handler:
diff --git a/include/linux/linkage.h b/include/linux/linkage.h
index a57da818d88f..1b06f6b45198 100644
--- a/include/linux/linkage.h
+++ b/include/linux/linkage.h
@@ -120,11 +120,13 @@
SYM_FUNC_START_WEAK_NOALIGN(name)
#endif

+#ifndef CONFIG_X86
#ifndef END
/* deprecated, use SYM_FUNC_END, SYM_DATA_END, or SYM_END */
#define END(name) \
.size name, .-name
#endif
+#endif /* CONFIG_X86 */

#ifndef CONFIG_X86_64
/* If symbol 'name' is treated as a subroutine (gets called, and returns)
--
2.16.3


2018-05-18 09:20:46

by Jiri Slaby

Subject: [PATCH v6 24/28] x86_64/asm: change all ENTRY+ENDPROC to SYM_FUNC_*

These are all functions that are invoked from elsewhere, so annotate
them as global using the new SYM_FUNC_START, and replace their ENDPROCs
with SYM_FUNC_END.

Also make sure ENTRY/ENDPROC is no longer defined on X86_64, given these
were the last users.
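
The linkage.h part presumably mirrors the guards visible in the later
x86_32 patch, i.e. roughly (a sketch only; the exact hunk is at the end
of this patch):

  +#ifndef CONFIG_X86_64
   #ifndef ENTRY
   /* deprecated, use SYM_FUNC_START */
   #define ENTRY(name) \
          SYM_FUNC_START(name)
   #endif
  +#endif /* CONFIG_X86_64 */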

Signed-off-by: Jiri Slaby <[email protected]>
Reviewed-by: Rafael J. Wysocki <[email protected]> [hibernate]
Reviewed-by: Boris Ostrovsky <[email protected]> [xen bits]
Cc: "H. Peter Anvin" <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: [email protected]
Cc: Herbert Xu <[email protected]>
Cc: "David S. Miller" <[email protected]>
Cc: "Rafael J. Wysocki" <[email protected]>
Cc: Len Brown <[email protected]>
Cc: Pavel Machek <[email protected]>
Cc: Matt Fleming <[email protected]>
Cc: Ard Biesheuvel <[email protected]>
Cc: Boris Ostrovsky <[email protected]>
Cc: Juergen Gross <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
---
arch/x86/boot/compressed/efi_thunk_64.S | 4 +-
arch/x86/boot/compressed/head_64.S | 16 +++---
arch/x86/boot/compressed/mem_encrypt.S | 8 +--
arch/x86/crypto/aes-i586-asm_32.S | 8 +--
arch/x86/crypto/aes-x86_64-asm_64.S | 4 +-
arch/x86/crypto/aes_ctrby8_avx-x86_64.S | 12 ++---
arch/x86/crypto/aesni-intel_asm.S | 60 +++++++++++-----------
arch/x86/crypto/aesni-intel_avx-x86_64.S | 24 ++++-----
arch/x86/crypto/blowfish-x86_64-asm_64.S | 16 +++---
arch/x86/crypto/camellia-aesni-avx-asm_64.S | 24 ++++-----
arch/x86/crypto/camellia-aesni-avx2-asm_64.S | 24 ++++-----
arch/x86/crypto/camellia-x86_64-asm_64.S | 16 +++---
arch/x86/crypto/cast5-avx-x86_64-asm_64.S | 16 +++---
arch/x86/crypto/cast6-avx-x86_64-asm_64.S | 24 ++++-----
arch/x86/crypto/chacha20-avx2-x86_64.S | 4 +-
arch/x86/crypto/chacha20-ssse3-x86_64.S | 8 +--
arch/x86/crypto/crc32-pclmul_asm.S | 4 +-
arch/x86/crypto/crc32c-pcl-intel-asm_64.S | 4 +-
arch/x86/crypto/crct10dif-pcl-asm_64.S | 4 +-
arch/x86/crypto/des3_ede-asm_64.S | 8 +--
arch/x86/crypto/ghash-clmulni-intel_asm.S | 8 +--
arch/x86/crypto/poly1305-avx2-x86_64.S | 4 +-
arch/x86/crypto/poly1305-sse2-x86_64.S | 8 +--
arch/x86/crypto/salsa20-x86_64-asm_64.S | 4 +-
arch/x86/crypto/serpent-avx-x86_64-asm_64.S | 24 ++++-----
arch/x86/crypto/serpent-avx2-asm_64.S | 24 ++++-----
arch/x86/crypto/serpent-sse2-x86_64-asm_64.S | 8 +--
arch/x86/crypto/sha1-mb/sha1_mb_mgr_flush_avx2.S | 8 +--
arch/x86/crypto/sha1-mb/sha1_mb_mgr_submit_avx2.S | 4 +-
arch/x86/crypto/sha1-mb/sha1_x8_avx2.S | 4 +-
arch/x86/crypto/sha1_avx2_x86_64_asm.S | 4 +-
arch/x86/crypto/sha1_ni_asm.S | 4 +-
arch/x86/crypto/sha1_ssse3_asm.S | 4 +-
arch/x86/crypto/sha256-avx-asm.S | 4 +-
arch/x86/crypto/sha256-avx2-asm.S | 4 +-
.../crypto/sha256-mb/sha256_mb_mgr_flush_avx2.S | 8 +--
.../crypto/sha256-mb/sha256_mb_mgr_submit_avx2.S | 4 +-
arch/x86/crypto/sha256-mb/sha256_x8_avx2.S | 4 +-
arch/x86/crypto/sha256-ssse3-asm.S | 4 +-
arch/x86/crypto/sha256_ni_asm.S | 4 +-
arch/x86/crypto/sha512-avx-asm.S | 4 +-
arch/x86/crypto/sha512-avx2-asm.S | 4 +-
.../crypto/sha512-mb/sha512_mb_mgr_flush_avx2.S | 8 +--
.../crypto/sha512-mb/sha512_mb_mgr_submit_avx2.S | 4 +-
arch/x86/crypto/sha512-mb/sha512_x4_avx2.S | 4 +-
arch/x86/crypto/sha512-ssse3-asm.S | 4 +-
arch/x86/crypto/twofish-avx-x86_64-asm_64.S | 24 ++++-----
arch/x86/crypto/twofish-x86_64-asm_64-3way.S | 8 +--
arch/x86/crypto/twofish-x86_64-asm_64.S | 8 +--
arch/x86/entry/entry_64.S | 10 ++--
arch/x86/entry/entry_64_compat.S | 4 +-
arch/x86/kernel/acpi/wakeup_64.S | 8 +--
arch/x86/kernel/ftrace_64.S | 20 ++++----
arch/x86/kernel/head_64.S | 12 ++---
arch/x86/lib/checksum_32.S | 8 +--
arch/x86/lib/clear_page_64.S | 12 ++---
arch/x86/lib/cmpxchg16b_emu.S | 4 +-
arch/x86/lib/cmpxchg8b_emu.S | 4 +-
arch/x86/lib/copy_page_64.S | 4 +-
arch/x86/lib/copy_user_64.S | 16 +++---
arch/x86/lib/csum-copy_64.S | 4 +-
arch/x86/lib/getuser.S | 16 +++---
arch/x86/lib/hweight.S | 8 +--
arch/x86/lib/iomap_copy_64.S | 4 +-
arch/x86/lib/memcpy_64.S | 4 +-
arch/x86/lib/memmove_64.S | 4 +-
arch/x86/lib/memset_64.S | 4 +-
arch/x86/lib/msr-reg.S | 8 +--
arch/x86/lib/putuser.S | 16 +++---
arch/x86/lib/retpoline.S | 4 +-
arch/x86/lib/rwsem.S | 24 ++++-----
arch/x86/mm/mem_encrypt_boot.S | 8 +--
arch/x86/platform/efi/efi_stub_64.S | 4 +-
arch/x86/platform/efi/efi_thunk_64.S | 4 +-
arch/x86/power/hibernate_asm_64.S | 8 +--
arch/x86/xen/xen-asm.S | 20 ++++----
arch/x86/xen/xen-asm_64.S | 16 +++---
include/linux/linkage.h | 4 ++
78 files changed, 381 insertions(+), 377 deletions(-)

diff --git a/arch/x86/boot/compressed/efi_thunk_64.S b/arch/x86/boot/compressed/efi_thunk_64.S
index 31312070db22..593913692d16 100644
--- a/arch/x86/boot/compressed/efi_thunk_64.S
+++ b/arch/x86/boot/compressed/efi_thunk_64.S
@@ -23,7 +23,7 @@

.code64
.text
-ENTRY(efi64_thunk)
+SYM_FUNC_START(efi64_thunk)
push %rbp
push %rbx

@@ -97,7 +97,7 @@ ENTRY(efi64_thunk)
pop %rbx
pop %rbp
ret
-ENDPROC(efi64_thunk)
+SYM_FUNC_END(efi64_thunk)

SYM_FUNC_START_LOCAL(efi_exit32)
movq func_rt_ptr(%rip), %rax
diff --git a/arch/x86/boot/compressed/head_64.S b/arch/x86/boot/compressed/head_64.S
index d056c789f90d..109d2e00650b 100644
--- a/arch/x86/boot/compressed/head_64.S
+++ b/arch/x86/boot/compressed/head_64.S
@@ -45,7 +45,7 @@

__HEAD
.code32
-ENTRY(startup_32)
+SYM_FUNC_START(startup_32)
/*
* 32bit entry is 0 and it is ABI so immutable!
* If we come here directly from a bootloader,
@@ -222,11 +222,11 @@ ENTRY(startup_32)

/* Jump from 32bit compatibility mode into 64bit mode. */
lret
-ENDPROC(startup_32)
+SYM_FUNC_END(startup_32)

#ifdef CONFIG_EFI_MIXED
.org 0x190
-ENTRY(efi32_stub_entry)
+SYM_FUNC_START(efi32_stub_entry)
add $0x4, %esp /* Discard return address */
popl %ecx
popl %edx
@@ -245,7 +245,7 @@ ENTRY(efi32_stub_entry)
movl %eax, efi_config(%ebp)

jmp startup_32
-ENDPROC(efi32_stub_entry)
+SYM_FUNC_END(efi32_stub_entry)
#endif

.code64
@@ -405,7 +405,7 @@ SYM_CODE_END(startup_64)
#ifdef CONFIG_EFI_STUB

/* The entry point for the PE/COFF executable is efi_pe_entry. */
-ENTRY(efi_pe_entry)
+SYM_FUNC_START(efi_pe_entry)
movq %rcx, efi64_config(%rip) /* Handle */
movq %rdx, efi64_config+8(%rip) /* EFI System table pointer */

@@ -454,10 +454,10 @@ fail:
movl BP_code32_start(%esi), %eax
leaq startup_64(%rax), %rax
jmp *%rax
-ENDPROC(efi_pe_entry)
+SYM_FUNC_END(efi_pe_entry)

.org 0x390
-ENTRY(efi64_stub_entry)
+SYM_FUNC_START(efi64_stub_entry)
movq %rdi, efi64_config(%rip) /* Handle */
movq %rsi, efi64_config+8(%rip) /* EFI System table pointer */

@@ -466,7 +466,7 @@ ENTRY(efi64_stub_entry)

movq %rdx, %rsi
jmp handover_entry
-ENDPROC(efi64_stub_entry)
+SYM_FUNC_END(efi64_stub_entry)
#endif

.text
diff --git a/arch/x86/boot/compressed/mem_encrypt.S b/arch/x86/boot/compressed/mem_encrypt.S
index fabed28d2edd..ebf82e1f9300 100644
--- a/arch/x86/boot/compressed/mem_encrypt.S
+++ b/arch/x86/boot/compressed/mem_encrypt.S
@@ -18,7 +18,7 @@

.text
.code32
-ENTRY(get_sev_encryption_bit)
+SYM_FUNC_START(get_sev_encryption_bit)
xor %eax, %eax

#ifdef CONFIG_AMD_MEM_ENCRYPT
@@ -85,10 +85,10 @@ ENTRY(get_sev_encryption_bit)
#endif /* CONFIG_AMD_MEM_ENCRYPT */

ret
-ENDPROC(get_sev_encryption_bit)
+SYM_FUNC_END(get_sev_encryption_bit)

.code64
-ENTRY(set_sev_encryption_mask)
+SYM_FUNC_START(set_sev_encryption_mask)
#ifdef CONFIG_AMD_MEM_ENCRYPT
push %rbp
push %rdx
@@ -110,7 +110,7 @@ ENTRY(set_sev_encryption_mask)

xor %rax, %rax
ret
-ENDPROC(set_sev_encryption_mask)
+SYM_FUNC_END(set_sev_encryption_mask)

.data
SYM_DATA_LOCAL(enc_bit, .int 0xffffffff)
diff --git a/arch/x86/crypto/aes-i586-asm_32.S b/arch/x86/crypto/aes-i586-asm_32.S
index 2849dbc59e11..5b2636c58527 100644
--- a/arch/x86/crypto/aes-i586-asm_32.S
+++ b/arch/x86/crypto/aes-i586-asm_32.S
@@ -223,7 +223,7 @@
.extern crypto_ft_tab
.extern crypto_fl_tab

-ENTRY(aes_enc_blk)
+SYM_FUNC_START(aes_enc_blk)
push %ebp
mov ctx(%esp),%ebp

@@ -287,7 +287,7 @@ ENTRY(aes_enc_blk)
mov %r0,(%ebp)
pop %ebp
ret
-ENDPROC(aes_enc_blk)
+SYM_FUNC_END(aes_enc_blk)

// AES (Rijndael) Decryption Subroutine
/* void aes_dec_blk(struct crypto_aes_ctx *ctx, u8 *out_blk, const u8 *in_blk) */
@@ -295,7 +295,7 @@ ENDPROC(aes_enc_blk)
.extern crypto_it_tab
.extern crypto_il_tab

-ENTRY(aes_dec_blk)
+SYM_FUNC_START(aes_dec_blk)
push %ebp
mov ctx(%esp),%ebp

@@ -359,4 +359,4 @@ ENTRY(aes_dec_blk)
mov %r0,(%ebp)
pop %ebp
ret
-ENDPROC(aes_dec_blk)
+SYM_FUNC_END(aes_dec_blk)
diff --git a/arch/x86/crypto/aes-x86_64-asm_64.S b/arch/x86/crypto/aes-x86_64-asm_64.S
index 8739cf7795de..22c44ad3ef42 100644
--- a/arch/x86/crypto/aes-x86_64-asm_64.S
+++ b/arch/x86/crypto/aes-x86_64-asm_64.S
@@ -49,7 +49,7 @@
#define R11 %r11

#define prologue(FUNC,KEY,B128,B192,r1,r2,r5,r6,r7,r8,r9,r10,r11) \
- ENTRY(FUNC); \
+ SYM_FUNC_START(FUNC); \
movq r1,r2; \
leaq KEY+48(r8),r9; \
movq r10,r11; \
@@ -75,7 +75,7 @@
movl r7 ## E,8(r9); \
movl r8 ## E,12(r9); \
ret; \
- ENDPROC(FUNC);
+ SYM_FUNC_END(FUNC);

#define round(TAB,OFFSET,r1,r2,r3,r4,r5,r6,r7,r8,ra,rb,rc,rd) \
movzbl r2 ## H,r5 ## E; \
diff --git a/arch/x86/crypto/aes_ctrby8_avx-x86_64.S b/arch/x86/crypto/aes_ctrby8_avx-x86_64.S
index 5f6a5af9c489..ec437db1fa54 100644
--- a/arch/x86/crypto/aes_ctrby8_avx-x86_64.S
+++ b/arch/x86/crypto/aes_ctrby8_avx-x86_64.S
@@ -544,11 +544,11 @@ ddq_add_8:
* aes_ctr_enc_128_avx_by8(void *in, void *iv, void *keys, void *out,
* unsigned int num_bytes)
*/
-ENTRY(aes_ctr_enc_128_avx_by8)
+SYM_FUNC_START(aes_ctr_enc_128_avx_by8)
/* call the aes main loop */
do_aes_ctrmain KEY_128

-ENDPROC(aes_ctr_enc_128_avx_by8)
+SYM_FUNC_END(aes_ctr_enc_128_avx_by8)

/*
* routine to do AES192 CTR enc/decrypt "by8"
@@ -557,11 +557,11 @@ ENDPROC(aes_ctr_enc_128_avx_by8)
* aes_ctr_enc_192_avx_by8(void *in, void *iv, void *keys, void *out,
* unsigned int num_bytes)
*/
-ENTRY(aes_ctr_enc_192_avx_by8)
+SYM_FUNC_START(aes_ctr_enc_192_avx_by8)
/* call the aes main loop */
do_aes_ctrmain KEY_192

-ENDPROC(aes_ctr_enc_192_avx_by8)
+SYM_FUNC_END(aes_ctr_enc_192_avx_by8)

/*
* routine to do AES256 CTR enc/decrypt "by8"
@@ -570,8 +570,8 @@ ENDPROC(aes_ctr_enc_192_avx_by8)
* aes_ctr_enc_256_avx_by8(void *in, void *iv, void *keys, void *out,
* unsigned int num_bytes)
*/
-ENTRY(aes_ctr_enc_256_avx_by8)
+SYM_FUNC_START(aes_ctr_enc_256_avx_by8)
/* call the aes main loop */
do_aes_ctrmain KEY_256

-ENDPROC(aes_ctr_enc_256_avx_by8)
+SYM_FUNC_END(aes_ctr_enc_256_avx_by8)
diff --git a/arch/x86/crypto/aesni-intel_asm.S b/arch/x86/crypto/aesni-intel_asm.S
index c85ecb163c78..8a0b154d3a9f 100644
--- a/arch/x86/crypto/aesni-intel_asm.S
+++ b/arch/x86/crypto/aesni-intel_asm.S
@@ -1596,7 +1596,7 @@ _esb_loop_\@:
* poly = x^128 + x^127 + x^126 + x^121 + 1
*
*****************************************************************************/
-ENTRY(aesni_gcm_dec)
+SYM_FUNC_START(aesni_gcm_dec)
FUNC_SAVE

GCM_INIT %arg6, arg7, arg8, arg9
@@ -1604,7 +1604,7 @@ ENTRY(aesni_gcm_dec)
GCM_COMPLETE arg10, arg11
FUNC_RESTORE
ret
-ENDPROC(aesni_gcm_dec)
+SYM_FUNC_END(aesni_gcm_dec)


/*****************************************************************************
@@ -1684,7 +1684,7 @@ ENDPROC(aesni_gcm_dec)
*
* poly = x^128 + x^127 + x^126 + x^121 + 1
***************************************************************************/
-ENTRY(aesni_gcm_enc)
+SYM_FUNC_START(aesni_gcm_enc)
FUNC_SAVE

GCM_INIT %arg6, arg7, arg8, arg9
@@ -1693,7 +1693,7 @@ ENTRY(aesni_gcm_enc)
GCM_COMPLETE arg10, arg11
FUNC_RESTORE
ret
-ENDPROC(aesni_gcm_enc)
+SYM_FUNC_END(aesni_gcm_enc)

/*****************************************************************************
* void aesni_gcm_init(void *aes_ctx, // AES Key schedule. Starts on a 16 byte boundary.
@@ -1706,12 +1706,12 @@ ENDPROC(aesni_gcm_enc)
* const u8 *aad, // Additional Authentication Data (AAD)
* u64 aad_len) // Length of AAD in bytes.
*/
-ENTRY(aesni_gcm_init)
+SYM_FUNC_START(aesni_gcm_init)
FUNC_SAVE
GCM_INIT %arg3, %arg4,%arg5, %arg6
FUNC_RESTORE
ret
-ENDPROC(aesni_gcm_init)
+SYM_FUNC_END(aesni_gcm_init)

/*****************************************************************************
* void aesni_gcm_enc_update(void *aes_ctx, // AES Key schedule. Starts on a 16 byte boundary.
@@ -1721,12 +1721,12 @@ ENDPROC(aesni_gcm_init)
* const u8 *in, // Plaintext input
* u64 plaintext_len, // Length of data in bytes for encryption.
*/
-ENTRY(aesni_gcm_enc_update)
+SYM_FUNC_START(aesni_gcm_enc_update)
FUNC_SAVE
GCM_ENC_DEC enc
FUNC_RESTORE
ret
-ENDPROC(aesni_gcm_enc_update)
+SYM_FUNC_END(aesni_gcm_enc_update)

/*****************************************************************************
* void aesni_gcm_dec_update(void *aes_ctx, // AES Key schedule. Starts on a 16 byte boundary.
@@ -1736,12 +1736,12 @@ ENDPROC(aesni_gcm_enc_update)
* const u8 *in, // Plaintext input
* u64 plaintext_len, // Length of data in bytes for encryption.
*/
-ENTRY(aesni_gcm_dec_update)
+SYM_FUNC_START(aesni_gcm_dec_update)
FUNC_SAVE
GCM_ENC_DEC dec
FUNC_RESTORE
ret
-ENDPROC(aesni_gcm_dec_update)
+SYM_FUNC_END(aesni_gcm_dec_update)

/*****************************************************************************
* void aesni_gcm_finalize(void *aes_ctx, // AES Key schedule. Starts on a 16 byte boundary.
@@ -1751,12 +1751,12 @@ ENDPROC(aesni_gcm_dec_update)
* u64 auth_tag_len); // Authenticated Tag Length in bytes. Valid values are 16 (most likely),
* // 12 or 8.
*/
-ENTRY(aesni_gcm_finalize)
+SYM_FUNC_START(aesni_gcm_finalize)
FUNC_SAVE
GCM_COMPLETE %arg3 %arg4
FUNC_RESTORE
ret
-ENDPROC(aesni_gcm_finalize)
+SYM_FUNC_END(aesni_gcm_finalize)

#endif

@@ -1834,7 +1834,7 @@ SYM_FUNC_END(_key_expansion_256b)
* int aesni_set_key(struct crypto_aes_ctx *ctx, const u8 *in_key,
* unsigned int key_len)
*/
-ENTRY(aesni_set_key)
+SYM_FUNC_START(aesni_set_key)
FRAME_BEGIN
#ifndef __x86_64__
pushl KEYP
@@ -1943,12 +1943,12 @@ ENTRY(aesni_set_key)
#endif
FRAME_END
ret
-ENDPROC(aesni_set_key)
+SYM_FUNC_END(aesni_set_key)

/*
* void aesni_enc(struct crypto_aes_ctx *ctx, u8 *dst, const u8 *src)
*/
-ENTRY(aesni_enc)
+SYM_FUNC_START(aesni_enc)
FRAME_BEGIN
#ifndef __x86_64__
pushl KEYP
@@ -1967,7 +1967,7 @@ ENTRY(aesni_enc)
#endif
FRAME_END
ret
-ENDPROC(aesni_enc)
+SYM_FUNC_END(aesni_enc)

/*
* _aesni_enc1: internal ABI
@@ -2137,7 +2137,7 @@ SYM_FUNC_END(_aesni_enc4)
/*
* void aesni_dec (struct crypto_aes_ctx *ctx, u8 *dst, const u8 *src)
*/
-ENTRY(aesni_dec)
+SYM_FUNC_START(aesni_dec)
FRAME_BEGIN
#ifndef __x86_64__
pushl KEYP
@@ -2157,7 +2157,7 @@ ENTRY(aesni_dec)
#endif
FRAME_END
ret
-ENDPROC(aesni_dec)
+SYM_FUNC_END(aesni_dec)

/*
* _aesni_dec1: internal ABI
@@ -2328,7 +2328,7 @@ SYM_FUNC_END(_aesni_dec4)
* void aesni_ecb_enc(struct crypto_aes_ctx *ctx, const u8 *dst, u8 *src,
* size_t len)
*/
-ENTRY(aesni_ecb_enc)
+SYM_FUNC_START(aesni_ecb_enc)
FRAME_BEGIN
#ifndef __x86_64__
pushl LEN
@@ -2382,13 +2382,13 @@ ENTRY(aesni_ecb_enc)
#endif
FRAME_END
ret
-ENDPROC(aesni_ecb_enc)
+SYM_FUNC_END(aesni_ecb_enc)

/*
* void aesni_ecb_dec(struct crypto_aes_ctx *ctx, const u8 *dst, u8 *src,
* size_t len);
*/
-ENTRY(aesni_ecb_dec)
+SYM_FUNC_START(aesni_ecb_dec)
FRAME_BEGIN
#ifndef __x86_64__
pushl LEN
@@ -2443,13 +2443,13 @@ ENTRY(aesni_ecb_dec)
#endif
FRAME_END
ret
-ENDPROC(aesni_ecb_dec)
+SYM_FUNC_END(aesni_ecb_dec)

/*
* void aesni_cbc_enc(struct crypto_aes_ctx *ctx, const u8 *dst, u8 *src,
* size_t len, u8 *iv)
*/
-ENTRY(aesni_cbc_enc)
+SYM_FUNC_START(aesni_cbc_enc)
FRAME_BEGIN
#ifndef __x86_64__
pushl IVP
@@ -2487,13 +2487,13 @@ ENTRY(aesni_cbc_enc)
#endif
FRAME_END
ret
-ENDPROC(aesni_cbc_enc)
+SYM_FUNC_END(aesni_cbc_enc)

/*
* void aesni_cbc_dec(struct crypto_aes_ctx *ctx, const u8 *dst, u8 *src,
* size_t len, u8 *iv)
*/
-ENTRY(aesni_cbc_dec)
+SYM_FUNC_START(aesni_cbc_dec)
FRAME_BEGIN
#ifndef __x86_64__
pushl IVP
@@ -2580,7 +2580,7 @@ ENTRY(aesni_cbc_dec)
#endif
FRAME_END
ret
-ENDPROC(aesni_cbc_dec)
+SYM_FUNC_END(aesni_cbc_dec)

#ifdef __x86_64__
.pushsection .rodata
@@ -2642,7 +2642,7 @@ SYM_FUNC_END(_aesni_inc)
* void aesni_ctr_enc(struct crypto_aes_ctx *ctx, const u8 *dst, u8 *src,
* size_t len, u8 *iv)
*/
-ENTRY(aesni_ctr_enc)
+SYM_FUNC_START(aesni_ctr_enc)
FRAME_BEGIN
cmp $16, LEN
jb .Lctr_enc_just_ret
@@ -2699,7 +2699,7 @@ ENTRY(aesni_ctr_enc)
.Lctr_enc_just_ret:
FRAME_END
ret
-ENDPROC(aesni_ctr_enc)
+SYM_FUNC_END(aesni_ctr_enc)

/*
* _aesni_gf128mul_x_ble: internal ABI
@@ -2723,7 +2723,7 @@ ENDPROC(aesni_ctr_enc)
* void aesni_xts_crypt8(struct crypto_aes_ctx *ctx, const u8 *dst, u8 *src,
* bool enc, u8 *iv)
*/
-ENTRY(aesni_xts_crypt8)
+SYM_FUNC_START(aesni_xts_crypt8)
FRAME_BEGIN
cmpb $0, %cl
movl $0, %ecx
@@ -2827,6 +2827,6 @@ ENTRY(aesni_xts_crypt8)

FRAME_END
ret
-ENDPROC(aesni_xts_crypt8)
+SYM_FUNC_END(aesni_xts_crypt8)

#endif
diff --git a/arch/x86/crypto/aesni-intel_avx-x86_64.S b/arch/x86/crypto/aesni-intel_avx-x86_64.S
index faecb1518bf8..ee056694e54d 100644
--- a/arch/x86/crypto/aesni-intel_avx-x86_64.S
+++ b/arch/x86/crypto/aesni-intel_avx-x86_64.S
@@ -1531,7 +1531,7 @@ _return_T_done\@:
# (gcm_data *my_ctx_data,
# u8 *hash_subkey)# /* H, the Hash sub key input. Data starts on a 16-byte boundary. */
#############################################################
-ENTRY(aesni_gcm_precomp_avx_gen2)
+SYM_FUNC_START(aesni_gcm_precomp_avx_gen2)
#the number of pushes must equal STACK_OFFSET
push %r12
push %r13
@@ -1574,7 +1574,7 @@ ENTRY(aesni_gcm_precomp_avx_gen2)
pop %r13
pop %r12
ret
-ENDPROC(aesni_gcm_precomp_avx_gen2)
+SYM_FUNC_END(aesni_gcm_precomp_avx_gen2)

###############################################################################
#void aesni_gcm_enc_avx_gen2(
@@ -1592,10 +1592,10 @@ ENDPROC(aesni_gcm_precomp_avx_gen2)
# u64 auth_tag_len)# /* Authenticated Tag Length in bytes.
# Valid values are 16 (most likely), 12 or 8. */
###############################################################################
-ENTRY(aesni_gcm_enc_avx_gen2)
+SYM_FUNC_START(aesni_gcm_enc_avx_gen2)
GCM_ENC_DEC_AVX ENC
ret
-ENDPROC(aesni_gcm_enc_avx_gen2)
+SYM_FUNC_END(aesni_gcm_enc_avx_gen2)

###############################################################################
#void aesni_gcm_dec_avx_gen2(
@@ -1613,10 +1613,10 @@ ENDPROC(aesni_gcm_enc_avx_gen2)
# u64 auth_tag_len)# /* Authenticated Tag Length in bytes.
# Valid values are 16 (most likely), 12 or 8. */
###############################################################################
-ENTRY(aesni_gcm_dec_avx_gen2)
+SYM_FUNC_START(aesni_gcm_dec_avx_gen2)
GCM_ENC_DEC_AVX DEC
ret
-ENDPROC(aesni_gcm_dec_avx_gen2)
+SYM_FUNC_END(aesni_gcm_dec_avx_gen2)
#endif /* CONFIG_AS_AVX */

#ifdef CONFIG_AS_AVX2
@@ -2855,7 +2855,7 @@ _return_T_done\@:
# u8 *hash_subkey)# /* H, the Hash sub key input.
# Data starts on a 16-byte boundary. */
#############################################################
-ENTRY(aesni_gcm_precomp_avx_gen4)
+SYM_FUNC_START(aesni_gcm_precomp_avx_gen4)
#the number of pushes must equal STACK_OFFSET
push %r12
push %r13
@@ -2898,7 +2898,7 @@ ENTRY(aesni_gcm_precomp_avx_gen4)
pop %r13
pop %r12
ret
-ENDPROC(aesni_gcm_precomp_avx_gen4)
+SYM_FUNC_END(aesni_gcm_precomp_avx_gen4)


###############################################################################
@@ -2917,10 +2917,10 @@ ENDPROC(aesni_gcm_precomp_avx_gen4)
# u64 auth_tag_len)# /* Authenticated Tag Length in bytes.
# Valid values are 16 (most likely), 12 or 8. */
###############################################################################
-ENTRY(aesni_gcm_enc_avx_gen4)
+SYM_FUNC_START(aesni_gcm_enc_avx_gen4)
GCM_ENC_DEC_AVX2 ENC
ret
-ENDPROC(aesni_gcm_enc_avx_gen4)
+SYM_FUNC_END(aesni_gcm_enc_avx_gen4)

###############################################################################
#void aesni_gcm_dec_avx_gen4(
@@ -2938,9 +2938,9 @@ ENDPROC(aesni_gcm_enc_avx_gen4)
# u64 auth_tag_len)# /* Authenticated Tag Length in bytes.
# Valid values are 16 (most likely), 12 or 8. */
###############################################################################
-ENTRY(aesni_gcm_dec_avx_gen4)
+SYM_FUNC_START(aesni_gcm_dec_avx_gen4)
GCM_ENC_DEC_AVX2 DEC
ret
-ENDPROC(aesni_gcm_dec_avx_gen4)
+SYM_FUNC_END(aesni_gcm_dec_avx_gen4)

#endif /* CONFIG_AS_AVX2 */
diff --git a/arch/x86/crypto/blowfish-x86_64-asm_64.S b/arch/x86/crypto/blowfish-x86_64-asm_64.S
index 8c1fcb6bad21..70c34850ee0b 100644
--- a/arch/x86/crypto/blowfish-x86_64-asm_64.S
+++ b/arch/x86/crypto/blowfish-x86_64-asm_64.S
@@ -118,7 +118,7 @@
bswapq RX0; \
xorq RX0, (RIO);

-ENTRY(__blowfish_enc_blk)
+SYM_FUNC_START(__blowfish_enc_blk)
/* input:
* %rdi: ctx
* %rsi: dst
@@ -154,9 +154,9 @@ ENTRY(__blowfish_enc_blk)
.L__enc_xor:
xor_block();
ret;
-ENDPROC(__blowfish_enc_blk)
+SYM_FUNC_END(__blowfish_enc_blk)

-ENTRY(blowfish_dec_blk)
+SYM_FUNC_START(blowfish_dec_blk)
/* input:
* %rdi: ctx
* %rsi: dst
@@ -186,7 +186,7 @@ ENTRY(blowfish_dec_blk)
movq %r11, %r12;

ret;
-ENDPROC(blowfish_dec_blk)
+SYM_FUNC_END(blowfish_dec_blk)

/**********************************************************************
4-way blowfish, four blocks parallel
@@ -298,7 +298,7 @@ ENDPROC(blowfish_dec_blk)
bswapq RX3; \
xorq RX3, 24(RIO);

-ENTRY(__blowfish_enc_blk_4way)
+SYM_FUNC_START(__blowfish_enc_blk_4way)
/* input:
* %rdi: ctx
* %rsi: dst
@@ -345,9 +345,9 @@ ENTRY(__blowfish_enc_blk_4way)
popq %rbx;
popq %r12;
ret;
-ENDPROC(__blowfish_enc_blk_4way)
+SYM_FUNC_END(__blowfish_enc_blk_4way)

-ENTRY(blowfish_dec_blk_4way)
+SYM_FUNC_START(blowfish_dec_blk_4way)
/* input:
* %rdi: ctx
* %rsi: dst
@@ -380,4 +380,4 @@ ENTRY(blowfish_dec_blk_4way)
popq %r12;

ret;
-ENDPROC(blowfish_dec_blk_4way)
+SYM_FUNC_END(blowfish_dec_blk_4way)
diff --git a/arch/x86/crypto/camellia-aesni-avx-asm_64.S b/arch/x86/crypto/camellia-aesni-avx-asm_64.S
index f4408ca55fdb..d01ddd73de65 100644
--- a/arch/x86/crypto/camellia-aesni-avx-asm_64.S
+++ b/arch/x86/crypto/camellia-aesni-avx-asm_64.S
@@ -893,7 +893,7 @@ SYM_FUNC_START_LOCAL(__camellia_dec_blk16)
jmp .Ldec_max24;
SYM_FUNC_END(__camellia_dec_blk16)

-ENTRY(camellia_ecb_enc_16way)
+SYM_FUNC_START(camellia_ecb_enc_16way)
/* input:
* %rdi: ctx, CTX
* %rsi: dst (16 blocks)
@@ -916,9 +916,9 @@ ENTRY(camellia_ecb_enc_16way)

FRAME_END
ret;
-ENDPROC(camellia_ecb_enc_16way)
+SYM_FUNC_END(camellia_ecb_enc_16way)

-ENTRY(camellia_ecb_dec_16way)
+SYM_FUNC_START(camellia_ecb_dec_16way)
/* input:
* %rdi: ctx, CTX
* %rsi: dst (16 blocks)
@@ -946,9 +946,9 @@ ENTRY(camellia_ecb_dec_16way)

FRAME_END
ret;
-ENDPROC(camellia_ecb_dec_16way)
+SYM_FUNC_END(camellia_ecb_dec_16way)

-ENTRY(camellia_cbc_dec_16way)
+SYM_FUNC_START(camellia_cbc_dec_16way)
/* input:
* %rdi: ctx, CTX
* %rsi: dst (16 blocks)
@@ -997,7 +997,7 @@ ENTRY(camellia_cbc_dec_16way)

FRAME_END
ret;
-ENDPROC(camellia_cbc_dec_16way)
+SYM_FUNC_END(camellia_cbc_dec_16way)

#define inc_le128(x, minus_one, tmp) \
vpcmpeqq minus_one, x, tmp; \
@@ -1005,7 +1005,7 @@ ENDPROC(camellia_cbc_dec_16way)
vpslldq $8, tmp, tmp; \
vpsubq tmp, x, x;

-ENTRY(camellia_ctr_16way)
+SYM_FUNC_START(camellia_ctr_16way)
/* input:
* %rdi: ctx, CTX
* %rsi: dst (16 blocks)
@@ -1110,7 +1110,7 @@ ENTRY(camellia_ctr_16way)

FRAME_END
ret;
-ENDPROC(camellia_ctr_16way)
+SYM_FUNC_END(camellia_ctr_16way)

#define gf128mul_x_ble(iv, mask, tmp) \
vpsrad $31, iv, tmp; \
@@ -1256,7 +1256,7 @@ SYM_FUNC_START_LOCAL(camellia_xts_crypt_16way)
ret;
SYM_FUNC_END(camellia_xts_crypt_16way)

-ENTRY(camellia_xts_enc_16way)
+SYM_FUNC_START(camellia_xts_enc_16way)
/* input:
* %rdi: ctx, CTX
* %rsi: dst (16 blocks)
@@ -1268,9 +1268,9 @@ ENTRY(camellia_xts_enc_16way)
leaq __camellia_enc_blk16, %r9;

jmp camellia_xts_crypt_16way;
-ENDPROC(camellia_xts_enc_16way)
+SYM_FUNC_END(camellia_xts_enc_16way)

-ENTRY(camellia_xts_dec_16way)
+SYM_FUNC_START(camellia_xts_dec_16way)
/* input:
* %rdi: ctx, CTX
* %rsi: dst (16 blocks)
@@ -1286,4 +1286,4 @@ ENTRY(camellia_xts_dec_16way)
leaq __camellia_dec_blk16, %r9;

jmp camellia_xts_crypt_16way;
-ENDPROC(camellia_xts_dec_16way)
+SYM_FUNC_END(camellia_xts_dec_16way)
diff --git a/arch/x86/crypto/camellia-aesni-avx2-asm_64.S b/arch/x86/crypto/camellia-aesni-avx2-asm_64.S
index 916a3e2b8ea4..85f0a265dee8 100644
--- a/arch/x86/crypto/camellia-aesni-avx2-asm_64.S
+++ b/arch/x86/crypto/camellia-aesni-avx2-asm_64.S
@@ -936,7 +936,7 @@ SYM_FUNC_START_LOCAL(__camellia_dec_blk32)
jmp .Ldec_max24;
SYM_FUNC_END(__camellia_dec_blk32)

-ENTRY(camellia_ecb_enc_32way)
+SYM_FUNC_START(camellia_ecb_enc_32way)
/* input:
* %rdi: ctx, CTX
* %rsi: dst (32 blocks)
@@ -963,9 +963,9 @@ ENTRY(camellia_ecb_enc_32way)

FRAME_END
ret;
-ENDPROC(camellia_ecb_enc_32way)
+SYM_FUNC_END(camellia_ecb_enc_32way)

-ENTRY(camellia_ecb_dec_32way)
+SYM_FUNC_START(camellia_ecb_dec_32way)
/* input:
* %rdi: ctx, CTX
* %rsi: dst (32 blocks)
@@ -997,9 +997,9 @@ ENTRY(camellia_ecb_dec_32way)

FRAME_END
ret;
-ENDPROC(camellia_ecb_dec_32way)
+SYM_FUNC_END(camellia_ecb_dec_32way)

-ENTRY(camellia_cbc_dec_32way)
+SYM_FUNC_START(camellia_cbc_dec_32way)
/* input:
* %rdi: ctx, CTX
* %rsi: dst (32 blocks)
@@ -1065,7 +1065,7 @@ ENTRY(camellia_cbc_dec_32way)

FRAME_END
ret;
-ENDPROC(camellia_cbc_dec_32way)
+SYM_FUNC_END(camellia_cbc_dec_32way)

#define inc_le128(x, minus_one, tmp) \
vpcmpeqq minus_one, x, tmp; \
@@ -1081,7 +1081,7 @@ ENDPROC(camellia_cbc_dec_32way)
vpslldq $8, tmp1, tmp1; \
vpsubq tmp1, x, x;

-ENTRY(camellia_ctr_32way)
+SYM_FUNC_START(camellia_ctr_32way)
/* input:
* %rdi: ctx, CTX
* %rsi: dst (32 blocks)
@@ -1205,7 +1205,7 @@ ENTRY(camellia_ctr_32way)

FRAME_END
ret;
-ENDPROC(camellia_ctr_32way)
+SYM_FUNC_END(camellia_ctr_32way)

#define gf128mul_x_ble(iv, mask, tmp) \
vpsrad $31, iv, tmp; \
@@ -1374,7 +1374,7 @@ SYM_FUNC_START_LOCAL(camellia_xts_crypt_32way)
ret;
SYM_FUNC_END(camellia_xts_crypt_32way)

-ENTRY(camellia_xts_enc_32way)
+SYM_FUNC_START(camellia_xts_enc_32way)
/* input:
* %rdi: ctx, CTX
* %rsi: dst (32 blocks)
@@ -1387,9 +1387,9 @@ ENTRY(camellia_xts_enc_32way)
leaq __camellia_enc_blk32, %r9;

jmp camellia_xts_crypt_32way;
-ENDPROC(camellia_xts_enc_32way)
+SYM_FUNC_END(camellia_xts_enc_32way)

-ENTRY(camellia_xts_dec_32way)
+SYM_FUNC_START(camellia_xts_dec_32way)
/* input:
* %rdi: ctx, CTX
* %rsi: dst (32 blocks)
@@ -1405,4 +1405,4 @@ ENTRY(camellia_xts_dec_32way)
leaq __camellia_dec_blk32, %r9;

jmp camellia_xts_crypt_32way;
-ENDPROC(camellia_xts_dec_32way)
+SYM_FUNC_END(camellia_xts_dec_32way)
diff --git a/arch/x86/crypto/camellia-x86_64-asm_64.S b/arch/x86/crypto/camellia-x86_64-asm_64.S
index 95ba6956a7f6..4d77c9dcddbd 100644
--- a/arch/x86/crypto/camellia-x86_64-asm_64.S
+++ b/arch/x86/crypto/camellia-x86_64-asm_64.S
@@ -190,7 +190,7 @@
bswapq RAB0; \
movq RAB0, 4*2(RIO);

-ENTRY(__camellia_enc_blk)
+SYM_FUNC_START(__camellia_enc_blk)
/* input:
* %rdi: ctx, CTX
* %rsi: dst
@@ -235,9 +235,9 @@ ENTRY(__camellia_enc_blk)

movq RR12, %r12;
ret;
-ENDPROC(__camellia_enc_blk)
+SYM_FUNC_END(__camellia_enc_blk)

-ENTRY(camellia_dec_blk)
+SYM_FUNC_START(camellia_dec_blk)
/* input:
* %rdi: ctx, CTX
* %rsi: dst
@@ -273,7 +273,7 @@ ENTRY(camellia_dec_blk)

movq RR12, %r12;
ret;
-ENDPROC(camellia_dec_blk)
+SYM_FUNC_END(camellia_dec_blk)

/**********************************************************************
2-way camellia
@@ -424,7 +424,7 @@ ENDPROC(camellia_dec_blk)
bswapq RAB1; \
movq RAB1, 12*2(RIO);

-ENTRY(__camellia_enc_blk_2way)
+SYM_FUNC_START(__camellia_enc_blk_2way)
/* input:
* %rdi: ctx, CTX
* %rsi: dst
@@ -471,9 +471,9 @@ ENTRY(__camellia_enc_blk_2way)
movq RR12, %r12;
popq %rbx;
ret;
-ENDPROC(__camellia_enc_blk_2way)
+SYM_FUNC_END(__camellia_enc_blk_2way)

-ENTRY(camellia_dec_blk_2way)
+SYM_FUNC_START(camellia_dec_blk_2way)
/* input:
* %rdi: ctx, CTX
* %rsi: dst
@@ -511,4 +511,4 @@ ENTRY(camellia_dec_blk_2way)
movq RR12, %r12;
movq RXOR, %rbx;
ret;
-ENDPROC(camellia_dec_blk_2way)
+SYM_FUNC_END(camellia_dec_blk_2way)
diff --git a/arch/x86/crypto/cast5-avx-x86_64-asm_64.S b/arch/x86/crypto/cast5-avx-x86_64-asm_64.S
index b26df120413c..3789c61f6166 100644
--- a/arch/x86/crypto/cast5-avx-x86_64-asm_64.S
+++ b/arch/x86/crypto/cast5-avx-x86_64-asm_64.S
@@ -374,7 +374,7 @@ SYM_FUNC_START_LOCAL(__cast5_dec_blk16)
jmp .L__dec_tail;
SYM_FUNC_END(__cast5_dec_blk16)

-ENTRY(cast5_ecb_enc_16way)
+SYM_FUNC_START(cast5_ecb_enc_16way)
/* input:
* %rdi: ctx
* %rsi: dst
@@ -409,9 +409,9 @@ ENTRY(cast5_ecb_enc_16way)
popq %r15;
FRAME_END
ret;
-ENDPROC(cast5_ecb_enc_16way)
+SYM_FUNC_END(cast5_ecb_enc_16way)

-ENTRY(cast5_ecb_dec_16way)
+SYM_FUNC_START(cast5_ecb_dec_16way)
/* input:
* %rdi: ctx
* %rsi: dst
@@ -447,9 +447,9 @@ ENTRY(cast5_ecb_dec_16way)
popq %r15;
FRAME_END
ret;
-ENDPROC(cast5_ecb_dec_16way)
+SYM_FUNC_END(cast5_ecb_dec_16way)

-ENTRY(cast5_cbc_dec_16way)
+SYM_FUNC_START(cast5_cbc_dec_16way)
/* input:
* %rdi: ctx
* %rsi: dst
@@ -499,9 +499,9 @@ ENTRY(cast5_cbc_dec_16way)
popq %r12;
FRAME_END
ret;
-ENDPROC(cast5_cbc_dec_16way)
+SYM_FUNC_END(cast5_cbc_dec_16way)

-ENTRY(cast5_ctr_16way)
+SYM_FUNC_START(cast5_ctr_16way)
/* input:
* %rdi: ctx
* %rsi: dst
@@ -575,4 +575,4 @@ ENTRY(cast5_ctr_16way)
popq %r12;
FRAME_END
ret;
-ENDPROC(cast5_ctr_16way)
+SYM_FUNC_END(cast5_ctr_16way)
diff --git a/arch/x86/crypto/cast6-avx-x86_64-asm_64.S b/arch/x86/crypto/cast6-avx-x86_64-asm_64.S
index 0a68e42a00f9..e38ab4571a6b 100644
--- a/arch/x86/crypto/cast6-avx-x86_64-asm_64.S
+++ b/arch/x86/crypto/cast6-avx-x86_64-asm_64.S
@@ -356,7 +356,7 @@ SYM_FUNC_START_LOCAL(__cast6_dec_blk8)
ret;
SYM_FUNC_END(__cast6_dec_blk8)

-ENTRY(cast6_ecb_enc_8way)
+SYM_FUNC_START(cast6_ecb_enc_8way)
/* input:
* %rdi: ctx
* %rsi: dst
@@ -377,9 +377,9 @@ ENTRY(cast6_ecb_enc_8way)
popq %r15;
FRAME_END
ret;
-ENDPROC(cast6_ecb_enc_8way)
+SYM_FUNC_END(cast6_ecb_enc_8way)

-ENTRY(cast6_ecb_dec_8way)
+SYM_FUNC_START(cast6_ecb_dec_8way)
/* input:
* %rdi: ctx
* %rsi: dst
@@ -400,9 +400,9 @@ ENTRY(cast6_ecb_dec_8way)
popq %r15;
FRAME_END
ret;
-ENDPROC(cast6_ecb_dec_8way)
+SYM_FUNC_END(cast6_ecb_dec_8way)

-ENTRY(cast6_cbc_dec_8way)
+SYM_FUNC_START(cast6_cbc_dec_8way)
/* input:
* %rdi: ctx
* %rsi: dst
@@ -426,9 +426,9 @@ ENTRY(cast6_cbc_dec_8way)
popq %r12;
FRAME_END
ret;
-ENDPROC(cast6_cbc_dec_8way)
+SYM_FUNC_END(cast6_cbc_dec_8way)

-ENTRY(cast6_ctr_8way)
+SYM_FUNC_START(cast6_ctr_8way)
/* input:
* %rdi: ctx, CTX
* %rsi: dst
@@ -454,9 +454,9 @@ ENTRY(cast6_ctr_8way)
popq %r12;
FRAME_END
ret;
-ENDPROC(cast6_ctr_8way)
+SYM_FUNC_END(cast6_ctr_8way)

-ENTRY(cast6_xts_enc_8way)
+SYM_FUNC_START(cast6_xts_enc_8way)
/* input:
* %rdi: ctx, CTX
* %rsi: dst
@@ -481,9 +481,9 @@ ENTRY(cast6_xts_enc_8way)
popq %r15;
FRAME_END
ret;
-ENDPROC(cast6_xts_enc_8way)
+SYM_FUNC_END(cast6_xts_enc_8way)

-ENTRY(cast6_xts_dec_8way)
+SYM_FUNC_START(cast6_xts_dec_8way)
/* input:
* %rdi: ctx, CTX
* %rsi: dst
@@ -508,4 +508,4 @@ ENTRY(cast6_xts_dec_8way)
popq %r15;
FRAME_END
ret;
-ENDPROC(cast6_xts_dec_8way)
+SYM_FUNC_END(cast6_xts_dec_8way)
diff --git a/arch/x86/crypto/chacha20-avx2-x86_64.S b/arch/x86/crypto/chacha20-avx2-x86_64.S
index f3cd26f48332..72c96a6aec8f 100644
--- a/arch/x86/crypto/chacha20-avx2-x86_64.S
+++ b/arch/x86/crypto/chacha20-avx2-x86_64.S
@@ -28,7 +28,7 @@ CTRINC: .octa 0x00000003000000020000000100000000

.text

-ENTRY(chacha20_8block_xor_avx2)
+SYM_FUNC_START(chacha20_8block_xor_avx2)
# %rdi: Input state matrix, s
# %rsi: 8 data blocks output, o
# %rdx: 8 data blocks input, i
@@ -445,4 +445,4 @@ ENTRY(chacha20_8block_xor_avx2)
vzeroupper
lea -8(%r10),%rsp
ret
-ENDPROC(chacha20_8block_xor_avx2)
+SYM_FUNC_END(chacha20_8block_xor_avx2)
diff --git a/arch/x86/crypto/chacha20-ssse3-x86_64.S b/arch/x86/crypto/chacha20-ssse3-x86_64.S
index 512a2b500fd1..950dea7c92d1 100644
--- a/arch/x86/crypto/chacha20-ssse3-x86_64.S
+++ b/arch/x86/crypto/chacha20-ssse3-x86_64.S
@@ -23,7 +23,7 @@ CTRINC: .octa 0x00000003000000020000000100000000

.text

-ENTRY(chacha20_block_xor_ssse3)
+SYM_FUNC_START(chacha20_block_xor_ssse3)
# %rdi: Input state matrix, s
# %rsi: 1 data block output, o
# %rdx: 1 data block input, i
@@ -143,9 +143,9 @@ ENTRY(chacha20_block_xor_ssse3)
movdqu %xmm3,0x30(%rsi)

ret
-ENDPROC(chacha20_block_xor_ssse3)
+SYM_FUNC_END(chacha20_block_xor_ssse3)

-ENTRY(chacha20_4block_xor_ssse3)
+SYM_FUNC_START(chacha20_4block_xor_ssse3)
# %rdi: Input state matrix, s
# %rsi: 4 data blocks output, o
# %rdx: 4 data blocks input, i
@@ -627,4 +627,4 @@ ENTRY(chacha20_4block_xor_ssse3)

lea -8(%r10),%rsp
ret
-ENDPROC(chacha20_4block_xor_ssse3)
+SYM_FUNC_END(chacha20_4block_xor_ssse3)
diff --git a/arch/x86/crypto/crc32-pclmul_asm.S b/arch/x86/crypto/crc32-pclmul_asm.S
index 1c099dc08cc3..9fd28ff65bc2 100644
--- a/arch/x86/crypto/crc32-pclmul_asm.S
+++ b/arch/x86/crypto/crc32-pclmul_asm.S
@@ -103,7 +103,7 @@
* size_t len, uint crc32)
*/

-ENTRY(crc32_pclmul_le_16) /* buffer and buffer size are 16 bytes aligned */
+SYM_FUNC_START(crc32_pclmul_le_16) /* buffer and buffer size are 16 bytes aligned */
movdqa (BUF), %xmm1
movdqa 0x10(BUF), %xmm2
movdqa 0x20(BUF), %xmm3
@@ -238,4 +238,4 @@ fold_64:
PEXTRD 0x01, %xmm1, %eax

ret
-ENDPROC(crc32_pclmul_le_16)
+SYM_FUNC_END(crc32_pclmul_le_16)
diff --git a/arch/x86/crypto/crc32c-pcl-intel-asm_64.S b/arch/x86/crypto/crc32c-pcl-intel-asm_64.S
index d9b734d0c8cc..0e6690e3618c 100644
--- a/arch/x86/crypto/crc32c-pcl-intel-asm_64.S
+++ b/arch/x86/crypto/crc32c-pcl-intel-asm_64.S
@@ -74,7 +74,7 @@
# unsigned int crc_pcl(u8 *buffer, int len, unsigned int crc_init);

.text
-ENTRY(crc_pcl)
+SYM_FUNC_START(crc_pcl)
#define bufp %rdi
#define bufp_dw %edi
#define bufp_w %di
@@ -311,7 +311,7 @@ do_return:
popq %rdi
popq %rbx
ret
-ENDPROC(crc_pcl)
+SYM_FUNC_END(crc_pcl)

.section .rodata, "a", @progbits
################################################################
diff --git a/arch/x86/crypto/crct10dif-pcl-asm_64.S b/arch/x86/crypto/crct10dif-pcl-asm_64.S
index de04d3e98d8d..f56b499541e0 100644
--- a/arch/x86/crypto/crct10dif-pcl-asm_64.S
+++ b/arch/x86/crypto/crct10dif-pcl-asm_64.S
@@ -68,7 +68,7 @@

#define arg1_low32 %edi

-ENTRY(crc_t10dif_pcl)
+SYM_FUNC_START(crc_t10dif_pcl)
.align 16

# adjust the 16-bit initial_crc value, scale it to 32 bits
@@ -552,7 +552,7 @@ _only_less_than_2:

jmp _barrett

-ENDPROC(crc_t10dif_pcl)
+SYM_FUNC_END(crc_t10dif_pcl)

.section .rodata, "a", @progbits
.align 16
diff --git a/arch/x86/crypto/des3_ede-asm_64.S b/arch/x86/crypto/des3_ede-asm_64.S
index 8e49ce117494..82779c08029b 100644
--- a/arch/x86/crypto/des3_ede-asm_64.S
+++ b/arch/x86/crypto/des3_ede-asm_64.S
@@ -171,7 +171,7 @@
movl left##d, (io); \
movl right##d, 4(io);

-ENTRY(des3_ede_x86_64_crypt_blk)
+SYM_FUNC_START(des3_ede_x86_64_crypt_blk)
/* input:
* %rdi: round keys, CTX
* %rsi: dst
@@ -253,7 +253,7 @@ ENTRY(des3_ede_x86_64_crypt_blk)
popq %rbx;

ret;
-ENDPROC(des3_ede_x86_64_crypt_blk)
+SYM_FUNC_END(des3_ede_x86_64_crypt_blk)

/***********************************************************************
* 3-way 3DES
@@ -427,7 +427,7 @@ ENDPROC(des3_ede_x86_64_crypt_blk)
#define __movq(src, dst) \
movq src, dst;

-ENTRY(des3_ede_x86_64_crypt_blk_3way)
+SYM_FUNC_START(des3_ede_x86_64_crypt_blk_3way)
/* input:
* %rdi: ctx, round keys
* %rsi: dst (3 blocks)
@@ -538,7 +538,7 @@ ENTRY(des3_ede_x86_64_crypt_blk_3way)
popq %rbx;

ret;
-ENDPROC(des3_ede_x86_64_crypt_blk_3way)
+SYM_FUNC_END(des3_ede_x86_64_crypt_blk_3way)

.section .rodata, "a", @progbits
.align 16
diff --git a/arch/x86/crypto/ghash-clmulni-intel_asm.S b/arch/x86/crypto/ghash-clmulni-intel_asm.S
index c3db86842578..12e3a850257b 100644
--- a/arch/x86/crypto/ghash-clmulni-intel_asm.S
+++ b/arch/x86/crypto/ghash-clmulni-intel_asm.S
@@ -93,7 +93,7 @@ SYM_FUNC_START_LOCAL(__clmul_gf128mul_ble)
SYM_FUNC_END(__clmul_gf128mul_ble)

/* void clmul_ghash_mul(char *dst, const u128 *shash) */
-ENTRY(clmul_ghash_mul)
+SYM_FUNC_START(clmul_ghash_mul)
FRAME_BEGIN
movups (%rdi), DATA
movups (%rsi), SHASH
@@ -104,13 +104,13 @@ ENTRY(clmul_ghash_mul)
movups DATA, (%rdi)
FRAME_END
ret
-ENDPROC(clmul_ghash_mul)
+SYM_FUNC_END(clmul_ghash_mul)

/*
* void clmul_ghash_update(char *dst, const char *src, unsigned int srclen,
* const u128 *shash);
*/
-ENTRY(clmul_ghash_update)
+SYM_FUNC_START(clmul_ghash_update)
FRAME_BEGIN
cmp $16, %rdx
jb .Lupdate_just_ret # check length
@@ -133,4 +133,4 @@ ENTRY(clmul_ghash_update)
.Lupdate_just_ret:
FRAME_END
ret
-ENDPROC(clmul_ghash_update)
+SYM_FUNC_END(clmul_ghash_update)
diff --git a/arch/x86/crypto/poly1305-avx2-x86_64.S b/arch/x86/crypto/poly1305-avx2-x86_64.S
index 3b6e70d085da..68b0f4386dc4 100644
--- a/arch/x86/crypto/poly1305-avx2-x86_64.S
+++ b/arch/x86/crypto/poly1305-avx2-x86_64.S
@@ -83,7 +83,7 @@ ORMASK: .octa 0x00000000010000000000000001000000
#define d3 %r12
#define d4 %r13

-ENTRY(poly1305_4block_avx2)
+SYM_FUNC_START(poly1305_4block_avx2)
# %rdi: Accumulator h[5]
# %rsi: 64 byte input block m
# %rdx: Poly1305 key r[5]
@@ -385,4 +385,4 @@ ENTRY(poly1305_4block_avx2)
pop %r12
pop %rbx
ret
-ENDPROC(poly1305_4block_avx2)
+SYM_FUNC_END(poly1305_4block_avx2)
diff --git a/arch/x86/crypto/poly1305-sse2-x86_64.S b/arch/x86/crypto/poly1305-sse2-x86_64.S
index c88c670cb5fc..66715fbedc18 100644
--- a/arch/x86/crypto/poly1305-sse2-x86_64.S
+++ b/arch/x86/crypto/poly1305-sse2-x86_64.S
@@ -50,7 +50,7 @@ ORMASK: .octa 0x00000000010000000000000001000000
#define d3 %r11
#define d4 %r12

-ENTRY(poly1305_block_sse2)
+SYM_FUNC_START(poly1305_block_sse2)
# %rdi: Accumulator h[5]
# %rsi: 16 byte input block m
# %rdx: Poly1305 key r[5]
@@ -276,7 +276,7 @@ ENTRY(poly1305_block_sse2)
pop %r12
pop %rbx
ret
-ENDPROC(poly1305_block_sse2)
+SYM_FUNC_END(poly1305_block_sse2)


#define u0 0x00(%r8)
@@ -301,7 +301,7 @@ ENDPROC(poly1305_block_sse2)
#undef d0
#define d0 %r13

-ENTRY(poly1305_2block_sse2)
+SYM_FUNC_START(poly1305_2block_sse2)
# %rdi: Accumulator h[5]
# %rsi: 16 byte input block m
# %rdx: Poly1305 key r[5]
@@ -581,4 +581,4 @@ ENTRY(poly1305_2block_sse2)
pop %r12
pop %rbx
ret
-ENDPROC(poly1305_2block_sse2)
+SYM_FUNC_END(poly1305_2block_sse2)
diff --git a/arch/x86/crypto/salsa20-x86_64-asm_64.S b/arch/x86/crypto/salsa20-x86_64-asm_64.S
index 03a4918f41ee..5984d8c2edc5 100644
--- a/arch/x86/crypto/salsa20-x86_64-asm_64.S
+++ b/arch/x86/crypto/salsa20-x86_64-asm_64.S
@@ -2,7 +2,7 @@
#include <linux/linkage.h>

# enter salsa20_encrypt_bytes
-ENTRY(salsa20_encrypt_bytes)
+SYM_FUNC_START(salsa20_encrypt_bytes)
mov %rsp,%r11
and $31,%r11
add $256,%r11
@@ -802,4 +802,4 @@ ENTRY(salsa20_encrypt_bytes)
# comment:fp stack unchanged by jump
# goto bytesatleast1
jmp ._bytesatleast1
-ENDPROC(salsa20_encrypt_bytes)
+SYM_FUNC_END(salsa20_encrypt_bytes)
diff --git a/arch/x86/crypto/serpent-avx-x86_64-asm_64.S b/arch/x86/crypto/serpent-avx-x86_64-asm_64.S
index c2d4a1fc9ee8..72de86a8091e 100644
--- a/arch/x86/crypto/serpent-avx-x86_64-asm_64.S
+++ b/arch/x86/crypto/serpent-avx-x86_64-asm_64.S
@@ -677,7 +677,7 @@ SYM_FUNC_START_LOCAL(__serpent_dec_blk8_avx)
ret;
SYM_FUNC_END(__serpent_dec_blk8_avx)

-ENTRY(serpent_ecb_enc_8way_avx)
+SYM_FUNC_START(serpent_ecb_enc_8way_avx)
/* input:
* %rdi: ctx, CTX
* %rsi: dst
@@ -693,9 +693,9 @@ ENTRY(serpent_ecb_enc_8way_avx)

FRAME_END
ret;
-ENDPROC(serpent_ecb_enc_8way_avx)
+SYM_FUNC_END(serpent_ecb_enc_8way_avx)

-ENTRY(serpent_ecb_dec_8way_avx)
+SYM_FUNC_START(serpent_ecb_dec_8way_avx)
/* input:
* %rdi: ctx, CTX
* %rsi: dst
@@ -711,9 +711,9 @@ ENTRY(serpent_ecb_dec_8way_avx)

FRAME_END
ret;
-ENDPROC(serpent_ecb_dec_8way_avx)
+SYM_FUNC_END(serpent_ecb_dec_8way_avx)

-ENTRY(serpent_cbc_dec_8way_avx)
+SYM_FUNC_START(serpent_cbc_dec_8way_avx)
/* input:
* %rdi: ctx, CTX
* %rsi: dst
@@ -729,9 +729,9 @@ ENTRY(serpent_cbc_dec_8way_avx)

FRAME_END
ret;
-ENDPROC(serpent_cbc_dec_8way_avx)
+SYM_FUNC_END(serpent_cbc_dec_8way_avx)

-ENTRY(serpent_ctr_8way_avx)
+SYM_FUNC_START(serpent_ctr_8way_avx)
/* input:
* %rdi: ctx, CTX
* %rsi: dst
@@ -749,9 +749,9 @@ ENTRY(serpent_ctr_8way_avx)

FRAME_END
ret;
-ENDPROC(serpent_ctr_8way_avx)
+SYM_FUNC_END(serpent_ctr_8way_avx)

-ENTRY(serpent_xts_enc_8way_avx)
+SYM_FUNC_START(serpent_xts_enc_8way_avx)
/* input:
* %rdi: ctx, CTX
* %rsi: dst
@@ -771,9 +771,9 @@ ENTRY(serpent_xts_enc_8way_avx)

FRAME_END
ret;
-ENDPROC(serpent_xts_enc_8way_avx)
+SYM_FUNC_END(serpent_xts_enc_8way_avx)

-ENTRY(serpent_xts_dec_8way_avx)
+SYM_FUNC_START(serpent_xts_dec_8way_avx)
/* input:
* %rdi: ctx, CTX
* %rsi: dst
@@ -793,4 +793,4 @@ ENTRY(serpent_xts_dec_8way_avx)

FRAME_END
ret;
-ENDPROC(serpent_xts_dec_8way_avx)
+SYM_FUNC_END(serpent_xts_dec_8way_avx)
diff --git a/arch/x86/crypto/serpent-avx2-asm_64.S b/arch/x86/crypto/serpent-avx2-asm_64.S
index 52c527ce4b18..b866f1632803 100644
--- a/arch/x86/crypto/serpent-avx2-asm_64.S
+++ b/arch/x86/crypto/serpent-avx2-asm_64.S
@@ -673,7 +673,7 @@ SYM_FUNC_START_LOCAL(__serpent_dec_blk16)
ret;
SYM_FUNC_END(__serpent_dec_blk16)

-ENTRY(serpent_ecb_enc_16way)
+SYM_FUNC_START(serpent_ecb_enc_16way)
/* input:
* %rdi: ctx, CTX
* %rsi: dst
@@ -693,9 +693,9 @@ ENTRY(serpent_ecb_enc_16way)

FRAME_END
ret;
-ENDPROC(serpent_ecb_enc_16way)
+SYM_FUNC_END(serpent_ecb_enc_16way)

-ENTRY(serpent_ecb_dec_16way)
+SYM_FUNC_START(serpent_ecb_dec_16way)
/* input:
* %rdi: ctx, CTX
* %rsi: dst
@@ -715,9 +715,9 @@ ENTRY(serpent_ecb_dec_16way)

FRAME_END
ret;
-ENDPROC(serpent_ecb_dec_16way)
+SYM_FUNC_END(serpent_ecb_dec_16way)

-ENTRY(serpent_cbc_dec_16way)
+SYM_FUNC_START(serpent_cbc_dec_16way)
/* input:
* %rdi: ctx, CTX
* %rsi: dst
@@ -738,9 +738,9 @@ ENTRY(serpent_cbc_dec_16way)

FRAME_END
ret;
-ENDPROC(serpent_cbc_dec_16way)
+SYM_FUNC_END(serpent_cbc_dec_16way)

-ENTRY(serpent_ctr_16way)
+SYM_FUNC_START(serpent_ctr_16way)
/* input:
* %rdi: ctx, CTX
* %rsi: dst (16 blocks)
@@ -763,9 +763,9 @@ ENTRY(serpent_ctr_16way)

FRAME_END
ret;
-ENDPROC(serpent_ctr_16way)
+SYM_FUNC_END(serpent_ctr_16way)

-ENTRY(serpent_xts_enc_16way)
+SYM_FUNC_START(serpent_xts_enc_16way)
/* input:
* %rdi: ctx, CTX
* %rsi: dst (16 blocks)
@@ -789,9 +789,9 @@ ENTRY(serpent_xts_enc_16way)

FRAME_END
ret;
-ENDPROC(serpent_xts_enc_16way)
+SYM_FUNC_END(serpent_xts_enc_16way)

-ENTRY(serpent_xts_dec_16way)
+SYM_FUNC_START(serpent_xts_dec_16way)
/* input:
* %rdi: ctx, CTX
* %rsi: dst (16 blocks)
@@ -815,4 +815,4 @@ ENTRY(serpent_xts_dec_16way)

FRAME_END
ret;
-ENDPROC(serpent_xts_dec_16way)
+SYM_FUNC_END(serpent_xts_dec_16way)
diff --git a/arch/x86/crypto/serpent-sse2-x86_64-asm_64.S b/arch/x86/crypto/serpent-sse2-x86_64-asm_64.S
index acc066c7c6b2..bdeee900df63 100644
--- a/arch/x86/crypto/serpent-sse2-x86_64-asm_64.S
+++ b/arch/x86/crypto/serpent-sse2-x86_64-asm_64.S
@@ -634,7 +634,7 @@
pxor t0, x3; \
movdqu x3, (3*4*4)(out);

-ENTRY(__serpent_enc_blk_8way)
+SYM_FUNC_START(__serpent_enc_blk_8way)
/* input:
* %rdi: ctx, CTX
* %rsi: dst
@@ -697,9 +697,9 @@ ENTRY(__serpent_enc_blk_8way)
xor_blocks(%rax, RA2, RB2, RC2, RD2, RK0, RK1, RK2);

ret;
-ENDPROC(__serpent_enc_blk_8way)
+SYM_FUNC_END(__serpent_enc_blk_8way)

-ENTRY(serpent_dec_blk_8way)
+SYM_FUNC_START(serpent_dec_blk_8way)
/* input:
* %rdi: ctx, CTX
* %rsi: dst
@@ -751,4 +751,4 @@ ENTRY(serpent_dec_blk_8way)
write_blocks(%rax, RC2, RD2, RB2, RE2, RK0, RK1, RK2);

ret;
-ENDPROC(serpent_dec_blk_8way)
+SYM_FUNC_END(serpent_dec_blk_8way)
diff --git a/arch/x86/crypto/sha1-mb/sha1_mb_mgr_flush_avx2.S b/arch/x86/crypto/sha1-mb/sha1_mb_mgr_flush_avx2.S
index 7cfba738f104..a1be3b33990c 100644
--- a/arch/x86/crypto/sha1-mb/sha1_mb_mgr_flush_avx2.S
+++ b/arch/x86/crypto/sha1-mb/sha1_mb_mgr_flush_avx2.S
@@ -103,7 +103,7 @@ offset = \_offset

# JOB* sha1_mb_mgr_flush_avx2(MB_MGR *state)
# arg 1 : rcx : state
-ENTRY(sha1_mb_mgr_flush_avx2)
+SYM_FUNC_START(sha1_mb_mgr_flush_avx2)
FRAME_BEGIN
push %rbx

@@ -220,13 +220,13 @@ return:
return_null:
xor job_rax, job_rax
jmp return
-ENDPROC(sha1_mb_mgr_flush_avx2)
+SYM_FUNC_END(sha1_mb_mgr_flush_avx2)


#################################################################

.align 16
-ENTRY(sha1_mb_mgr_get_comp_job_avx2)
+SYM_FUNC_START(sha1_mb_mgr_get_comp_job_avx2)
push %rbx

## if bit 32+3 is set, then all lanes are empty
@@ -279,7 +279,7 @@ ENTRY(sha1_mb_mgr_get_comp_job_avx2)
xor job_rax, job_rax
pop %rbx
ret
-ENDPROC(sha1_mb_mgr_get_comp_job_avx2)
+SYM_FUNC_END(sha1_mb_mgr_get_comp_job_avx2)

.section .rodata.cst16.clear_low_nibble, "aM", @progbits, 16
.align 16
diff --git a/arch/x86/crypto/sha1-mb/sha1_mb_mgr_submit_avx2.S b/arch/x86/crypto/sha1-mb/sha1_mb_mgr_submit_avx2.S
index 7a93b1c0d69a..a46e3b04385e 100644
--- a/arch/x86/crypto/sha1-mb/sha1_mb_mgr_submit_avx2.S
+++ b/arch/x86/crypto/sha1-mb/sha1_mb_mgr_submit_avx2.S
@@ -98,7 +98,7 @@ lane_data = %r10
# JOB* submit_mb_mgr_submit_avx2(MB_MGR *state, job_sha1 *job)
# arg 1 : rcx : state
# arg 2 : rdx : job
-ENTRY(sha1_mb_mgr_submit_avx2)
+SYM_FUNC_START(sha1_mb_mgr_submit_avx2)
FRAME_BEGIN
push %rbx
push %r12
@@ -201,7 +201,7 @@ return_null:
xor job_rax, job_rax
jmp return

-ENDPROC(sha1_mb_mgr_submit_avx2)
+SYM_FUNC_END(sha1_mb_mgr_submit_avx2)

.section .rodata.cst16.clear_low_nibble, "aM", @progbits, 16
.align 16
diff --git a/arch/x86/crypto/sha1-mb/sha1_x8_avx2.S b/arch/x86/crypto/sha1-mb/sha1_x8_avx2.S
index 20f77aa633de..04d763520a82 100644
--- a/arch/x86/crypto/sha1-mb/sha1_x8_avx2.S
+++ b/arch/x86/crypto/sha1-mb/sha1_x8_avx2.S
@@ -294,7 +294,7 @@ W14 = TMP_
# arg 1 : pointer to array[4] of pointer to input data
# arg 2 : size (in blocks) ;; assumed to be >= 1
#
-ENTRY(sha1_x8_avx2)
+SYM_FUNC_START(sha1_x8_avx2)

# save callee-saved clobbered registers to comply with C function ABI
push %r12
@@ -458,7 +458,7 @@ lloop:
pop %r12

ret
-ENDPROC(sha1_x8_avx2)
+SYM_FUNC_END(sha1_x8_avx2)


.section .rodata.cst32.K00_19, "aM", @progbits, 32
diff --git a/arch/x86/crypto/sha1_avx2_x86_64_asm.S b/arch/x86/crypto/sha1_avx2_x86_64_asm.S
index 9f712a7dfd79..6decc85ef7b7 100644
--- a/arch/x86/crypto/sha1_avx2_x86_64_asm.S
+++ b/arch/x86/crypto/sha1_avx2_x86_64_asm.S
@@ -634,7 +634,7 @@ _loop3:
* param: function's name
*/
.macro SHA1_VECTOR_ASM name
- ENTRY(\name)
+ SYM_FUNC_START(\name)

push %rbx
push %r12
@@ -676,7 +676,7 @@ _loop3:

ret

- ENDPROC(\name)
+ SYM_FUNC_END(\name)
.endm

.section .rodata
diff --git a/arch/x86/crypto/sha1_ni_asm.S b/arch/x86/crypto/sha1_ni_asm.S
index ebbdba72ae07..11efe3a45a1f 100644
--- a/arch/x86/crypto/sha1_ni_asm.S
+++ b/arch/x86/crypto/sha1_ni_asm.S
@@ -95,7 +95,7 @@
*/
.text
.align 32
-ENTRY(sha1_ni_transform)
+SYM_FUNC_START(sha1_ni_transform)
mov %rsp, RSPSAVE
sub $FRAME_SIZE, %rsp
and $~0xF, %rsp
@@ -291,7 +291,7 @@ ENTRY(sha1_ni_transform)
mov RSPSAVE, %rsp

ret
-ENDPROC(sha1_ni_transform)
+SYM_FUNC_END(sha1_ni_transform)

.section .rodata.cst16.PSHUFFLE_BYTE_FLIP_MASK, "aM", @progbits, 16
.align 16
diff --git a/arch/x86/crypto/sha1_ssse3_asm.S b/arch/x86/crypto/sha1_ssse3_asm.S
index 6204bd53528c..c253255fd4c1 100644
--- a/arch/x86/crypto/sha1_ssse3_asm.S
+++ b/arch/x86/crypto/sha1_ssse3_asm.S
@@ -71,7 +71,7 @@
* param: function's name
*/
.macro SHA1_VECTOR_ASM name
- ENTRY(\name)
+ SYM_FUNC_START(\name)

push %rbx
push %r12
@@ -105,7 +105,7 @@
pop %rbx
ret

- ENDPROC(\name)
+ SYM_FUNC_END(\name)
.endm

/*
diff --git a/arch/x86/crypto/sha256-avx-asm.S b/arch/x86/crypto/sha256-avx-asm.S
index 001bbcf93c79..22e14c8dd2e4 100644
--- a/arch/x86/crypto/sha256-avx-asm.S
+++ b/arch/x86/crypto/sha256-avx-asm.S
@@ -347,7 +347,7 @@ a = TMP_
## arg 3 : Num blocks
########################################################################
.text
-ENTRY(sha256_transform_avx)
+SYM_FUNC_START(sha256_transform_avx)
.align 32
pushq %rbx
pushq %r12
@@ -460,7 +460,7 @@ done_hash:
popq %r12
popq %rbx
ret
-ENDPROC(sha256_transform_avx)
+SYM_FUNC_END(sha256_transform_avx)

.section .rodata.cst256.K256, "aM", @progbits, 256
.align 64
diff --git a/arch/x86/crypto/sha256-avx2-asm.S b/arch/x86/crypto/sha256-avx2-asm.S
index 1420db15dcdd..519b551ad576 100644
--- a/arch/x86/crypto/sha256-avx2-asm.S
+++ b/arch/x86/crypto/sha256-avx2-asm.S
@@ -526,7 +526,7 @@ STACK_SIZE = _RSP + _RSP_SIZE
## arg 3 : Num blocks
########################################################################
.text
-ENTRY(sha256_transform_rorx)
+SYM_FUNC_START(sha256_transform_rorx)
.align 32
pushq %rbx
pushq %r12
@@ -713,7 +713,7 @@ done_hash:
popq %r12
popq %rbx
ret
-ENDPROC(sha256_transform_rorx)
+SYM_FUNC_END(sha256_transform_rorx)

.section .rodata.cst512.K256, "aM", @progbits, 512
.align 64
diff --git a/arch/x86/crypto/sha256-mb/sha256_mb_mgr_flush_avx2.S b/arch/x86/crypto/sha256-mb/sha256_mb_mgr_flush_avx2.S
index 16c4ccb1f154..11f00ee0a3a4 100644
--- a/arch/x86/crypto/sha256-mb/sha256_mb_mgr_flush_avx2.S
+++ b/arch/x86/crypto/sha256-mb/sha256_mb_mgr_flush_avx2.S
@@ -101,7 +101,7 @@ offset = \_offset

# JOB_SHA256* sha256_mb_mgr_flush_avx2(MB_MGR *state)
# arg 1 : rcx : state
-ENTRY(sha256_mb_mgr_flush_avx2)
+SYM_FUNC_START(sha256_mb_mgr_flush_avx2)
FRAME_BEGIN
push %rbx

@@ -220,12 +220,12 @@ return:
return_null:
xor job_rax, job_rax
jmp return
-ENDPROC(sha256_mb_mgr_flush_avx2)
+SYM_FUNC_END(sha256_mb_mgr_flush_avx2)

##############################################################################

.align 16
-ENTRY(sha256_mb_mgr_get_comp_job_avx2)
+SYM_FUNC_START(sha256_mb_mgr_get_comp_job_avx2)
push %rbx

## if bit 32+3 is set, then all lanes are empty
@@ -282,7 +282,7 @@ ENTRY(sha256_mb_mgr_get_comp_job_avx2)
xor job_rax, job_rax
pop %rbx
ret
-ENDPROC(sha256_mb_mgr_get_comp_job_avx2)
+SYM_FUNC_END(sha256_mb_mgr_get_comp_job_avx2)

.section .rodata.cst16.clear_low_nibble, "aM", @progbits, 16
.align 16
diff --git a/arch/x86/crypto/sha256-mb/sha256_mb_mgr_submit_avx2.S b/arch/x86/crypto/sha256-mb/sha256_mb_mgr_submit_avx2.S
index b36ae7454084..2213c04a30dc 100644
--- a/arch/x86/crypto/sha256-mb/sha256_mb_mgr_submit_avx2.S
+++ b/arch/x86/crypto/sha256-mb/sha256_mb_mgr_submit_avx2.S
@@ -96,7 +96,7 @@ lane_data = %r10
# JOB* sha256_mb_mgr_submit_avx2(MB_MGR *state, JOB_SHA256 *job)
# arg 1 : rcx : state
# arg 2 : rdx : job
-ENTRY(sha256_mb_mgr_submit_avx2)
+SYM_FUNC_START(sha256_mb_mgr_submit_avx2)
FRAME_BEGIN
push %rbx
push %r12
@@ -206,7 +206,7 @@ return_null:
xor job_rax, job_rax
jmp return

-ENDPROC(sha256_mb_mgr_submit_avx2)
+SYM_FUNC_END(sha256_mb_mgr_submit_avx2)

.section .rodata.cst16.clear_low_nibble, "aM", @progbits, 16
.align 16
diff --git a/arch/x86/crypto/sha256-mb/sha256_x8_avx2.S b/arch/x86/crypto/sha256-mb/sha256_x8_avx2.S
index 1687c80c5995..042d2381f435 100644
--- a/arch/x86/crypto/sha256-mb/sha256_x8_avx2.S
+++ b/arch/x86/crypto/sha256-mb/sha256_x8_avx2.S
@@ -280,7 +280,7 @@ a = TMP_
# general registers preserved in outer calling routine
# outer calling routine saves all the XMM registers
# save rsp, allocate 32-byte aligned for local variables
-ENTRY(sha256_x8_avx2)
+SYM_FUNC_START(sha256_x8_avx2)

# save callee-saved clobbered registers to comply with C function ABI
push %r12
@@ -436,7 +436,7 @@ Lrounds_16_xx:
pop %r12

ret
-ENDPROC(sha256_x8_avx2)
+SYM_FUNC_END(sha256_x8_avx2)

.section .rodata.K256_8, "a", @progbits
.align 64
diff --git a/arch/x86/crypto/sha256-ssse3-asm.S b/arch/x86/crypto/sha256-ssse3-asm.S
index c6c05ed2c16a..69cc2f91dc4c 100644
--- a/arch/x86/crypto/sha256-ssse3-asm.S
+++ b/arch/x86/crypto/sha256-ssse3-asm.S
@@ -353,7 +353,7 @@ a = TMP_
## arg 3 : Num blocks
########################################################################
.text
-ENTRY(sha256_transform_ssse3)
+SYM_FUNC_START(sha256_transform_ssse3)
.align 32
pushq %rbx
pushq %r12
@@ -471,7 +471,7 @@ done_hash:
popq %rbx

ret
-ENDPROC(sha256_transform_ssse3)
+SYM_FUNC_END(sha256_transform_ssse3)

.section .rodata.cst256.K256, "aM", @progbits, 256
.align 64
diff --git a/arch/x86/crypto/sha256_ni_asm.S b/arch/x86/crypto/sha256_ni_asm.S
index fb58f58ecfbc..7abade04a3a3 100644
--- a/arch/x86/crypto/sha256_ni_asm.S
+++ b/arch/x86/crypto/sha256_ni_asm.S
@@ -97,7 +97,7 @@

.text
.align 32
-ENTRY(sha256_ni_transform)
+SYM_FUNC_START(sha256_ni_transform)

shl $6, NUM_BLKS /* convert to bytes */
jz .Ldone_hash
@@ -327,7 +327,7 @@ ENTRY(sha256_ni_transform)
.Ldone_hash:

ret
-ENDPROC(sha256_ni_transform)
+SYM_FUNC_END(sha256_ni_transform)

.section .rodata.cst256.K256, "aM", @progbits, 256
.align 64
diff --git a/arch/x86/crypto/sha512-avx-asm.S b/arch/x86/crypto/sha512-avx-asm.S
index 39235fefe6f7..3704ddd7e5d5 100644
--- a/arch/x86/crypto/sha512-avx-asm.S
+++ b/arch/x86/crypto/sha512-avx-asm.S
@@ -277,7 +277,7 @@ frame_size = frame_GPRSAVE + GPRSAVE_SIZE
# message blocks.
# L is the message length in SHA512 blocks
########################################################################
-ENTRY(sha512_transform_avx)
+SYM_FUNC_START(sha512_transform_avx)
cmp $0, msglen
je nowork

@@ -365,7 +365,7 @@ updateblock:

nowork:
ret
-ENDPROC(sha512_transform_avx)
+SYM_FUNC_END(sha512_transform_avx)

########################################################################
### Binary Data
diff --git a/arch/x86/crypto/sha512-avx2-asm.S b/arch/x86/crypto/sha512-avx2-asm.S
index b16d56005162..80d830e7ee09 100644
--- a/arch/x86/crypto/sha512-avx2-asm.S
+++ b/arch/x86/crypto/sha512-avx2-asm.S
@@ -569,7 +569,7 @@ frame_size = frame_GPRSAVE + GPRSAVE_SIZE
# message blocks.
# L is the message length in SHA512 blocks
########################################################################
-ENTRY(sha512_transform_rorx)
+SYM_FUNC_START(sha512_transform_rorx)
# Allocate Stack Space
mov %rsp, %rax
sub $frame_size, %rsp
@@ -682,7 +682,7 @@ done_hash:
# Restore Stack Pointer
mov frame_RSPSAVE(%rsp), %rsp
ret
-ENDPROC(sha512_transform_rorx)
+SYM_FUNC_END(sha512_transform_rorx)

########################################################################
### Binary Data
diff --git a/arch/x86/crypto/sha512-mb/sha512_mb_mgr_flush_avx2.S b/arch/x86/crypto/sha512-mb/sha512_mb_mgr_flush_avx2.S
index 7c629caebc05..8642f3a04388 100644
--- a/arch/x86/crypto/sha512-mb/sha512_mb_mgr_flush_avx2.S
+++ b/arch/x86/crypto/sha512-mb/sha512_mb_mgr_flush_avx2.S
@@ -107,7 +107,7 @@ offset = \_offset

# JOB* sha512_mb_mgr_flush_avx2(MB_MGR *state)
# arg 1 : rcx : state
-ENTRY(sha512_mb_mgr_flush_avx2)
+SYM_FUNC_START(sha512_mb_mgr_flush_avx2)
FRAME_BEGIN
push %rbx

@@ -217,10 +217,10 @@ return:
return_null:
xor job_rax, job_rax
jmp return
-ENDPROC(sha512_mb_mgr_flush_avx2)
+SYM_FUNC_END(sha512_mb_mgr_flush_avx2)
.align 16

-ENTRY(sha512_mb_mgr_get_comp_job_avx2)
+SYM_FUNC_START(sha512_mb_mgr_get_comp_job_avx2)
push %rbx

mov _unused_lanes(state), unused_lanes
@@ -279,7 +279,7 @@ ENTRY(sha512_mb_mgr_get_comp_job_avx2)
xor job_rax, job_rax
pop %rbx
ret
-ENDPROC(sha512_mb_mgr_get_comp_job_avx2)
+SYM_FUNC_END(sha512_mb_mgr_get_comp_job_avx2)

.section .rodata.cst8.one, "aM", @progbits, 8
.align 8
diff --git a/arch/x86/crypto/sha512-mb/sha512_mb_mgr_submit_avx2.S b/arch/x86/crypto/sha512-mb/sha512_mb_mgr_submit_avx2.S
index 4ba709ba78e5..62932723d6e9 100644
--- a/arch/x86/crypto/sha512-mb/sha512_mb_mgr_submit_avx2.S
+++ b/arch/x86/crypto/sha512-mb/sha512_mb_mgr_submit_avx2.S
@@ -98,7 +98,7 @@
# JOB* sha512_mb_mgr_submit_avx2(MB_MGR *state, JOB *job)
# arg 1 : rcx : state
# arg 2 : rdx : job
-ENTRY(sha512_mb_mgr_submit_avx2)
+SYM_FUNC_START(sha512_mb_mgr_submit_avx2)
FRAME_BEGIN
push %rbx
push %r12
@@ -208,7 +208,7 @@ return:
return_null:
xor job_rax, job_rax
jmp return
-ENDPROC(sha512_mb_mgr_submit_avx2)
+SYM_FUNC_END(sha512_mb_mgr_submit_avx2)

/* UNUSED?
.section .rodata.cst16, "aM", @progbits, 16
diff --git a/arch/x86/crypto/sha512-mb/sha512_x4_avx2.S b/arch/x86/crypto/sha512-mb/sha512_x4_avx2.S
index e22e907643a6..504065d19e03 100644
--- a/arch/x86/crypto/sha512-mb/sha512_x4_avx2.S
+++ b/arch/x86/crypto/sha512-mb/sha512_x4_avx2.S
@@ -239,7 +239,7 @@ a = TMP_
# void sha512_x4_avx2(void *STATE, const int INP_SIZE)
# arg 1 : STATE : pointer to input data
# arg 2 : INP_SIZE : size of data in blocks (assumed >= 1)
-ENTRY(sha512_x4_avx2)
+SYM_FUNC_START(sha512_x4_avx2)
# general registers preserved in outer calling routine
# outer calling routine saves all the XMM registers
# save callee-saved clobbered registers to comply with C function ABI
@@ -359,7 +359,7 @@ Lrounds_16_xx:

# outer calling routine restores XMM and other GP registers
ret
-ENDPROC(sha512_x4_avx2)
+SYM_FUNC_END(sha512_x4_avx2)

.section .rodata.K512_4, "a", @progbits
.align 64
diff --git a/arch/x86/crypto/sha512-ssse3-asm.S b/arch/x86/crypto/sha512-ssse3-asm.S
index 66bbd9058a90..838f984e95d9 100644
--- a/arch/x86/crypto/sha512-ssse3-asm.S
+++ b/arch/x86/crypto/sha512-ssse3-asm.S
@@ -275,7 +275,7 @@ frame_size = frame_GPRSAVE + GPRSAVE_SIZE
# message blocks.
# L is the message length in SHA512 blocks.
########################################################################
-ENTRY(sha512_transform_ssse3)
+SYM_FUNC_START(sha512_transform_ssse3)

cmp $0, msglen
je nowork
@@ -364,7 +364,7 @@ updateblock:

nowork:
ret
-ENDPROC(sha512_transform_ssse3)
+SYM_FUNC_END(sha512_transform_ssse3)

########################################################################
### Binary Data
diff --git a/arch/x86/crypto/twofish-avx-x86_64-asm_64.S b/arch/x86/crypto/twofish-avx-x86_64-asm_64.S
index 96ddfda4d7b2..16e53c98e6a0 100644
--- a/arch/x86/crypto/twofish-avx-x86_64-asm_64.S
+++ b/arch/x86/crypto/twofish-avx-x86_64-asm_64.S
@@ -330,7 +330,7 @@ SYM_FUNC_START_LOCAL(__twofish_dec_blk8)
ret;
SYM_FUNC_END(__twofish_dec_blk8)

-ENTRY(twofish_ecb_enc_8way)
+SYM_FUNC_START(twofish_ecb_enc_8way)
/* input:
* %rdi: ctx, CTX
* %rsi: dst
@@ -348,9 +348,9 @@ ENTRY(twofish_ecb_enc_8way)

FRAME_END
ret;
-ENDPROC(twofish_ecb_enc_8way)
+SYM_FUNC_END(twofish_ecb_enc_8way)

-ENTRY(twofish_ecb_dec_8way)
+SYM_FUNC_START(twofish_ecb_dec_8way)
/* input:
* %rdi: ctx, CTX
* %rsi: dst
@@ -368,9 +368,9 @@ ENTRY(twofish_ecb_dec_8way)

FRAME_END
ret;
-ENDPROC(twofish_ecb_dec_8way)
+SYM_FUNC_END(twofish_ecb_dec_8way)

-ENTRY(twofish_cbc_dec_8way)
+SYM_FUNC_START(twofish_cbc_dec_8way)
/* input:
* %rdi: ctx, CTX
* %rsi: dst
@@ -393,9 +393,9 @@ ENTRY(twofish_cbc_dec_8way)

FRAME_END
ret;
-ENDPROC(twofish_cbc_dec_8way)
+SYM_FUNC_END(twofish_cbc_dec_8way)

-ENTRY(twofish_ctr_8way)
+SYM_FUNC_START(twofish_ctr_8way)
/* input:
* %rdi: ctx, CTX
* %rsi: dst
@@ -420,9 +420,9 @@ ENTRY(twofish_ctr_8way)

FRAME_END
ret;
-ENDPROC(twofish_ctr_8way)
+SYM_FUNC_END(twofish_ctr_8way)

-ENTRY(twofish_xts_enc_8way)
+SYM_FUNC_START(twofish_xts_enc_8way)
/* input:
* %rdi: ctx, CTX
* %rsi: dst
@@ -444,9 +444,9 @@ ENTRY(twofish_xts_enc_8way)

FRAME_END
ret;
-ENDPROC(twofish_xts_enc_8way)
+SYM_FUNC_END(twofish_xts_enc_8way)

-ENTRY(twofish_xts_dec_8way)
+SYM_FUNC_START(twofish_xts_dec_8way)
/* input:
* %rdi: ctx, CTX
* %rsi: dst
@@ -468,4 +468,4 @@ ENTRY(twofish_xts_dec_8way)

FRAME_END
ret;
-ENDPROC(twofish_xts_dec_8way)
+SYM_FUNC_END(twofish_xts_dec_8way)
diff --git a/arch/x86/crypto/twofish-x86_64-asm_64-3way.S b/arch/x86/crypto/twofish-x86_64-asm_64-3way.S
index e7273a606a07..c830aef77070 100644
--- a/arch/x86/crypto/twofish-x86_64-asm_64-3way.S
+++ b/arch/x86/crypto/twofish-x86_64-asm_64-3way.S
@@ -235,7 +235,7 @@
rorq $32, RAB2; \
outunpack3(mov, RIO, 2, RAB, 2);

-ENTRY(__twofish_enc_blk_3way)
+SYM_FUNC_START(__twofish_enc_blk_3way)
/* input:
* %rdi: ctx, CTX
* %rsi: dst
@@ -282,9 +282,9 @@ ENTRY(__twofish_enc_blk_3way)
popq %r12;
popq %r13;
ret;
-ENDPROC(__twofish_enc_blk_3way)
+SYM_FUNC_END(__twofish_enc_blk_3way)

-ENTRY(twofish_dec_blk_3way)
+SYM_FUNC_START(twofish_dec_blk_3way)
/* input:
* %rdi: ctx, CTX
* %rsi: dst
@@ -317,4 +317,4 @@ ENTRY(twofish_dec_blk_3way)
popq %r12;
popq %r13;
ret;
-ENDPROC(twofish_dec_blk_3way)
+SYM_FUNC_END(twofish_dec_blk_3way)
diff --git a/arch/x86/crypto/twofish-x86_64-asm_64.S b/arch/x86/crypto/twofish-x86_64-asm_64.S
index a350c990dc86..74ef6c55d75f 100644
--- a/arch/x86/crypto/twofish-x86_64-asm_64.S
+++ b/arch/x86/crypto/twofish-x86_64-asm_64.S
@@ -215,7 +215,7 @@
xor %r8d, d ## D;\
ror $1, d ## D;

-ENTRY(twofish_enc_blk)
+SYM_FUNC_START(twofish_enc_blk)
pushq R1

/* %rdi contains the ctx address */
@@ -266,9 +266,9 @@ ENTRY(twofish_enc_blk)
popq R1
movl $1,%eax
ret
-ENDPROC(twofish_enc_blk)
+SYM_FUNC_END(twofish_enc_blk)

-ENTRY(twofish_dec_blk)
+SYM_FUNC_START(twofish_dec_blk)
pushq R1

/* %rdi contains the ctx address */
@@ -318,4 +318,4 @@ ENTRY(twofish_dec_blk)
popq R1
movl $1,%eax
ret
-ENDPROC(twofish_dec_blk)
+SYM_FUNC_END(twofish_dec_blk)
diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
index 6f85f43a4877..3a9f0dd209b2 100644
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -15,7 +15,7 @@
* at the top of the kernel process stack.
*
* Some macro usage:
- * - ENTRY/END: Define functions in the symbol table.
+ * - SYM_FUNC_START/END:Define functions in the symbol table.
* - TRACE_IRQ_*: Trace hardirq state for lock debugging.
* - idtentry: Define exception entry points.
*/
@@ -1007,7 +1007,7 @@ idtentry simd_coprocessor_error do_simd_coprocessor_error has_error_code=0
* Reload gs selector with exception handling
* edi: new selector
*/
-ENTRY(native_load_gs_index)
+SYM_FUNC_START(native_load_gs_index)
FRAME_BEGIN
pushfq
DISABLE_INTERRUPTS(CLBR_ANY & ~CLBR_RDI)
@@ -1021,7 +1021,7 @@ ENTRY(native_load_gs_index)
popfq
FRAME_END
ret
-ENDPROC(native_load_gs_index)
+SYM_FUNC_END(native_load_gs_index)
EXPORT_SYMBOL(native_load_gs_index)

_ASM_EXTABLE(.Lgs_change, bad_gs)
@@ -1042,7 +1042,7 @@ SYM_CODE_END(bad_gs)
.previous

/* Call softirq on interrupt stack. Interrupts are off. */
-ENTRY(do_softirq_own_stack)
+SYM_FUNC_START(do_softirq_own_stack)
pushq %rbp
mov %rsp, %rbp
ENTER_IRQ_STACK regs=0 old_rsp=%r11
@@ -1050,7 +1050,7 @@ ENTRY(do_softirq_own_stack)
LEAVE_IRQ_STACK regs=0
leaveq
ret
-ENDPROC(do_softirq_own_stack)
+SYM_FUNC_END(do_softirq_own_stack)

#ifdef CONFIG_XEN
idtentry hypervisor_callback xen_do_hypervisor_callback has_error_code=0
diff --git a/arch/x86/entry/entry_64_compat.S b/arch/x86/entry/entry_64_compat.S
index d03ddfc959e6..19bf98256174 100644
--- a/arch/x86/entry/entry_64_compat.S
+++ b/arch/x86/entry/entry_64_compat.S
@@ -46,7 +46,7 @@
* ebp user stack
* 0(%ebp) arg6
*/
-ENTRY(entry_SYSENTER_compat)
+SYM_FUNC_START(entry_SYSENTER_compat)
/* Interrupts are off on entry. */
SWAPGS

@@ -147,7 +147,7 @@ ENTRY(entry_SYSENTER_compat)
popfq
jmp .Lsysenter_flags_fixed
SYM_INNER_LABEL(__end_entry_SYSENTER_compat, SYM_L_GLOBAL)
-ENDPROC(entry_SYSENTER_compat)
+SYM_FUNC_END(entry_SYSENTER_compat)

/*
* 32-bit SYSCALL entry.
diff --git a/arch/x86/kernel/acpi/wakeup_64.S b/arch/x86/kernel/acpi/wakeup_64.S
index 6c60fe346583..042fd30ac493 100644
--- a/arch/x86/kernel/acpi/wakeup_64.S
+++ b/arch/x86/kernel/acpi/wakeup_64.S
@@ -13,7 +13,7 @@
/*
* Hooray, we are in Long 64-bit mode (but still running in low memory)
*/
-ENTRY(wakeup_long64)
+SYM_FUNC_START(wakeup_long64)
movq saved_magic, %rax
movq $0x123456789abcdef0, %rdx
cmpq %rdx, %rax
@@ -34,13 +34,13 @@ ENTRY(wakeup_long64)

movq saved_rip, %rax
jmp *%rax
-ENDPROC(wakeup_long64)
+SYM_FUNC_END(wakeup_long64)

SYM_CODE_START_LOCAL(bogus_64_magic)
jmp bogus_64_magic
SYM_CODE_END(bogus_64_magic)

-ENTRY(do_suspend_lowlevel)
+SYM_FUNC_START(do_suspend_lowlevel)
FRAME_BEGIN
subq $8, %rsp
xorl %eax, %eax
@@ -123,7 +123,7 @@ ENTRY(do_suspend_lowlevel)
addq $8, %rsp
FRAME_END
jmp restore_processor_state
-ENDPROC(do_suspend_lowlevel)
+SYM_FUNC_END(do_suspend_lowlevel)

.data
saved_rbp: .quad 0
diff --git a/arch/x86/kernel/ftrace_64.S b/arch/x86/kernel/ftrace_64.S
index 51970806c2df..68b8c4b3e543 100644
--- a/arch/x86/kernel/ftrace_64.S
+++ b/arch/x86/kernel/ftrace_64.S
@@ -150,11 +150,11 @@ EXPORT_SYMBOL(mcount)

#ifdef CONFIG_DYNAMIC_FTRACE

-ENTRY(function_hook)
+SYM_FUNC_START(function_hook)
retq
-ENDPROC(function_hook)
+SYM_FUNC_END(function_hook)

-ENTRY(ftrace_caller)
+SYM_FUNC_START(ftrace_caller)
/* save_mcount_regs fills in first two parameters */
save_mcount_regs

@@ -188,9 +188,9 @@ SYM_INNER_LABEL(ftrace_graph_call, SYM_L_GLOBAL)
/* This is weak to keep gas from relaxing the jumps */
WEAK(ftrace_stub)
retq
-ENDPROC(ftrace_caller)
+SYM_FUNC_END(ftrace_caller)

-ENTRY(ftrace_regs_caller)
+SYM_FUNC_START(ftrace_regs_caller)
/* Save the current flags before any operations that can change them */
pushfq

@@ -259,12 +259,12 @@ SYM_INNER_LABEL(ftrace_regs_caller_end, SYM_L_GLOBAL)

jmp ftrace_epilogue

-ENDPROC(ftrace_regs_caller)
+SYM_FUNC_END(ftrace_regs_caller)


#else /* ! CONFIG_DYNAMIC_FTRACE */

-ENTRY(function_hook)
+SYM_FUNC_START(function_hook)
cmpq $ftrace_stub, ftrace_trace_function
jnz trace

@@ -295,11 +295,11 @@ trace:
restore_mcount_regs

jmp fgraph_trace
-ENDPROC(function_hook)
+SYM_FUNC_END(function_hook)
#endif /* CONFIG_DYNAMIC_FTRACE */

#ifdef CONFIG_FUNCTION_GRAPH_TRACER
-ENTRY(ftrace_graph_caller)
+SYM_FUNC_START(ftrace_graph_caller)
/* Saves rbp into %rdx and fills first parameter */
save_mcount_regs

@@ -317,7 +317,7 @@ ENTRY(ftrace_graph_caller)
restore_mcount_regs

retq
-ENDPROC(ftrace_graph_caller)
+SYM_FUNC_END(ftrace_graph_caller)

SYM_CODE_START(return_to_handler)
UNWIND_HINT_EMPTY
diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S
index 48e71043b99c..f4383f4d41b1 100644
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -92,7 +92,7 @@ SYM_CODE_START_NOALIGN(startup_64)
jmp 1f
SYM_CODE_END(startup_64)

-ENTRY(secondary_startup_64)
+SYM_CODE_START(secondary_startup_64)
UNWIND_HINT_EMPTY
/*
* At this point the CPU runs in 64bit mode CS.L = 1 CS.D = 0,
@@ -242,7 +242,7 @@ ENTRY(secondary_startup_64)
pushq %rax # target address in negative space
lretq
.Lafter_lret:
-END(secondary_startup_64)
+SYM_CODE_END(secondary_startup_64)

#include "verify_cpu.S"

@@ -252,11 +252,11 @@ END(secondary_startup_64)
* up already except stack. We just set up stack here. Then call
* start_secondary() via .Ljump_to_C_code.
*/
-ENTRY(start_cpu0)
+SYM_FUNC_START(start_cpu0)
movq initial_stack(%rip), %rsp
UNWIND_HINT_EMPTY
jmp .Ljump_to_C_code
-ENDPROC(start_cpu0)
+SYM_FUNC_END(start_cpu0)
#endif

/* Both SMP bootup and ACPI suspend change these variables */
@@ -273,7 +273,7 @@ SYM_DATA(initial_stack,
__FINITDATA

__INIT
-ENTRY(early_idt_handler_array)
+SYM_CODE_START(early_idt_handler_array)
i = 0
.rept NUM_EXCEPTION_VECTORS
.if ((EXCEPTION_ERRCODE_MASK >> i) & 1) == 0
@@ -289,7 +289,7 @@ ENTRY(early_idt_handler_array)
.fill early_idt_handler_array + i*EARLY_IDT_HANDLER_SIZE - ., 1, 0xcc
.endr
UNWIND_HINT_IRET_REGS offset=16
-END(early_idt_handler_array)
+SYM_CODE_END(early_idt_handler_array)

SYM_CODE_START_LOCAL(early_idt_handler_common)
/*
diff --git a/arch/x86/lib/checksum_32.S b/arch/x86/lib/checksum_32.S
index 46e71a74e612..28a148de1843 100644
--- a/arch/x86/lib/checksum_32.S
+++ b/arch/x86/lib/checksum_32.S
@@ -284,7 +284,7 @@ unsigned int csum_partial_copy_generic (const char *src, char *dst,
#define ARGBASE 16
#define FP 12

-ENTRY(csum_partial_copy_generic)
+SYM_FUNC_START(csum_partial_copy_generic)
subl $4,%esp
pushl %edi
pushl %esi
@@ -402,7 +402,7 @@ DST( movb %cl, (%edi) )
popl %edi
popl %ecx # equivalent to addl $4,%esp
ret
-ENDPROC(csum_partial_copy_generic)
+SYM_FUNC_END(csum_partial_copy_generic)

#else

@@ -420,7 +420,7 @@ ENDPROC(csum_partial_copy_generic)

#define ARGBASE 12

-ENTRY(csum_partial_copy_generic)
+SYM_FUNC_START(csum_partial_copy_generic)
pushl %ebx
pushl %edi
pushl %esi
@@ -487,7 +487,7 @@ DST( movb %dl, (%edi) )
popl %edi
popl %ebx
ret
-ENDPROC(csum_partial_copy_generic)
+SYM_FUNC_END(csum_partial_copy_generic)

#undef ROUND
#undef ROUND1
diff --git a/arch/x86/lib/clear_page_64.S b/arch/x86/lib/clear_page_64.S
index 88acd349911b..47aa2830010b 100644
--- a/arch/x86/lib/clear_page_64.S
+++ b/arch/x86/lib/clear_page_64.S
@@ -12,15 +12,15 @@
* Zero a page.
* %rdi - page
*/
-ENTRY(clear_page_rep)
+SYM_FUNC_START(clear_page_rep)
movl $4096/8,%ecx
xorl %eax,%eax
rep stosq
ret
-ENDPROC(clear_page_rep)
+SYM_FUNC_END(clear_page_rep)
EXPORT_SYMBOL_GPL(clear_page_rep)

-ENTRY(clear_page_orig)
+SYM_FUNC_START(clear_page_orig)
xorl %eax,%eax
movl $4096/64,%ecx
.p2align 4
@@ -39,13 +39,13 @@ ENTRY(clear_page_orig)
jnz .Lloop
nop
ret
-ENDPROC(clear_page_orig)
+SYM_FUNC_END(clear_page_orig)
EXPORT_SYMBOL_GPL(clear_page_orig)

-ENTRY(clear_page_erms)
+SYM_FUNC_START(clear_page_erms)
movl $4096,%ecx
xorl %eax,%eax
rep stosb
ret
-ENDPROC(clear_page_erms)
+SYM_FUNC_END(clear_page_erms)
EXPORT_SYMBOL_GPL(clear_page_erms)
diff --git a/arch/x86/lib/cmpxchg16b_emu.S b/arch/x86/lib/cmpxchg16b_emu.S
index 9b330242e740..b6ba6360b3ca 100644
--- a/arch/x86/lib/cmpxchg16b_emu.S
+++ b/arch/x86/lib/cmpxchg16b_emu.S
@@ -19,7 +19,7 @@
* %rcx : high 64 bits of new value
* %al : Operation successful
*/
-ENTRY(this_cpu_cmpxchg16b_emu)
+SYM_FUNC_START(this_cpu_cmpxchg16b_emu)

#
# Emulate 'cmpxchg16b %gs:(%rsi)' except we return the result in %al not
@@ -50,4 +50,4 @@ ENTRY(this_cpu_cmpxchg16b_emu)
xor %al,%al
ret

-ENDPROC(this_cpu_cmpxchg16b_emu)
+SYM_FUNC_END(this_cpu_cmpxchg16b_emu)
diff --git a/arch/x86/lib/cmpxchg8b_emu.S b/arch/x86/lib/cmpxchg8b_emu.S
index 03a186fc06ea..77aa18db3968 100644
--- a/arch/x86/lib/cmpxchg8b_emu.S
+++ b/arch/x86/lib/cmpxchg8b_emu.S
@@ -19,7 +19,7 @@
* %ebx : low 32 bits of new value
* %ecx : high 32 bits of new value
*/
-ENTRY(cmpxchg8b_emu)
+SYM_FUNC_START(cmpxchg8b_emu)

#
# Emulate 'cmpxchg8b (%esi)' on UP except we don't
@@ -48,5 +48,5 @@ ENTRY(cmpxchg8b_emu)
popfl
ret

-ENDPROC(cmpxchg8b_emu)
+SYM_FUNC_END(cmpxchg8b_emu)
EXPORT_SYMBOL(cmpxchg8b_emu)
diff --git a/arch/x86/lib/copy_page_64.S b/arch/x86/lib/copy_page_64.S
index f505870bd93b..2402d4c489d2 100644
--- a/arch/x86/lib/copy_page_64.S
+++ b/arch/x86/lib/copy_page_64.S
@@ -13,12 +13,12 @@
* prefetch distance based on SMP/UP.
*/
ALIGN
-ENTRY(copy_page)
+SYM_FUNC_START(copy_page)
ALTERNATIVE "jmp copy_page_regs", "", X86_FEATURE_REP_GOOD
movl $4096/8, %ecx
rep movsq
ret
-ENDPROC(copy_page)
+SYM_FUNC_END(copy_page)
EXPORT_SYMBOL(copy_page)

SYM_FUNC_START_LOCAL(copy_page_regs)
diff --git a/arch/x86/lib/copy_user_64.S b/arch/x86/lib/copy_user_64.S
index 020f75cc8cf6..5e9e80c05a97 100644
--- a/arch/x86/lib/copy_user_64.S
+++ b/arch/x86/lib/copy_user_64.S
@@ -29,7 +29,7 @@
* Output:
* eax uncopied bytes or 0 if successful.
*/
-ENTRY(copy_user_generic_unrolled)
+SYM_FUNC_START(copy_user_generic_unrolled)
ASM_STAC
cmpl $8,%edx
jb 20f /* less then 8 bytes, go to byte copy loop */
@@ -112,7 +112,7 @@ ENTRY(copy_user_generic_unrolled)
_ASM_EXTABLE(19b,40b)
_ASM_EXTABLE(21b,50b)
_ASM_EXTABLE(22b,50b)
-ENDPROC(copy_user_generic_unrolled)
+SYM_FUNC_END(copy_user_generic_unrolled)
EXPORT_SYMBOL(copy_user_generic_unrolled)

/* Some CPUs run faster using the string copy instructions.
@@ -133,7 +133,7 @@ EXPORT_SYMBOL(copy_user_generic_unrolled)
* Output:
* eax uncopied bytes or 0 if successful.
*/
-ENTRY(copy_user_generic_string)
+SYM_FUNC_START(copy_user_generic_string)
ASM_STAC
cmpl $8,%edx
jb 2f /* less than 8 bytes, go to byte copy loop */
@@ -158,7 +158,7 @@ ENTRY(copy_user_generic_string)

_ASM_EXTABLE(1b,11b)
_ASM_EXTABLE(3b,12b)
-ENDPROC(copy_user_generic_string)
+SYM_FUNC_END(copy_user_generic_string)
EXPORT_SYMBOL(copy_user_generic_string)

/*
@@ -173,7 +173,7 @@ EXPORT_SYMBOL(copy_user_generic_string)
* Output:
* eax uncopied bytes or 0 if successful.
*/
-ENTRY(copy_user_enhanced_fast_string)
+SYM_FUNC_START(copy_user_enhanced_fast_string)
ASM_STAC
cmpl $64,%edx
jb .L_copy_short_string /* less then 64 bytes, avoid the costly 'rep' */
@@ -190,7 +190,7 @@ ENTRY(copy_user_enhanced_fast_string)
.previous

_ASM_EXTABLE(1b,12b)
-ENDPROC(copy_user_enhanced_fast_string)
+SYM_FUNC_END(copy_user_enhanced_fast_string)
EXPORT_SYMBOL(copy_user_enhanced_fast_string)

/*
@@ -202,7 +202,7 @@ EXPORT_SYMBOL(copy_user_enhanced_fast_string)
* - Require 8-byte alignment when size is 8 bytes or larger.
* - Require 4-byte alignment when size is 4 bytes.
*/
-ENTRY(__copy_user_nocache)
+SYM_FUNC_START(__copy_user_nocache)
ASM_STAC

/* If size is less than 8 bytes, go to 4-byte copy */
@@ -341,5 +341,5 @@ ENTRY(__copy_user_nocache)
_ASM_EXTABLE(31b,.L_fixup_4b_copy)
_ASM_EXTABLE(40b,.L_fixup_1b_copy)
_ASM_EXTABLE(41b,.L_fixup_1b_copy)
-ENDPROC(__copy_user_nocache)
+SYM_FUNC_END(__copy_user_nocache)
EXPORT_SYMBOL(__copy_user_nocache)
diff --git a/arch/x86/lib/csum-copy_64.S b/arch/x86/lib/csum-copy_64.S
index 45a53dfe1859..523e4964078f 100644
--- a/arch/x86/lib/csum-copy_64.S
+++ b/arch/x86/lib/csum-copy_64.S
@@ -45,7 +45,7 @@
.endm


-ENTRY(csum_partial_copy_generic)
+SYM_FUNC_START(csum_partial_copy_generic)
cmpl $3*64, %edx
jle .Lignore

@@ -221,4 +221,4 @@ ENTRY(csum_partial_copy_generic)
jz .Lende
movl $-EFAULT, (%rax)
jmp .Lende
-ENDPROC(csum_partial_copy_generic)
+SYM_FUNC_END(csum_partial_copy_generic)
diff --git a/arch/x86/lib/getuser.S b/arch/x86/lib/getuser.S
index a5d7fe7fe401..71dd96676194 100644
--- a/arch/x86/lib/getuser.S
+++ b/arch/x86/lib/getuser.S
@@ -36,7 +36,7 @@
#include <asm/export.h>

.text
-ENTRY(__get_user_1)
+SYM_FUNC_START(__get_user_1)
mov PER_CPU_VAR(current_task), %_ASM_DX
cmp TASK_addr_limit(%_ASM_DX),%_ASM_AX
jae bad_get_user
@@ -47,10 +47,10 @@ ENTRY(__get_user_1)
xor %eax,%eax
ASM_CLAC
ret
-ENDPROC(__get_user_1)
+SYM_FUNC_END(__get_user_1)
EXPORT_SYMBOL(__get_user_1)

-ENTRY(__get_user_2)
+SYM_FUNC_START(__get_user_2)
add $1,%_ASM_AX
jc bad_get_user
mov PER_CPU_VAR(current_task), %_ASM_DX
@@ -63,10 +63,10 @@ ENTRY(__get_user_2)
xor %eax,%eax
ASM_CLAC
ret
-ENDPROC(__get_user_2)
+SYM_FUNC_END(__get_user_2)
EXPORT_SYMBOL(__get_user_2)

-ENTRY(__get_user_4)
+SYM_FUNC_START(__get_user_4)
add $3,%_ASM_AX
jc bad_get_user
mov PER_CPU_VAR(current_task), %_ASM_DX
@@ -79,10 +79,10 @@ ENTRY(__get_user_4)
xor %eax,%eax
ASM_CLAC
ret
-ENDPROC(__get_user_4)
+SYM_FUNC_END(__get_user_4)
EXPORT_SYMBOL(__get_user_4)

-ENTRY(__get_user_8)
+SYM_FUNC_START(__get_user_8)
#ifdef CONFIG_X86_64
add $7,%_ASM_AX
jc bad_get_user
@@ -111,7 +111,7 @@ ENTRY(__get_user_8)
ASM_CLAC
ret
#endif
-ENDPROC(__get_user_8)
+SYM_FUNC_END(__get_user_8)
EXPORT_SYMBOL(__get_user_8)


diff --git a/arch/x86/lib/hweight.S b/arch/x86/lib/hweight.S
index a14f9939c365..dbf8cc97b7f5 100644
--- a/arch/x86/lib/hweight.S
+++ b/arch/x86/lib/hweight.S
@@ -8,7 +8,7 @@
* unsigned int __sw_hweight32(unsigned int w)
* %rdi: w
*/
-ENTRY(__sw_hweight32)
+SYM_FUNC_START(__sw_hweight32)

#ifdef CONFIG_X86_64
movl %edi, %eax # w
@@ -33,10 +33,10 @@ ENTRY(__sw_hweight32)
shrl $24, %eax # w = w_tmp >> 24
__ASM_SIZE(pop,) %__ASM_REG(dx)
ret
-ENDPROC(__sw_hweight32)
+SYM_FUNC_END(__sw_hweight32)
EXPORT_SYMBOL(__sw_hweight32)

-ENTRY(__sw_hweight64)
+SYM_FUNC_START(__sw_hweight64)
#ifdef CONFIG_X86_64
pushq %rdi
pushq %rdx
@@ -79,5 +79,5 @@ ENTRY(__sw_hweight64)
popl %ecx
ret
#endif
-ENDPROC(__sw_hweight64)
+SYM_FUNC_END(__sw_hweight64)
EXPORT_SYMBOL(__sw_hweight64)
diff --git a/arch/x86/lib/iomap_copy_64.S b/arch/x86/lib/iomap_copy_64.S
index 33147fef3452..2246fbf32fa8 100644
--- a/arch/x86/lib/iomap_copy_64.S
+++ b/arch/x86/lib/iomap_copy_64.S
@@ -20,8 +20,8 @@
/*
* override generic version in lib/iomap_copy.c
*/
-ENTRY(__iowrite32_copy)
+SYM_FUNC_START(__iowrite32_copy)
movl %edx,%ecx
rep movsd
ret
-ENDPROC(__iowrite32_copy)
+SYM_FUNC_END(__iowrite32_copy)
diff --git a/arch/x86/lib/memcpy_64.S b/arch/x86/lib/memcpy_64.S
index 728703c47d58..9bec63e212a8 100644
--- a/arch/x86/lib/memcpy_64.S
+++ b/arch/x86/lib/memcpy_64.S
@@ -188,7 +188,7 @@ SYM_FUNC_END(memcpy_orig)
* Note that we only catch machine checks when reading the source addresses.
* Writes to target are posted and don't generate machine checks.
*/
-ENTRY(memcpy_mcsafe_unrolled)
+SYM_FUNC_START(memcpy_mcsafe_unrolled)
cmpl $8, %edx
/* Less than 8 bytes? Go to byte copy loop */
jb .L_no_whole_words
@@ -276,7 +276,7 @@ ENTRY(memcpy_mcsafe_unrolled)
.L_done_memcpy_trap:
xorq %rax, %rax
ret
-ENDPROC(memcpy_mcsafe_unrolled)
+SYM_FUNC_END(memcpy_mcsafe_unrolled)
EXPORT_SYMBOL_GPL(memcpy_mcsafe_unrolled)

.section .fixup, "ax"
diff --git a/arch/x86/lib/memmove_64.S b/arch/x86/lib/memmove_64.S
index 50c1648311b3..337830d7a59c 100644
--- a/arch/x86/lib/memmove_64.S
+++ b/arch/x86/lib/memmove_64.S
@@ -27,7 +27,7 @@
.weak memmove

SYM_FUNC_START_ALIAS(memmove)
-ENTRY(__memmove)
+SYM_FUNC_START(__memmove)

/* Handle more 32 bytes in loop */
mov %rdi, %rax
@@ -207,7 +207,7 @@ ENTRY(__memmove)
movb %r11b, (%rdi)
13:
retq
-ENDPROC(__memmove)
+SYM_FUNC_END(__memmove)
SYM_FUNC_END_ALIAS(memmove)
EXPORT_SYMBOL(__memmove)
EXPORT_SYMBOL(memmove)
diff --git a/arch/x86/lib/memset_64.S b/arch/x86/lib/memset_64.S
index 564abf9ecedb..9ff15ee404a4 100644
--- a/arch/x86/lib/memset_64.S
+++ b/arch/x86/lib/memset_64.S
@@ -20,7 +20,7 @@
* rax original destination
*/
SYM_FUNC_START_ALIAS(memset)
-ENTRY(__memset)
+SYM_FUNC_START(__memset)
/*
* Some CPUs support enhanced REP MOVSB/STOSB feature. It is recommended
* to use it when possible. If not available, use fast string instructions.
@@ -43,7 +43,7 @@ ENTRY(__memset)
rep stosb
movq %r9,%rax
ret
-ENDPROC(__memset)
+SYM_FUNC_END(__memset)
SYM_FUNC_END_ALIAS(memset)
EXPORT_SYMBOL(memset)
EXPORT_SYMBOL(__memset)
diff --git a/arch/x86/lib/msr-reg.S b/arch/x86/lib/msr-reg.S
index ed33cbab3958..a2b9caa5274c 100644
--- a/arch/x86/lib/msr-reg.S
+++ b/arch/x86/lib/msr-reg.S
@@ -12,7 +12,7 @@
*
*/
.macro op_safe_regs op
-ENTRY(\op\()_safe_regs)
+SYM_FUNC_START(\op\()_safe_regs)
pushq %rbx
pushq %r12
movq %rdi, %r10 /* Save pointer */
@@ -41,13 +41,13 @@ ENTRY(\op\()_safe_regs)
jmp 2b

_ASM_EXTABLE(1b, 3b)
-ENDPROC(\op\()_safe_regs)
+SYM_FUNC_END(\op\()_safe_regs)
.endm

#else /* X86_32 */

.macro op_safe_regs op
-ENTRY(\op\()_safe_regs)
+SYM_FUNC_START(\op\()_safe_regs)
pushl %ebx
pushl %ebp
pushl %esi
@@ -83,7 +83,7 @@ ENTRY(\op\()_safe_regs)
jmp 2b

_ASM_EXTABLE(1b, 3b)
-ENDPROC(\op\()_safe_regs)
+SYM_FUNC_END(\op\()_safe_regs)
.endm

#endif
diff --git a/arch/x86/lib/putuser.S b/arch/x86/lib/putuser.S
index 8234d8559385..9ec0f34a8541 100644
--- a/arch/x86/lib/putuser.S
+++ b/arch/x86/lib/putuser.S
@@ -36,7 +36,7 @@
ret

.text
-ENTRY(__put_user_1)
+SYM_FUNC_START(__put_user_1)
ENTER
cmp TASK_addr_limit(%_ASM_BX),%_ASM_CX
jae bad_put_user
@@ -44,10 +44,10 @@ ENTRY(__put_user_1)
1: movb %al,(%_ASM_CX)
xor %eax,%eax
EXIT
-ENDPROC(__put_user_1)
+SYM_FUNC_END(__put_user_1)
EXPORT_SYMBOL(__put_user_1)

-ENTRY(__put_user_2)
+SYM_FUNC_START(__put_user_2)
ENTER
mov TASK_addr_limit(%_ASM_BX),%_ASM_BX
sub $1,%_ASM_BX
@@ -57,10 +57,10 @@ ENTRY(__put_user_2)
2: movw %ax,(%_ASM_CX)
xor %eax,%eax
EXIT
-ENDPROC(__put_user_2)
+SYM_FUNC_END(__put_user_2)
EXPORT_SYMBOL(__put_user_2)

-ENTRY(__put_user_4)
+SYM_FUNC_START(__put_user_4)
ENTER
mov TASK_addr_limit(%_ASM_BX),%_ASM_BX
sub $3,%_ASM_BX
@@ -70,10 +70,10 @@ ENTRY(__put_user_4)
3: movl %eax,(%_ASM_CX)
xor %eax,%eax
EXIT
-ENDPROC(__put_user_4)
+SYM_FUNC_END(__put_user_4)
EXPORT_SYMBOL(__put_user_4)

-ENTRY(__put_user_8)
+SYM_FUNC_START(__put_user_8)
ENTER
mov TASK_addr_limit(%_ASM_BX),%_ASM_BX
sub $7,%_ASM_BX
@@ -86,7 +86,7 @@ ENTRY(__put_user_8)
#endif
xor %eax,%eax
EXIT
-ENDPROC(__put_user_8)
+SYM_FUNC_END(__put_user_8)
EXPORT_SYMBOL(__put_user_8)

SYM_CODE_START_LOCAL(bad_put_user)
diff --git a/arch/x86/lib/retpoline.S b/arch/x86/lib/retpoline.S
index c909961e678a..363ec132df7e 100644
--- a/arch/x86/lib/retpoline.S
+++ b/arch/x86/lib/retpoline.S
@@ -11,11 +11,11 @@
.macro THUNK reg
.section .text.__x86.indirect_thunk

-ENTRY(__x86_indirect_thunk_\reg)
+SYM_FUNC_START(__x86_indirect_thunk_\reg)
CFI_STARTPROC
JMP_NOSPEC %\reg
CFI_ENDPROC
-ENDPROC(__x86_indirect_thunk_\reg)
+SYM_FUNC_END(__x86_indirect_thunk_\reg)
.endm

/*
diff --git a/arch/x86/lib/rwsem.S b/arch/x86/lib/rwsem.S
index dc2ab6ea6768..dcd5c997b068 100644
--- a/arch/x86/lib/rwsem.S
+++ b/arch/x86/lib/rwsem.S
@@ -86,7 +86,7 @@
#endif

/* Fix up special calling conventions */
-ENTRY(call_rwsem_down_read_failed)
+SYM_FUNC_START(call_rwsem_down_read_failed)
FRAME_BEGIN
save_common_regs
__ASM_SIZE(push,) %__ASM_REG(dx)
@@ -96,9 +96,9 @@ ENTRY(call_rwsem_down_read_failed)
restore_common_regs
FRAME_END
ret
-ENDPROC(call_rwsem_down_read_failed)
+SYM_FUNC_END(call_rwsem_down_read_failed)

-ENTRY(call_rwsem_down_read_failed_killable)
+SYM_FUNC_START(call_rwsem_down_read_failed_killable)
FRAME_BEGIN
save_common_regs
__ASM_SIZE(push,) %__ASM_REG(dx)
@@ -108,9 +108,9 @@ ENTRY(call_rwsem_down_read_failed_killable)
restore_common_regs
FRAME_END
ret
-ENDPROC(call_rwsem_down_read_failed_killable)
+SYM_FUNC_END(call_rwsem_down_read_failed_killable)

-ENTRY(call_rwsem_down_write_failed)
+SYM_FUNC_START(call_rwsem_down_write_failed)
FRAME_BEGIN
save_common_regs
movq %rax,%rdi
@@ -118,9 +118,9 @@ ENTRY(call_rwsem_down_write_failed)
restore_common_regs
FRAME_END
ret
-ENDPROC(call_rwsem_down_write_failed)
+SYM_FUNC_END(call_rwsem_down_write_failed)

-ENTRY(call_rwsem_down_write_failed_killable)
+SYM_FUNC_START(call_rwsem_down_write_failed_killable)
FRAME_BEGIN
save_common_regs
movq %rax,%rdi
@@ -128,9 +128,9 @@ ENTRY(call_rwsem_down_write_failed_killable)
restore_common_regs
FRAME_END
ret
-ENDPROC(call_rwsem_down_write_failed_killable)
+SYM_FUNC_END(call_rwsem_down_write_failed_killable)

-ENTRY(call_rwsem_wake)
+SYM_FUNC_START(call_rwsem_wake)
FRAME_BEGIN
/* do nothing if still outstanding active readers */
__ASM_HALF_SIZE(dec) %__ASM_HALF_REG(dx)
@@ -141,9 +141,9 @@ ENTRY(call_rwsem_wake)
restore_common_regs
1: FRAME_END
ret
-ENDPROC(call_rwsem_wake)
+SYM_FUNC_END(call_rwsem_wake)

-ENTRY(call_rwsem_downgrade_wake)
+SYM_FUNC_START(call_rwsem_downgrade_wake)
FRAME_BEGIN
save_common_regs
__ASM_SIZE(push,) %__ASM_REG(dx)
@@ -153,4 +153,4 @@ ENTRY(call_rwsem_downgrade_wake)
restore_common_regs
FRAME_END
ret
-ENDPROC(call_rwsem_downgrade_wake)
+SYM_FUNC_END(call_rwsem_downgrade_wake)
diff --git a/arch/x86/mm/mem_encrypt_boot.S b/arch/x86/mm/mem_encrypt_boot.S
index 40a6085063d6..2c0a6fbd4fe8 100644
--- a/arch/x86/mm/mem_encrypt_boot.S
+++ b/arch/x86/mm/mem_encrypt_boot.S
@@ -19,7 +19,7 @@

.text
.code64
-ENTRY(sme_encrypt_execute)
+SYM_FUNC_START(sme_encrypt_execute)

/*
* Entry parameters:
@@ -69,9 +69,9 @@ ENTRY(sme_encrypt_execute)
pop %rbp

ret
-ENDPROC(sme_encrypt_execute)
+SYM_FUNC_END(sme_encrypt_execute)

-ENTRY(__enc_copy)
+SYM_FUNC_START(__enc_copy)
/*
* Routine used to encrypt memory in place.
* This routine must be run outside of the kernel proper since
@@ -156,4 +156,4 @@ ENTRY(__enc_copy)

ret
.L__enc_copy_end:
-ENDPROC(__enc_copy)
+SYM_FUNC_END(__enc_copy)
diff --git a/arch/x86/platform/efi/efi_stub_64.S b/arch/x86/platform/efi/efi_stub_64.S
index 74628ec78f29..b1d2313fe3bf 100644
--- a/arch/x86/platform/efi/efi_stub_64.S
+++ b/arch/x86/platform/efi/efi_stub_64.S
@@ -39,7 +39,7 @@
mov %rsi, %cr0; \
mov (%rsp), %rsp

-ENTRY(efi_call)
+SYM_FUNC_START(efi_call)
pushq %rbp
movq %rsp, %rbp
SAVE_XMM
@@ -55,4 +55,4 @@ ENTRY(efi_call)
RESTORE_XMM
popq %rbp
ret
-ENDPROC(efi_call)
+SYM_FUNC_END(efi_call)
diff --git a/arch/x86/platform/efi/efi_thunk_64.S b/arch/x86/platform/efi/efi_thunk_64.S
index d677a7eb2d0a..3189f1394701 100644
--- a/arch/x86/platform/efi/efi_thunk_64.S
+++ b/arch/x86/platform/efi/efi_thunk_64.S
@@ -25,7 +25,7 @@

.text
.code64
-ENTRY(efi64_thunk)
+SYM_FUNC_START(efi64_thunk)
push %rbp
push %rbx

@@ -60,7 +60,7 @@ ENTRY(efi64_thunk)
pop %rbx
pop %rbp
retq
-ENDPROC(efi64_thunk)
+SYM_FUNC_END(efi64_thunk)

/*
* We run this function from the 1:1 mapping.
diff --git a/arch/x86/power/hibernate_asm_64.S b/arch/x86/power/hibernate_asm_64.S
index 44755a847856..c87ae08f9312 100644
--- a/arch/x86/power/hibernate_asm_64.S
+++ b/arch/x86/power/hibernate_asm_64.S
@@ -23,7 +23,7 @@
#include <asm/processor-flags.h>
#include <asm/frame.h>

-ENTRY(swsusp_arch_suspend)
+SYM_FUNC_START(swsusp_arch_suspend)
movq $saved_context, %rax
movq %rsp, pt_regs_sp(%rax)
movq %rbp, pt_regs_bp(%rax)
@@ -51,7 +51,7 @@ ENTRY(swsusp_arch_suspend)
call swsusp_save
FRAME_END
ret
-ENDPROC(swsusp_arch_suspend)
+SYM_FUNC_END(swsusp_arch_suspend)

SYM_CODE_START(restore_image)
/* prepare to jump to the image kernel */
@@ -103,7 +103,7 @@ SYM_CODE_END(core_restore_code)

/* code below belongs to the image kernel */
.align PAGE_SIZE
-ENTRY(restore_registers)
+SYM_FUNC_START(restore_registers)
/* go back to the original page tables */
movq %r9, %cr3

@@ -145,4 +145,4 @@ ENTRY(restore_registers)
movq %rax, in_suspend(%rip)

ret
-ENDPROC(restore_registers)
+SYM_FUNC_END(restore_registers)
diff --git a/arch/x86/xen/xen-asm.S b/arch/x86/xen/xen-asm.S
index 8019edd0125c..d7bf6d5cfcb9 100644
--- a/arch/x86/xen/xen-asm.S
+++ b/arch/x86/xen/xen-asm.S
@@ -18,7 +18,7 @@
* event status with one and operation. If there are pending events,
* then enter the hypervisor to get them handled.
*/
-ENTRY(xen_irq_enable_direct)
+SYM_FUNC_START(xen_irq_enable_direct)
FRAME_BEGIN
/* Unmask events */
movb $0, PER_CPU_VAR(xen_vcpu_info) + XEN_vcpu_info_mask
@@ -37,17 +37,17 @@ ENTRY(xen_irq_enable_direct)
1:
FRAME_END
ret
- ENDPROC(xen_irq_enable_direct)
+SYM_FUNC_END(xen_irq_enable_direct)


/*
* Disabling events is simply a matter of making the event mask
* non-zero.
*/
-ENTRY(xen_irq_disable_direct)
+SYM_FUNC_START(xen_irq_disable_direct)
movb $1, PER_CPU_VAR(xen_vcpu_info) + XEN_vcpu_info_mask
ret
-ENDPROC(xen_irq_disable_direct)
+SYM_FUNC_END(xen_irq_disable_direct)

/*
* (xen_)save_fl is used to get the current interrupt enable status.
@@ -58,12 +58,12 @@ ENDPROC(xen_irq_disable_direct)
* undefined. We need to toggle the state of the bit, because Xen and
* x86 use opposite senses (mask vs enable).
*/
-ENTRY(xen_save_fl_direct)
+SYM_FUNC_START(xen_save_fl_direct)
testb $0xff, PER_CPU_VAR(xen_vcpu_info) + XEN_vcpu_info_mask
setz %ah
addb %ah, %ah
ret
- ENDPROC(xen_save_fl_direct)
+SYM_FUNC_END(xen_save_fl_direct)


/*
@@ -73,7 +73,7 @@ ENTRY(xen_save_fl_direct)
* interrupt mask state, it checks for unmasked pending events and
* enters the hypervisor to get them delivered if so.
*/
-ENTRY(xen_restore_fl_direct)
+SYM_FUNC_START(xen_restore_fl_direct)
FRAME_BEGIN
#ifdef CONFIG_X86_64
testw $X86_EFLAGS_IF, %di
@@ -94,14 +94,14 @@ ENTRY(xen_restore_fl_direct)
1:
FRAME_END
ret
- ENDPROC(xen_restore_fl_direct)
+SYM_FUNC_END(xen_restore_fl_direct)


/*
* Force an event check by making a hypercall, but preserve regs
* before making the call.
*/
-ENTRY(check_events)
+SYM_FUNC_START(check_events)
FRAME_BEGIN
#ifdef CONFIG_X86_32
push %eax
@@ -134,4 +134,4 @@ ENTRY(check_events)
#endif
FRAME_END
ret
-ENDPROC(check_events)
+SYM_FUNC_END(check_events)
diff --git a/arch/x86/xen/xen-asm_64.S b/arch/x86/xen/xen-asm_64.S
index 5a3f5c18cd0c..dada73db402a 100644
--- a/arch/x86/xen/xen-asm_64.S
+++ b/arch/x86/xen/xen-asm_64.S
@@ -123,7 +123,7 @@ SYM_CODE_END(xen_sysret64)
*/

/* Normal 64-bit system call target */
-ENTRY(xen_syscall_target)
+SYM_FUNC_START(xen_syscall_target)
popq %rcx
popq %r11

@@ -136,12 +136,12 @@ ENTRY(xen_syscall_target)
movq $__USER_CS, 1*8(%rsp)

jmp entry_SYSCALL_64_after_hwframe
-ENDPROC(xen_syscall_target)
+SYM_FUNC_END(xen_syscall_target)

#ifdef CONFIG_IA32_EMULATION

/* 32-bit compat syscall target */
-ENTRY(xen_syscall32_target)
+SYM_FUNC_START(xen_syscall32_target)
popq %rcx
popq %r11

@@ -154,25 +154,25 @@ ENTRY(xen_syscall32_target)
movq $__USER32_CS, 1*8(%rsp)

jmp entry_SYSCALL_compat_after_hwframe
-ENDPROC(xen_syscall32_target)
+SYM_FUNC_END(xen_syscall32_target)

/* 32-bit compat sysenter target */
-ENTRY(xen_sysenter_target)
+SYM_FUNC_START(xen_sysenter_target)
mov 0*8(%rsp), %rcx
mov 1*8(%rsp), %r11
mov 5*8(%rsp), %rsp
jmp entry_SYSENTER_compat
-ENDPROC(xen_sysenter_target)
+SYM_FUNC_END(xen_sysenter_target)

#else /* !CONFIG_IA32_EMULATION */

SYM_FUNC_START_ALIAS(xen_syscall32_target)
-ENTRY(xen_sysenter_target)
+SYM_FUNC_START(xen_sysenter_target)
lea 16(%rsp), %rsp /* strip %rcx, %r11 */
mov $-ENOSYS, %rax
pushq $0
jmp hypercall_iret
-ENDPROC(xen_sysenter_target)
+SYM_FUNC_END(xen_sysenter_target)
SYM_FUNC_END_ALIAS(xen_syscall32_target)

#endif /* CONFIG_IA32_EMULATION */
diff --git a/include/linux/linkage.h b/include/linux/linkage.h
index e920ffa2a943..a57da818d88f 100644
--- a/include/linux/linkage.h
+++ b/include/linux/linkage.h
@@ -105,11 +105,13 @@

/* === DEPRECATED annotations === */

+#ifndef CONFIG_X86_64
#ifndef ENTRY
/* deprecated, use SYM_FUNC_START */
#define ENTRY(name) \
SYM_FUNC_START(name)
#endif
+#endif /* CONFIG_X86_64 */
#endif /* LINKER_SCRIPT */

#ifndef WEAK
@@ -124,6 +126,7 @@
.size name, .-name
#endif

+#ifndef CONFIG_X86_64
/* If symbol 'name' is treated as a subroutine (gets called, and returns)
* then please use ENDPROC to mark 'name' as STT_FUNC for the benefit of
* static analysis tools such as stack depth analyzer.
@@ -133,6 +136,7 @@
#define ENDPROC(name) \
SYM_FUNC_END(name)
#endif
+#endif /* CONFIG_X86_64 */

/* === generic annotations === */

--
2.16.3


2018-05-18 09:21:22

by Jiri Slaby

[permalink] [raw]
Subject: [PATCH v6 15/28] x86/asm/purgatory: start using annotations

The purgatory code used no annotations at all. So include linux/linkage.h and
annotate everything:
* code by SYM_CODE_*
* data by SYM_DATA_*
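
A minimal sketch of the resulting style (illustrative only, not a hunk
from this patch): a bare global label such as

	.globl entry64
entry64:

becomes

SYM_CODE_START(entry64)
	...
SYM_CODE_END(entry64)

and data gets the analogous SYM_DATA_START()/SYM_DATA_END() pair.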

Signed-off-by: Jiri Slaby <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: [email protected]
---
arch/x86/purgatory/entry64.S | 21 ++++++++++++---------
arch/x86/purgatory/setup-x86_64.S | 14 ++++++++------
arch/x86/purgatory/stack.S | 7 ++++---
3 files changed, 24 insertions(+), 18 deletions(-)

diff --git a/arch/x86/purgatory/entry64.S b/arch/x86/purgatory/entry64.S
index d1a4291d3568..c51e3c999e38 100644
--- a/arch/x86/purgatory/entry64.S
+++ b/arch/x86/purgatory/entry64.S
@@ -10,13 +10,13 @@
* Version 2. See the file COPYING for more details.
*/

+#include <linux/linkage.h>
+
.text
.balign 16
.code64
- .globl entry64, entry64_regs
-

-entry64:
+SYM_CODE_START(entry64)
/* Setup a gdt that should be preserved */
lgdt gdt(%rip)

@@ -56,10 +56,11 @@ new_cs_exit:

/* Jump to the new code... */
jmpq *rip(%rip)
+SYM_CODE_END(entry64)

.section ".rodata"
.balign 4
-entry64_regs:
+SYM_DATA_START(entry64_regs)
rax: .quad 0x0
rcx: .quad 0x0
rdx: .quad 0x0
@@ -77,12 +78,12 @@ r13: .quad 0x0
r14: .quad 0x0
r15: .quad 0x0
rip: .quad 0x0
- .size entry64_regs, . - entry64_regs
+SYM_DATA_END(entry64_regs)

/* GDT */
.section ".rodata"
.balign 16
-gdt:
+SYM_DATA_START_LOCAL(gdt)
/* 0x00 unusable segment
* 0x08 unused
* so use them as gdt ptr
@@ -96,6 +97,8 @@ gdt:

/* 0x18 4GB flat data segment */
.word 0xFFFF, 0x0000, 0x9200, 0x00CF
-gdt_end:
-stack: .quad 0, 0
-stack_init:
+SYM_DATA_END_LABEL(gdt, SYM_L_LOCAL, gdt_end)
+
+SYM_DATA_START_LOCAL(stack)
+ .quad 0, 0
+SYM_DATA_END_LABEL(stack, SYM_L_LOCAL, stack_init)
diff --git a/arch/x86/purgatory/setup-x86_64.S b/arch/x86/purgatory/setup-x86_64.S
index dfae9b9e60b5..f0de104d3f3a 100644
--- a/arch/x86/purgatory/setup-x86_64.S
+++ b/arch/x86/purgatory/setup-x86_64.S
@@ -9,14 +9,14 @@
* This source code is licensed under the GNU General Public License,
* Version 2. See the file COPYING for more details.
*/
+#include <linux/linkage.h>
#include <asm/purgatory.h>

.text
- .globl purgatory_start
.balign 16
-purgatory_start:
.code64

+SYM_CODE_START(purgatory_start)
/* Load a gdt so I know what the segment registers are */
lgdt gdt(%rip)

@@ -34,10 +34,12 @@ purgatory_start:
/* Call the C code */
call purgatory
jmp entry64
+SYM_CODE_END(purgatory_start)

.section ".rodata"
.balign 16
-gdt: /* 0x00 unusable segment
+SYM_DATA_START_LOCAL(gdt)
+ /* 0x00 unusable segment
* 0x08 unused
* so use them as the gdt ptr
*/
@@ -50,10 +52,10 @@ gdt: /* 0x00 unusable segment

/* 0x18 4GB flat data segment */
.word 0xFFFF, 0x0000, 0x9200, 0x00CF
-gdt_end:
+SYM_DATA_END_LABEL(gdt, SYM_L_LOCAL, gdt_end)

.bss
.balign 4096
-lstack:
+SYM_DATA_START_LOCAL(lstack)
.skip 4096
-lstack_end:
+SYM_DATA_END_LABEL(lstack, SYM_L_LOCAL, lstack_end)
diff --git a/arch/x86/purgatory/stack.S b/arch/x86/purgatory/stack.S
index 50a4147f91fb..987e6510a960 100644
--- a/arch/x86/purgatory/stack.S
+++ b/arch/x86/purgatory/stack.S
@@ -7,13 +7,14 @@
* Version 2. See the file COPYING for more details.
*/

+#include <linux/linkage.h>
+
/* A stack for the loaded kernel.
* Separate and in the data section so it can be prepopulated.
*/
.data
.balign 4096
- .globl stack, stack_end

-stack:
+SYM_DATA_START(stack)
.skip 4096
-stack_end:
+SYM_DATA_END_LABEL(stack, SYM_L_GLOBAL, stack_end)
--
2.16.3


2018-05-18 09:21:23

by Jiri Slaby

[permalink] [raw]
Subject: [PATCH v6 22/28] x86_64/asm: add ENDs to some functions and relabel with SYM_CODE_*

All these are functions which are invoked from elsewhere, but they are
not typical C functions. So we annotate them using the new
SYM_CODE_START. None of them was balanced with any END, so mark their
ends with SYM_CODE_END appropriately too.
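
Illustratively, the pattern applied below looks roughly like this (see
e.g. the startup_64 hunk):

-ENTRY(startup_64)
+SYM_CODE_START(startup_64)
	...
	jmp	*%rax
+SYM_CODE_END(startup_64)

i.e. the opening annotation is switched to SYM_CODE_START and the
previously missing closing SYM_CODE_END is added.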

Signed-off-by: Jiri Slaby <[email protected]>
Reviewed-by: Boris Ostrovsky <[email protected]> [xen bits]
Cc: Boris Ostrovsky <[email protected]>
Cc: Juergen Gross <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: [email protected]
Cc: [email protected]
---
arch/x86/boot/compressed/head_64.S | 6 ++++--
arch/x86/platform/olpc/xo1-wakeup.S | 3 ++-
arch/x86/power/hibernate_asm_64.S | 6 ++++--
arch/x86/realmode/rm/reboot.S | 3 ++-
arch/x86/realmode/rm/trampoline_64.S | 10 +++++++---
arch/x86/realmode/rm/wakeup_asm.S | 3 ++-
arch/x86/xen/xen-asm_64.S | 6 ++++--
7 files changed, 25 insertions(+), 12 deletions(-)

diff --git a/arch/x86/boot/compressed/head_64.S b/arch/x86/boot/compressed/head_64.S
index a1a92f6fc8e4..d056c789f90d 100644
--- a/arch/x86/boot/compressed/head_64.S
+++ b/arch/x86/boot/compressed/head_64.S
@@ -250,7 +250,7 @@ ENDPROC(efi32_stub_entry)

.code64
.org 0x200
-ENTRY(startup_64)
+SYM_CODE_START(startup_64)
/*
* 64bit entry is 0x200 and it is ABI so immutable!
* We come here either from startup_32 or directly from a
@@ -400,6 +400,7 @@ trampoline_return:
*/
leaq relocated(%rbx), %rax
jmp *%rax
+SYM_CODE_END(startup_64)

#ifdef CONFIG_EFI_STUB

@@ -521,7 +522,7 @@ SYM_FUNC_END(relocated)
* ECX contains the base address of the trampoline memory.
* Non zero RDX on return means we need to enable 5-level paging.
*/
-ENTRY(trampoline_32bit_src)
+SYM_CODE_START(trampoline_32bit_src)
/* Set up data and stack segments */
movl $__KERNEL_DS, %eax
movl %eax, %ds
@@ -574,6 +575,7 @@ ENTRY(trampoline_32bit_src)
movl %eax, %cr0

lret
+SYM_CODE_END(trampoline_32bit_src)

.code64
SYM_FUNC_START_LOCAL_NOALIGN(paging_enabled)
diff --git a/arch/x86/platform/olpc/xo1-wakeup.S b/arch/x86/platform/olpc/xo1-wakeup.S
index 5fee3a2c2fd4..75f4faff8468 100644
--- a/arch/x86/platform/olpc/xo1-wakeup.S
+++ b/arch/x86/platform/olpc/xo1-wakeup.S
@@ -90,7 +90,7 @@ restore_registers:

ret

-ENTRY(do_olpc_suspend_lowlevel)
+SYM_CODE_START(do_olpc_suspend_lowlevel)
call save_processor_state
call save_registers

@@ -110,6 +110,7 @@ ret_point:
call restore_registers
call restore_processor_state
ret
+SYM_CODE_END(do_olpc_suspend_lowlevel)

.data
saved_gdt: .long 0,0
diff --git a/arch/x86/power/hibernate_asm_64.S b/arch/x86/power/hibernate_asm_64.S
index ce8da3a0412c..44755a847856 100644
--- a/arch/x86/power/hibernate_asm_64.S
+++ b/arch/x86/power/hibernate_asm_64.S
@@ -53,7 +53,7 @@ ENTRY(swsusp_arch_suspend)
ret
ENDPROC(swsusp_arch_suspend)

-ENTRY(restore_image)
+SYM_CODE_START(restore_image)
/* prepare to jump to the image kernel */
movq restore_jump_address(%rip), %r8
movq restore_cr3(%rip), %r9
@@ -68,9 +68,10 @@ ENTRY(restore_image)
/* jump to relocated restore code */
movq relocated_restore_code(%rip), %rcx
jmpq *%rcx
+SYM_CODE_END(restore_image)

/* code below has been relocated to a safe page */
-ENTRY(core_restore_code)
+SYM_CODE_START(core_restore_code)
/* switch to temporary page tables */
movq %rax, %cr3
/* flush TLB */
@@ -98,6 +99,7 @@ ENTRY(core_restore_code)
.Ldone:
/* jump to the restore_registers address from the image header */
jmpq *%r8
+SYM_CODE_END(core_restore_code)

/* code below belongs to the image kernel */
.align PAGE_SIZE
diff --git a/arch/x86/realmode/rm/reboot.S b/arch/x86/realmode/rm/reboot.S
index 424826afb501..f10515b10e0a 100644
--- a/arch/x86/realmode/rm/reboot.S
+++ b/arch/x86/realmode/rm/reboot.S
@@ -19,7 +19,7 @@
*/
.section ".text32", "ax"
.code32
-ENTRY(machine_real_restart_asm)
+SYM_CODE_START(machine_real_restart_asm)

#ifdef CONFIG_X86_64
/* Switch to trampoline GDT as it is guaranteed < 4 GiB */
@@ -63,6 +63,7 @@ SYM_INNER_LABEL(machine_real_restart_paging_off, SYM_L_GLOBAL)
movl %ecx, %gs
movl %ecx, %ss
ljmpw $8, $1f
+SYM_CODE_END(machine_real_restart_asm)

/*
* This is 16-bit protected mode code to disable paging and the cache,
diff --git a/arch/x86/realmode/rm/trampoline_64.S b/arch/x86/realmode/rm/trampoline_64.S
index 9e5f9ade43c8..408f81710ccd 100644
--- a/arch/x86/realmode/rm/trampoline_64.S
+++ b/arch/x86/realmode/rm/trampoline_64.S
@@ -38,7 +38,7 @@
.code16

.balign PAGE_SIZE
-ENTRY(trampoline_start)
+SYM_CODE_START(trampoline_start)
cli # We should be safe anyway
wbinvd

@@ -81,12 +81,14 @@ ENTRY(trampoline_start)
no_longmode:
hlt
jmp no_longmode
+SYM_CODE_END(trampoline_start)
+
#include "../kernel/verify_cpu.S"

.section ".text32","ax"
.code32
.balign 4
-ENTRY(startup_32)
+SYM_CODE_START(startup_32)
movl %edx, %ss
addl $pa_real_mode_base, %esp
movl %edx, %ds
@@ -140,13 +142,15 @@ ENTRY(startup_32)
* the new gdt/idt that has __KERNEL_CS with CS.L = 1.
*/
ljmpl $__KERNEL_CS, $pa_startup_64
+SYM_CODE_END(startup_32)

.section ".text64","ax"
.code64
.balign 4
-ENTRY(startup_64)
+SYM_CODE_START(startup_64)
# Now jump into the kernel using virtual addresses
jmpq *tr_start(%rip)
+SYM_CODE_END(startup_64)

.section ".rodata","a"
# Duplicate the global descriptor table
diff --git a/arch/x86/realmode/rm/wakeup_asm.S b/arch/x86/realmode/rm/wakeup_asm.S
index 0af6b30d3c68..7079913adbd2 100644
--- a/arch/x86/realmode/rm/wakeup_asm.S
+++ b/arch/x86/realmode/rm/wakeup_asm.S
@@ -37,7 +37,7 @@ SYM_DATA_END(wakeup_header)
.code16

.balign 16
-ENTRY(wakeup_start)
+SYM_CODE_START(wakeup_start)
cli
cld

@@ -135,6 +135,7 @@ ENTRY(wakeup_start)
#else
jmp trampoline_start
#endif
+SYM_CODE_END(wakeup_start)

bogus_real_magic:
1:
diff --git a/arch/x86/xen/xen-asm_64.S b/arch/x86/xen/xen-asm_64.S
index e8f6f482bb20..a69a171f7cea 100644
--- a/arch/x86/xen/xen-asm_64.S
+++ b/arch/x86/xen/xen-asm_64.S
@@ -84,11 +84,12 @@ hypercall_iret = hypercall_page + __HYPERVISOR_iret * 32
* r11 }<-- pushed by hypercall page
* rsp->rax }
*/
-ENTRY(xen_iret)
+SYM_CODE_START(xen_iret)
pushq $0
jmp hypercall_iret
+SYM_CODE_END(xen_iret)

-ENTRY(xen_sysret64)
+SYM_CODE_START(xen_sysret64)
/*
* We're already on the usermode stack at this point, but
* still with the kernel gs, so we can easily switch back
@@ -104,6 +105,7 @@ ENTRY(xen_sysret64)

pushq $VGCF_in_syscall
jmp hypercall_iret
+SYM_CODE_END(xen_sysret64)

/*
* Xen handles syscall callbacks much like ordinary exceptions, which
--
2.16.3


2018-05-18 09:21:26

by Jiri Slaby

[permalink] [raw]
Subject: [PATCH v6 14/28] xen/pvh: annotate data appropriately

Use the new SYM_DATA_START_LOCAL and SYM_DATA_END* macros to get:
0000 8 OBJECT LOCAL DEFAULT 6 gdt
0008 32 OBJECT LOCAL DEFAULT 6 gdt_start
0028 0 OBJECT LOCAL DEFAULT 6 gdt_end
0028 256 OBJECT LOCAL DEFAULT 6 early_stack
0128 0 OBJECT LOCAL DEFAULT 6 early_stack_end
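
The listing above shows the relevant OBJECT entries of the symbol
table. Assuming the object is built as arch/x86/xen/xen-pvh.o, it can
be reproduced with something like:

	readelf -sW arch/x86/xen/xen-pvh.o | grep OBJECT

(exact offsets and the section index depend on the build).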

Signed-off-by: Jiri Slaby <[email protected]>
Reviewed-by: Boris Ostrovsky <[email protected]>
Cc: Boris Ostrovsky <[email protected]>
Cc: Juergen Gross <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: [email protected]
Cc: [email protected]
---
arch/x86/xen/xen-pvh.S | 11 ++++++-----
1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/arch/x86/xen/xen-pvh.S b/arch/x86/xen/xen-pvh.S
index e1a5fbeae08d..52b28793a625 100644
--- a/arch/x86/xen/xen-pvh.S
+++ b/arch/x86/xen/xen-pvh.S
@@ -137,11 +137,12 @@ END(pvh_start_xen)

.section ".init.data","aw"
.balign 8
-gdt:
+SYM_DATA_START_LOCAL(gdt)
.word gdt_end - gdt_start
.long _pa(gdt_start)
.word 0
-gdt_start:
+SYM_DATA_END(gdt)
+SYM_DATA_START_LOCAL(gdt_start)
.quad 0x0000000000000000 /* NULL descriptor */
.quad 0x0000000000000000 /* reserved */
#ifdef CONFIG_X86_64
@@ -150,12 +151,12 @@ gdt_start:
.quad GDT_ENTRY(0xc09a, 0, 0xfffff) /* __KERNEL_CS */
#endif
.quad GDT_ENTRY(0xc092, 0, 0xfffff) /* __KERNEL_DS */
-gdt_end:
+SYM_DATA_END_LABEL(gdt_start, SYM_L_LOCAL, gdt_end)

.balign 4
-early_stack:
+SYM_DATA_START_LOCAL(early_stack)
.fill 256, 1, 0
-early_stack_end:
+SYM_DATA_END_LABEL(early_stack, SYM_L_LOCAL, early_stack_end)

ELFNOTE(Xen, XEN_ELFNOTE_PHYS32_ENTRY,
_ASM_PTR (pvh_start_xen - __START_KERNEL_map))
--
2.16.3


2018-05-18 09:21:36

by Jiri Slaby

[permalink] [raw]
Subject: [PATCH v6 23/28] x86_64/asm: change all ENTRY+END to SYM_CODE_*

Here, we change all assembly code which is marked using END (and not
ENDPROC) and switch it to the appropriate new markings, SYM_CODE_START
and SYM_CODE_END.
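
A minimal sketch of the conversion, here for symbols emitted from an
assembler macro (see the apicinterrupt3/idtentry hunks below):

 .macro apicinterrupt3 num sym do_sym
-ENTRY(\sym)
+SYM_CODE_START(\sym)
	...
-END(\sym)
+SYM_CODE_END(\sym)
 .endm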

Signed-off-by: Jiri Slaby <[email protected]>
Reviewed-by: Boris Ostrovsky <[email protected]> [xen bits]
Cc: Thomas Gleixner <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: [email protected]
Cc: Boris Ostrovsky <[email protected]>
Cc: Juergen Gross <[email protected]>
Cc: [email protected]
---
arch/x86/entry/entry_64.S | 56 ++++++++++++++++++++--------------------
arch/x86/entry/entry_64_compat.S | 8 +++---
arch/x86/kernel/ftrace_64.S | 4 +--
arch/x86/xen/xen-asm_64.S | 8 +++---
arch/x86/xen/xen-head.S | 8 +++---
5 files changed, 42 insertions(+), 42 deletions(-)

diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
index d2e2de100b16..6f85f43a4877 100644
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -46,11 +46,11 @@
.section .entry.text, "ax"

#ifdef CONFIG_PARAVIRT
-ENTRY(native_usergs_sysret64)
+SYM_CODE_START(native_usergs_sysret64)
UNWIND_HINT_EMPTY
swapgs
sysretq
-END(native_usergs_sysret64)
+SYM_CODE_END(native_usergs_sysret64)
#endif /* CONFIG_PARAVIRT */

.macro TRACE_IRQS_FLAGS flags:req
@@ -163,7 +163,7 @@ END(native_usergs_sysret64)
#define RSP_SCRATCH CPU_ENTRY_AREA_entry_stack + \
SIZEOF_entry_stack - 8 + CPU_ENTRY_AREA

-ENTRY(entry_SYSCALL_64_trampoline)
+SYM_CODE_START(entry_SYSCALL_64_trampoline)
UNWIND_HINT_EMPTY
swapgs

@@ -193,17 +193,17 @@ ENTRY(entry_SYSCALL_64_trampoline)
pushq %rdi
movq $entry_SYSCALL_64_stage2, %rdi
JMP_NOSPEC %rdi
-END(entry_SYSCALL_64_trampoline)
+SYM_CODE_END(entry_SYSCALL_64_trampoline)

.popsection

-ENTRY(entry_SYSCALL_64_stage2)
+SYM_CODE_START(entry_SYSCALL_64_stage2)
UNWIND_HINT_EMPTY
popq %rdi
jmp entry_SYSCALL_64_after_hwframe
-END(entry_SYSCALL_64_stage2)
+SYM_CODE_END(entry_SYSCALL_64_stage2)

-ENTRY(entry_SYSCALL_64)
+SYM_CODE_START(entry_SYSCALL_64)
UNWIND_HINT_EMPTY
/*
* Interrupts are off on entry.
@@ -336,13 +336,13 @@ syscall_return_via_sysret:
popq %rdi
popq %rsp
USERGS_SYSRET64
-END(entry_SYSCALL_64)
+SYM_CODE_END(entry_SYSCALL_64)

/*
* %rdi: prev task
* %rsi: next task
*/
-ENTRY(__switch_to_asm)
+SYM_CODE_START(__switch_to_asm)
UNWIND_HINT_FUNC
/*
* Save callee-saved registers
@@ -384,7 +384,7 @@ ENTRY(__switch_to_asm)
popq %rbp

jmp __switch_to
-END(__switch_to_asm)
+SYM_CODE_END(__switch_to_asm)

/*
* A newly forked process directly context switches into this address.
@@ -393,7 +393,7 @@ END(__switch_to_asm)
* rbx: kernel thread func (NULL for user thread)
* r12: kernel thread arg
*/
-ENTRY(ret_from_fork)
+SYM_CODE_START(ret_from_fork)
UNWIND_HINT_EMPTY
movq %rax, %rdi
call schedule_tail /* rdi: 'prev' task parameter */
@@ -419,14 +419,14 @@ ENTRY(ret_from_fork)
*/
movq $0, RAX(%rsp)
jmp 2b
-END(ret_from_fork)
+SYM_CODE_END(ret_from_fork)

/*
* Build the entry stubs with some assembler magic.
* We pack 1 stub into every 8-byte block.
*/
.align 8
-ENTRY(irq_entries_start)
+SYM_CODE_START(irq_entries_start)
vector=FIRST_EXTERNAL_VECTOR
.rept (FIRST_SYSTEM_VECTOR - FIRST_EXTERNAL_VECTOR)
UNWIND_HINT_IRET_REGS
@@ -435,7 +435,7 @@ ENTRY(irq_entries_start)
.align 8
vector=vector+1
.endr
-END(irq_entries_start)
+SYM_CODE_END(irq_entries_start)

.macro DEBUG_ENTRY_ASSERT_IRQS_OFF
#ifdef CONFIG_DEBUG_ENTRY
@@ -561,7 +561,7 @@ END(irq_entries_start)
* | return address |
* +----------------------------------------------------+
*/
-ENTRY(interrupt_entry)
+SYM_CODE_START(interrupt_entry)
UNWIND_HINT_FUNC
ASM_CLAC
cld
@@ -627,7 +627,7 @@ ENTRY(interrupt_entry)
TRACE_IRQS_OFF

ret
-END(interrupt_entry)
+SYM_CODE_END(interrupt_entry)


/* Interrupt entry/exit. */
@@ -832,7 +832,7 @@ SYM_CODE_END(common_interrupt)
* APIC interrupts.
*/
.macro apicinterrupt3 num sym do_sym
-ENTRY(\sym)
+SYM_CODE_START(\sym)
UNWIND_HINT_IRET_REGS
pushq $~(\num)
.Lcommon_\sym:
@@ -840,7 +840,7 @@ ENTRY(\sym)
UNWIND_HINT_REGS indirect=1
call \do_sym /* rdi points to pt_regs */
jmp ret_from_intr
-END(\sym)
+SYM_CODE_END(\sym)
.endm

/* Make sure APIC interrupt handlers end up in the irqentry section: */
@@ -902,7 +902,7 @@ apicinterrupt IRQ_WORK_VECTOR irq_work_interrupt smp_irq_work_interrupt
#define CPU_TSS_IST(x) PER_CPU_VAR(cpu_tss_rw) + (TSS_ist + ((x) - 1) * 8)

.macro idtentry sym do_sym has_error_code:req paranoid=0 shift_ist=-1
-ENTRY(\sym)
+SYM_CODE_START(\sym)
UNWIND_HINT_IRET_REGS offset=\has_error_code*8

/* Sanity check */
@@ -985,7 +985,7 @@ ENTRY(\sym)

jmp error_exit /* %ebx: no swapgs flag */
.endif
-END(\sym)
+SYM_CODE_END(\sym)
.endm

idtentry divide_error do_divide_error has_error_code=0
@@ -1102,7 +1102,7 @@ SYM_CODE_END(xen_do_hypervisor_callback)
* We distinguish between categories by comparing each saved segment register
* with its current contents: any discrepancy means we in category 1.
*/
-ENTRY(xen_failsafe_callback)
+SYM_CODE_START(xen_failsafe_callback)
UNWIND_HINT_EMPTY
movl %ds, %ecx
cmpw %cx, 0x10(%rsp)
@@ -1132,7 +1132,7 @@ ENTRY(xen_failsafe_callback)
PUSH_AND_CLEAR_REGS
ENCODE_FRAME_POINTER
jmp error_exit
-END(xen_failsafe_callback)
+SYM_CODE_END(xen_failsafe_callback)

apicinterrupt3 HYPERVISOR_CALLBACK_VECTOR \
xen_hvm_callback_vector xen_evtchn_do_upcall
@@ -1340,7 +1340,7 @@ SYM_CODE_END(error_exit)
* %r14: Used to save/restore the CR3 of the interrupted context
* when PAGE_TABLE_ISOLATION is in use. Do not clobber.
*/
-ENTRY(nmi)
+SYM_CODE_START(nmi)
UNWIND_HINT_IRET_REGS

/*
@@ -1673,15 +1673,15 @@ nmi_restore:
* about espfix64 on the way back to kernel mode.
*/
iretq
-END(nmi)
+SYM_CODE_END(nmi)

-ENTRY(ignore_sysret)
+SYM_CODE_START(ignore_sysret)
UNWIND_HINT_EMPTY
mov $-ENOSYS, %eax
sysret
-END(ignore_sysret)
+SYM_CODE_END(ignore_sysret)

-ENTRY(rewind_stack_do_exit)
+SYM_CODE_START(rewind_stack_do_exit)
UNWIND_HINT_FUNC
/* Prevent any naive code from trying to unwind to our caller. */
xorl %ebp, %ebp
@@ -1691,4 +1691,4 @@ ENTRY(rewind_stack_do_exit)
UNWIND_HINT_FUNC sp_offset=PTREGS_SIZE

call do_exit
-END(rewind_stack_do_exit)
+SYM_CODE_END(rewind_stack_do_exit)
diff --git a/arch/x86/entry/entry_64_compat.S b/arch/x86/entry/entry_64_compat.S
index d0880bef86c3..d03ddfc959e6 100644
--- a/arch/x86/entry/entry_64_compat.S
+++ b/arch/x86/entry/entry_64_compat.S
@@ -196,7 +196,7 @@ ENDPROC(entry_SYSENTER_compat)
* esp user stack
* 0(%esp) arg6
*/
-ENTRY(entry_SYSCALL_compat)
+SYM_CODE_START(entry_SYSCALL_compat)
/* Interrupts are off on entry. */
swapgs

@@ -311,7 +311,7 @@ sysret32_from_system_call:
xorl %r10d, %r10d
swapgs
sysretl
-END(entry_SYSCALL_compat)
+SYM_CODE_END(entry_SYSCALL_compat)

/*
* 32-bit legacy system call entry.
@@ -339,7 +339,7 @@ END(entry_SYSCALL_compat)
* edi arg5
* ebp arg6
*/
-ENTRY(entry_INT80_compat)
+SYM_CODE_START(entry_INT80_compat)
/*
* Interrupts are off on entry.
*/
@@ -414,4 +414,4 @@ ENTRY(entry_INT80_compat)
/* Go back to user mode. */
TRACE_IRQS_ON
jmp swapgs_restore_regs_and_return_to_usermode
-END(entry_INT80_compat)
+SYM_CODE_END(entry_INT80_compat)
diff --git a/arch/x86/kernel/ftrace_64.S b/arch/x86/kernel/ftrace_64.S
index 14df6cf07b7e..51970806c2df 100644
--- a/arch/x86/kernel/ftrace_64.S
+++ b/arch/x86/kernel/ftrace_64.S
@@ -319,7 +319,7 @@ ENTRY(ftrace_graph_caller)
retq
ENDPROC(ftrace_graph_caller)

-ENTRY(return_to_handler)
+SYM_CODE_START(return_to_handler)
UNWIND_HINT_EMPTY
subq $24, %rsp

@@ -335,5 +335,5 @@ ENTRY(return_to_handler)
movq (%rsp), %rax
addq $24, %rsp
JMP_NOSPEC %rdi
-END(return_to_handler)
+SYM_CODE_END(return_to_handler)
#endif
diff --git a/arch/x86/xen/xen-asm_64.S b/arch/x86/xen/xen-asm_64.S
index a69a171f7cea..5a3f5c18cd0c 100644
--- a/arch/x86/xen/xen-asm_64.S
+++ b/arch/x86/xen/xen-asm_64.S
@@ -19,11 +19,11 @@
#include <linux/linkage.h>

.macro xen_pv_trap name
-ENTRY(xen_\name)
+SYM_CODE_START(xen_\name)
pop %rcx
pop %r11
jmp \name
-END(xen_\name)
+SYM_CODE_END(xen_\name)
.endm

xen_pv_trap divide_error
@@ -56,7 +56,7 @@ xen_pv_trap entry_INT80_compat
xen_pv_trap hypervisor_callback

__INIT
-ENTRY(xen_early_idt_handler_array)
+SYM_CODE_START(xen_early_idt_handler_array)
i = 0
.rept NUM_EXCEPTION_VECTORS
pop %rcx
@@ -65,7 +65,7 @@ ENTRY(xen_early_idt_handler_array)
i = i + 1
.fill xen_early_idt_handler_array + i*XEN_EARLY_IDT_HANDLER_SIZE - ., 1, 0xcc
.endr
-END(xen_early_idt_handler_array)
+SYM_CODE_END(xen_early_idt_handler_array)
__FINIT

hypercall_iret = hypercall_page + __HYPERVISOR_iret * 32
diff --git a/arch/x86/xen/xen-head.S b/arch/x86/xen/xen-head.S
index 5077ead5e59c..32606eeec053 100644
--- a/arch/x86/xen/xen-head.S
+++ b/arch/x86/xen/xen-head.S
@@ -22,7 +22,7 @@

#ifdef CONFIG_XEN_PV
__INIT
-ENTRY(startup_xen)
+SYM_CODE_START(startup_xen)
UNWIND_HINT_EMPTY
cld

@@ -52,13 +52,13 @@ ENTRY(startup_xen)
#endif

jmp xen_start_kernel
-END(startup_xen)
+SYM_CODE_END(startup_xen)
__FINIT
#endif

.pushsection .text
.balign PAGE_SIZE
-ENTRY(hypercall_page)
+SYM_CODE_START(hypercall_page)
.rept (PAGE_SIZE / 32)
UNWIND_HINT_EMPTY
.skip 32
@@ -69,7 +69,7 @@ ENTRY(hypercall_page)
.type xen_hypercall_##n, @function; .size xen_hypercall_##n, 32
#include <asm/xen-hypercalls.h>
#undef HYPERCALL
-END(hypercall_page)
+SYM_CODE_END(hypercall_page)
.popsection

ELFNOTE(Xen, XEN_ELFNOTE_GUEST_OS, .asciz "linux")
--
2.16.3


2018-05-18 09:22:15

by Jiri Slaby

[permalink] [raw]
Subject: [PATCH v6 18/28] x86/asm/realmode: use SYM_DATA_* instead of GLOBAL

GLOBAL had several meanings and is going away. In this patch, convert
all the data marked using GLOBAL to use SYM_DATA_START or SYM_DATA
instead.

Notes:
* SYM_DATA_END_LABEL is used to generate tr_gdt_end too.
* wakeup_idt is now marked LOCAL, as it is only used locally.
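
A minimal sketch of the conversion for data followed by a separate end
label (this is essentially the rm/stack.S hunk below):

-GLOBAL(rm_stack)
+SYM_DATA_START(rm_stack)
 	.space 2048
-GLOBAL(rm_stack_end)
+SYM_DATA_END_LABEL(rm_stack, SYM_L_GLOBAL, rm_stack_end)

rm_stack stays a global object and rm_stack_end is now emitted by the
END_LABEL macro at its end.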

Signed-off-by: Jiri Slaby <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: [email protected]
---
arch/x86/realmode/rm/header.S | 8 +++-----
arch/x86/realmode/rm/reboot.S | 8 ++++----
arch/x86/realmode/rm/stack.S | 14 ++++++--------
arch/x86/realmode/rm/trampoline_32.S | 10 +++++-----
arch/x86/realmode/rm/trampoline_64.S | 19 +++++++++----------
arch/x86/realmode/rm/trampoline_common.S | 4 ++--
arch/x86/realmode/rm/wakeup_asm.S | 12 ++++++------
arch/x86/realmode/rmpiggy.S | 10 ++++------
8 files changed, 39 insertions(+), 46 deletions(-)

diff --git a/arch/x86/realmode/rm/header.S b/arch/x86/realmode/rm/header.S
index 30b0d30d861a..5ee0d96731a3 100644
--- a/arch/x86/realmode/rm/header.S
+++ b/arch/x86/realmode/rm/header.S
@@ -14,7 +14,7 @@
.section ".header", "a"

.balign 16
-GLOBAL(real_mode_header)
+SYM_DATA_START(real_mode_header)
.long pa_text_start
.long pa_ro_end
/* SMP trampoline */
@@ -34,11 +34,9 @@ GLOBAL(real_mode_header)
#ifdef CONFIG_X86_64
.long __KERNEL32_CS
#endif
-END(real_mode_header)
+SYM_DATA_END(real_mode_header)

/* End signature, used to verify integrity */
.section ".signature","a"
.balign 4
-GLOBAL(end_signature)
- .long REALMODE_END_SIGNATURE
-END(end_signature)
+SYM_DATA(end_signature, .long REALMODE_END_SIGNATURE)
diff --git a/arch/x86/realmode/rm/reboot.S b/arch/x86/realmode/rm/reboot.S
index f91425a01f8f..424826afb501 100644
--- a/arch/x86/realmode/rm/reboot.S
+++ b/arch/x86/realmode/rm/reboot.S
@@ -127,13 +127,13 @@ bios:
.section ".rodata", "a"

.balign 16
-GLOBAL(machine_real_restart_idt)
+SYM_DATA_START(machine_real_restart_idt)
.word 0xffff /* Length - real mode default value */
.long 0 /* Base - real mode default value */
-END(machine_real_restart_idt)
+SYM_DATA_END(machine_real_restart_idt)

.balign 16
-GLOBAL(machine_real_restart_gdt)
+SYM_DATA_START(machine_real_restart_gdt)
/* Self-pointer */
.word 0xffff /* Length - real mode default value */
.long pa_machine_real_restart_gdt
@@ -153,4 +153,4 @@ GLOBAL(machine_real_restart_gdt)
* semantics we don't have to reload the segments once CR0.PE = 0.
*/
.quad GDT_ENTRY(0x0093, 0x100, 0xffff)
-END(machine_real_restart_gdt)
+SYM_DATA_END(machine_real_restart_gdt)
diff --git a/arch/x86/realmode/rm/stack.S b/arch/x86/realmode/rm/stack.S
index 8d4cb64799ea..0fca64061ad2 100644
--- a/arch/x86/realmode/rm/stack.S
+++ b/arch/x86/realmode/rm/stack.S
@@ -6,15 +6,13 @@
#include <linux/linkage.h>

.data
-GLOBAL(HEAP)
- .long rm_heap
-GLOBAL(heap_end)
- .long rm_stack
+SYM_DATA(HEAP, .long rm_heap)
+SYM_DATA(heap_end, .long rm_stack)

.bss
.balign 16
-GLOBAL(rm_heap)
- .space 2048
-GLOBAL(rm_stack)
+SYM_DATA(rm_heap, .space 2048)
+
+SYM_DATA_START(rm_stack)
.space 2048
-GLOBAL(rm_stack_end)
+SYM_DATA_END_LABEL(rm_stack, SYM_L_GLOBAL, rm_stack_end)
diff --git a/arch/x86/realmode/rm/trampoline_32.S b/arch/x86/realmode/rm/trampoline_32.S
index 2dd866c9e21e..e96efcd60bf7 100644
--- a/arch/x86/realmode/rm/trampoline_32.S
+++ b/arch/x86/realmode/rm/trampoline_32.S
@@ -65,10 +65,10 @@ ENTRY(startup_32) # note: also used from wakeup_asm.S

.bss
.balign 8
-GLOBAL(trampoline_header)
- tr_start: .space 4
- tr_gdt_pad: .space 2
- tr_gdt: .space 6
-END(trampoline_header)
+SYM_DATA_START(trampoline_header)
+ SYM_DATA_LOCAL(tr_start, .space 4)
+ SYM_DATA_LOCAL(tr_gdt_pad, .space 2)
+ SYM_DATA_LOCAL(tr_gdt, .space 6)
+SYM_DATA_END(trampoline_header)

#include "trampoline_common.S"
diff --git a/arch/x86/realmode/rm/trampoline_64.S b/arch/x86/realmode/rm/trampoline_64.S
index 24bb7598774e..9e5f9ade43c8 100644
--- a/arch/x86/realmode/rm/trampoline_64.S
+++ b/arch/x86/realmode/rm/trampoline_64.S
@@ -152,26 +152,25 @@ ENTRY(startup_64)
# Duplicate the global descriptor table
# so the kernel can live anywhere
.balign 16
- .globl tr_gdt
-tr_gdt:
+SYM_DATA_START(tr_gdt)
.short tr_gdt_end - tr_gdt - 1 # gdt limit
.long pa_tr_gdt
.short 0
.quad 0x00cf9b000000ffff # __KERNEL32_CS
.quad 0x00af9b000000ffff # __KERNEL_CS
.quad 0x00cf93000000ffff # __KERNEL_DS
-tr_gdt_end:
+SYM_DATA_END_LABEL(tr_gdt, SYM_L_LOCAL, tr_gdt_end)

.bss
.balign PAGE_SIZE
-GLOBAL(trampoline_pgd) .space PAGE_SIZE
+SYM_DATA(trampoline_pgd, .space PAGE_SIZE)

.balign 8
-GLOBAL(trampoline_header)
- tr_start: .space 8
- GLOBAL(tr_efer) .space 8
- GLOBAL(tr_cr4) .space 4
- GLOBAL(tr_flags) .space 4
-END(trampoline_header)
+SYM_DATA_START(trampoline_header)
+ SYM_DATA_LOCAL(tr_start, .space 8)
+ SYM_DATA(tr_efer, .space 8)
+ SYM_DATA(tr_cr4, .space 4)
+ SYM_DATA(tr_flags, .space 4)
+SYM_DATA_END(trampoline_header)

#include "trampoline_common.S"
diff --git a/arch/x86/realmode/rm/trampoline_common.S b/arch/x86/realmode/rm/trampoline_common.S
index 7c706772ab59..fc000089e2da 100644
--- a/arch/x86/realmode/rm/trampoline_common.S
+++ b/arch/x86/realmode/rm/trampoline_common.S
@@ -1,8 +1,8 @@
/* SPDX-License-Identifier: GPL-2.0 */
.section ".rodata","a"
.balign 16
-tr_idt: .fill 1, 6, 0
+SYM_DATA_LOCAL(tr_idt, .fill 1, 6, 0)

.bss
.balign 4
-GLOBAL(trampoline_status) .space 4
+SYM_DATA(trampoline_status, .space 4)
diff --git a/arch/x86/realmode/rm/wakeup_asm.S b/arch/x86/realmode/rm/wakeup_asm.S
index 05ac9c17c811..0af6b30d3c68 100644
--- a/arch/x86/realmode/rm/wakeup_asm.S
+++ b/arch/x86/realmode/rm/wakeup_asm.S
@@ -17,7 +17,7 @@
.section ".data", "aw"

.balign 16
-GLOBAL(wakeup_header)
+SYM_DATA_START(wakeup_header)
video_mode: .short 0 /* Video mode number */
pmode_entry: .long 0
pmode_cs: .short __KERNEL_CS
@@ -31,7 +31,7 @@ GLOBAL(wakeup_header)
realmode_flags: .long 0
real_magic: .long 0
signature: .long WAKEUP_HEADER_SIGNATURE
-END(wakeup_header)
+SYM_DATA_END(wakeup_header)

.text
.code16
@@ -152,7 +152,7 @@ bogus_real_magic:
*/

.balign 16
-GLOBAL(wakeup_gdt)
+SYM_DATA_START(wakeup_gdt)
.word 3*8-1 /* Self-descriptor */
.long pa_wakeup_gdt
.word 0
@@ -164,15 +164,15 @@ GLOBAL(wakeup_gdt)
.word 0xffff /* 16-bit data segment @ real_mode_base */
.long 0x93000000 + pa_real_mode_base
.word 0x008f /* big real mode */
-END(wakeup_gdt)
+SYM_DATA_END(wakeup_gdt)

.section ".rodata","a"
.balign 8

/* This is the standard real-mode IDT */
.balign 16
-GLOBAL(wakeup_idt)
+SYM_DATA_START_LOCAL(wakeup_idt)
.word 0xffff /* limit */
.long 0 /* address */
.word 0
-END(wakeup_idt)
+SYM_DATA_END(wakeup_idt)
diff --git a/arch/x86/realmode/rmpiggy.S b/arch/x86/realmode/rmpiggy.S
index c078dba40cef..c8fef76743f6 100644
--- a/arch/x86/realmode/rmpiggy.S
+++ b/arch/x86/realmode/rmpiggy.S
@@ -10,12 +10,10 @@

.balign PAGE_SIZE

-GLOBAL(real_mode_blob)
+SYM_DATA_START(real_mode_blob)
.incbin "arch/x86/realmode/rm/realmode.bin"
-END(real_mode_blob)
+SYM_DATA_END_LABEL(real_mode_blob, SYM_L_GLOBAL, real_mode_blob_end)

-GLOBAL(real_mode_blob_end);
-
-GLOBAL(real_mode_relocs)
+SYM_DATA_START(real_mode_relocs)
.incbin "arch/x86/realmode/rm/realmode.relocs"
-END(real_mode_relocs)
+SYM_DATA_END(real_mode_relocs)
--
2.16.3


2018-05-18 09:22:49

by Jiri Slaby

[permalink] [raw]
Subject: [PATCH v6 21/28] x86/asm/ftrace: mark function_hook as function

Relabel function_hook so that it is really marked as a function. It is
called from C and has the same expectations regarding the stack etc.

Signed-off-by: Jiri Slaby <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: [email protected]
---
arch/x86/kernel/ftrace_32.S | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kernel/ftrace_32.S b/arch/x86/kernel/ftrace_32.S
index 0206fc7e86b0..b855dc10daeb 100644
--- a/arch/x86/kernel/ftrace_32.S
+++ b/arch/x86/kernel/ftrace_32.S
@@ -31,9 +31,9 @@ EXPORT_SYMBOL(mcount)
# define MCOUNT_FRAME 0 /* using frame = false */
#endif

-ENTRY(function_hook)
+SYM_FUNC_START(function_hook)
ret
-END(function_hook)
+SYM_FUNC_END(function_hook)

ENTRY(ftrace_caller)

--
2.16.3


2018-05-18 09:22:55

by Jiri Slaby

[permalink] [raw]
Subject: [PATCH v6 20/28] x86/asm: make some functions local

There are a couple of assembly functions which are invoked only locally
in the file where they are defined. In C, we mark them "static". In
assembly, annotate them using SYM_{FUNC,CODE}_START_LOCAL (and switch
their ENDPROC to SYM_{FUNC,CODE}_END too). Whether we use FUNC or CODE
depends on whether ENDPROC or END was used for a particular function
before.
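
For illustration, the resulting pattern for such a file-local, C-like
function looks roughly like this (hypothetical symbol name, not taken
from this series):

SYM_FUNC_START_LOCAL(helper_only_used_here)
	/* normal C calling convention, but no global visibility */
	movq	%rdi, %rax
	ret
SYM_FUNC_END(helper_only_used_here)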

Signed-off-by: Jiri Slaby <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: [email protected]
Cc: Matt Fleming <[email protected]>
Cc: Ard Biesheuvel <[email protected]>
Cc: [email protected]
Cc: [email protected]
---
arch/x86/boot/compressed/efi_thunk_64.S | 8 ++++----
arch/x86/entry/entry_64.S | 21 +++++++++++----------
arch/x86/lib/copy_page_64.S | 4 ++--
arch/x86/lib/memcpy_64.S | 12 ++++++------
arch/x86/lib/memset_64.S | 8 ++++----
arch/x86/platform/efi/efi_thunk_64.S | 12 ++++++------
arch/x86/xen/xen-pvh.S | 4 ++--
7 files changed, 35 insertions(+), 34 deletions(-)

diff --git a/arch/x86/boot/compressed/efi_thunk_64.S b/arch/x86/boot/compressed/efi_thunk_64.S
index d66000d23921..31312070db22 100644
--- a/arch/x86/boot/compressed/efi_thunk_64.S
+++ b/arch/x86/boot/compressed/efi_thunk_64.S
@@ -99,12 +99,12 @@ ENTRY(efi64_thunk)
ret
ENDPROC(efi64_thunk)

-ENTRY(efi_exit32)
+SYM_FUNC_START_LOCAL(efi_exit32)
movq func_rt_ptr(%rip), %rax
push %rax
mov %rdi, %rax
ret
-ENDPROC(efi_exit32)
+SYM_FUNC_END(efi_exit32)

.code32
/*
@@ -112,7 +112,7 @@ ENDPROC(efi_exit32)
*
* The stack should represent the 32-bit calling convention.
*/
-ENTRY(efi_enter32)
+SYM_FUNC_START_LOCAL(efi_enter32)
movl $__KERNEL_DS, %eax
movl %eax, %ds
movl %eax, %es
@@ -172,7 +172,7 @@ ENTRY(efi_enter32)
btsl $X86_CR0_PG_BIT, %eax
movl %eax, %cr0
lret
-ENDPROC(efi_enter32)
+SYM_FUNC_END(efi_enter32)

.data
.balign 8
diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
index d55228508e80..d2e2de100b16 100644
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -1068,7 +1068,8 @@ idtentry hypervisor_callback xen_do_hypervisor_callback has_error_code=0
* existing activation in its critical region -- if so, we pop the current
* activation and restart the handler using the previous one.
*/
-ENTRY(xen_do_hypervisor_callback) /* do_hypervisor_callback(struct *pt_regs) */
+/* do_hypervisor_callback(struct *pt_regs) */
+SYM_CODE_START_LOCAL(xen_do_hypervisor_callback)

/*
* Since we don't modify %rdi, evtchn_do_upall(struct *pt_regs) will
@@ -1086,7 +1087,7 @@ ENTRY(xen_do_hypervisor_callback) /* do_hypervisor_callback(struct *pt_regs) */
call xen_maybe_preempt_hcall
#endif
jmp error_exit
-END(xen_do_hypervisor_callback)
+SYM_CODE_END(xen_do_hypervisor_callback)

/*
* Hypervisor uses this for application faults while it executes.
@@ -1175,7 +1176,7 @@ idtentry machine_check do_mce has_error_code=0 paranoid=1
* Use slow, but surefire "are we in kernel?" check.
* Return: ebx=0: need swapgs on exit, ebx=1: otherwise
*/
-ENTRY(paranoid_entry)
+SYM_CODE_START_LOCAL(paranoid_entry)
UNWIND_HINT_FUNC
cld
PUSH_AND_CLEAR_REGS save_ret=1
@@ -1192,7 +1193,7 @@ ENTRY(paranoid_entry)
SAVE_AND_SWITCH_TO_KERNEL_CR3 scratch_reg=%rax save_reg=%r14

ret
-END(paranoid_entry)
+SYM_CODE_END(paranoid_entry)

/*
* "Paranoid" exit path from exception stack. This is invoked
@@ -1206,7 +1207,7 @@ END(paranoid_entry)
*
* On entry, ebx is "no swapgs" flag (1: don't need swapgs, 0: need it)
*/
-ENTRY(paranoid_exit)
+SYM_CODE_START_LOCAL(paranoid_exit)
UNWIND_HINT_REGS
DISABLE_INTERRUPTS(CLBR_ANY)
TRACE_IRQS_OFF_DEBUG
@@ -1221,13 +1222,13 @@ ENTRY(paranoid_exit)
RESTORE_CR3 scratch_reg=%rbx save_reg=%r14
.Lparanoid_exit_restore:
jmp restore_regs_and_return_to_kernel
-END(paranoid_exit)
+SYM_CODE_END(paranoid_exit)

/*
* Save all registers in pt_regs, and switch GS if needed.
* Return: EBX=0: came from user mode; EBX=1: otherwise
*/
-ENTRY(error_entry)
+SYM_CODE_START_LOCAL(error_entry)
UNWIND_HINT_FUNC
cld
PUSH_AND_CLEAR_REGS save_ret=1
@@ -1314,7 +1315,7 @@ ENTRY(error_entry)
mov %rax, %rsp
decl %ebx
jmp .Lerror_entry_from_usermode_after_swapgs
-END(error_entry)
+SYM_CODE_END(error_entry)


/*
@@ -1322,14 +1323,14 @@ END(error_entry)
* 1: already in kernel mode, don't need SWAPGS
* 0: user gsbase is loaded, we need SWAPGS and standard preparation for return to usermode
*/
-ENTRY(error_exit)
+SYM_CODE_START_LOCAL(error_exit)
UNWIND_HINT_REGS
DISABLE_INTERRUPTS(CLBR_ANY)
TRACE_IRQS_OFF
testl %ebx, %ebx
jnz retint_kernel
jmp retint_user
-END(error_exit)
+SYM_CODE_END(error_exit)

/*
* Runs on exception stack. Xen PV does not go through this path at all,
diff --git a/arch/x86/lib/copy_page_64.S b/arch/x86/lib/copy_page_64.S
index fd2d09afa097..f505870bd93b 100644
--- a/arch/x86/lib/copy_page_64.S
+++ b/arch/x86/lib/copy_page_64.S
@@ -21,7 +21,7 @@ ENTRY(copy_page)
ENDPROC(copy_page)
EXPORT_SYMBOL(copy_page)

-ENTRY(copy_page_regs)
+SYM_FUNC_START_LOCAL(copy_page_regs)
subq $2*8, %rsp
movq %rbx, (%rsp)
movq %r12, 1*8(%rsp)
@@ -86,4 +86,4 @@ ENTRY(copy_page_regs)
movq 1*8(%rsp), %r12
addq $2*8, %rsp
ret
-ENDPROC(copy_page_regs)
+SYM_FUNC_END(copy_page_regs)
diff --git a/arch/x86/lib/memcpy_64.S b/arch/x86/lib/memcpy_64.S
index 4911b1c61aa8..728703c47d58 100644
--- a/arch/x86/lib/memcpy_64.S
+++ b/arch/x86/lib/memcpy_64.S
@@ -27,7 +27,7 @@
* rax original destination
*/
SYM_FUNC_START_ALIAS(__memcpy)
-ENTRY(memcpy)
+SYM_FUNC_START_LOCAL(memcpy)
ALTERNATIVE_2 "jmp memcpy_orig", "", X86_FEATURE_REP_GOOD, \
"jmp memcpy_erms", X86_FEATURE_ERMS

@@ -39,7 +39,7 @@ ENTRY(memcpy)
movl %edx, %ecx
rep movsb
ret
-ENDPROC(memcpy)
+SYM_FUNC_END(memcpy)
SYM_FUNC_END_ALIAS(__memcpy)
EXPORT_SYMBOL(memcpy)
EXPORT_SYMBOL(__memcpy)
@@ -48,14 +48,14 @@ EXPORT_SYMBOL(__memcpy)
* memcpy_erms() - enhanced fast string memcpy. This is faster and
* simpler than memcpy. Use memcpy_erms when possible.
*/
-ENTRY(memcpy_erms)
+SYM_FUNC_START_LOCAL(memcpy_erms)
movq %rdi, %rax
movq %rdx, %rcx
rep movsb
ret
-ENDPROC(memcpy_erms)
+SYM_FUNC_END(memcpy_erms)

-ENTRY(memcpy_orig)
+SYM_FUNC_START_LOCAL(memcpy_orig)
movq %rdi, %rax

cmpq $0x20, %rdx
@@ -180,7 +180,7 @@ ENTRY(memcpy_orig)

.Lend:
retq
-ENDPROC(memcpy_orig)
+SYM_FUNC_END(memcpy_orig)

#ifndef CONFIG_UML
/*
diff --git a/arch/x86/lib/memset_64.S b/arch/x86/lib/memset_64.S
index 927ac44d34aa..564abf9ecedb 100644
--- a/arch/x86/lib/memset_64.S
+++ b/arch/x86/lib/memset_64.S
@@ -59,16 +59,16 @@ EXPORT_SYMBOL(__memset)
*
* rax original destination
*/
-ENTRY(memset_erms)
+SYM_FUNC_START_LOCAL(memset_erms)
movq %rdi,%r9
movb %sil,%al
movq %rdx,%rcx
rep stosb
movq %r9,%rax
ret
-ENDPROC(memset_erms)
+SYM_FUNC_END(memset_erms)

-ENTRY(memset_orig)
+SYM_FUNC_START_LOCAL(memset_orig)
movq %rdi,%r10

/* expand byte value */
@@ -139,4 +139,4 @@ ENTRY(memset_orig)
subq %r8,%rdx
jmp .Lafter_bad_alignment
.Lfinal:
-ENDPROC(memset_orig)
+SYM_FUNC_END(memset_orig)
diff --git a/arch/x86/platform/efi/efi_thunk_64.S b/arch/x86/platform/efi/efi_thunk_64.S
index 46c58b08739c..d677a7eb2d0a 100644
--- a/arch/x86/platform/efi/efi_thunk_64.S
+++ b/arch/x86/platform/efi/efi_thunk_64.S
@@ -67,7 +67,7 @@ ENDPROC(efi64_thunk)
*
* This function must be invoked with a 1:1 mapped stack.
*/
-ENTRY(__efi64_thunk)
+SYM_FUNC_START_LOCAL(__efi64_thunk)
movl %ds, %eax
push %rax
movl %es, %eax
@@ -114,14 +114,14 @@ ENTRY(__efi64_thunk)
or %rcx, %rax
1:
ret
-ENDPROC(__efi64_thunk)
+SYM_FUNC_END(__efi64_thunk)

-ENTRY(efi_exit32)
+SYM_FUNC_START_LOCAL(efi_exit32)
movq func_rt_ptr(%rip), %rax
push %rax
mov %rdi, %rax
ret
-ENDPROC(efi_exit32)
+SYM_FUNC_END(efi_exit32)

.code32
/*
@@ -129,7 +129,7 @@ ENDPROC(efi_exit32)
*
* The stack should represent the 32-bit calling convention.
*/
-ENTRY(efi_enter32)
+SYM_FUNC_START_LOCAL(efi_enter32)
movl $__KERNEL_DS, %eax
movl %eax, %ds
movl %eax, %es
@@ -145,7 +145,7 @@ ENTRY(efi_enter32)
pushl %eax

lret
-ENDPROC(efi_enter32)
+SYM_FUNC_END(efi_enter32)

.data
.balign 8
diff --git a/arch/x86/xen/xen-pvh.S b/arch/x86/xen/xen-pvh.S
index 52b28793a625..a20a55cc5135 100644
--- a/arch/x86/xen/xen-pvh.S
+++ b/arch/x86/xen/xen-pvh.S
@@ -54,7 +54,7 @@
* charge of setting up it's own stack, GDT and IDT.
*/

-ENTRY(pvh_start_xen)
+SYM_CODE_START_LOCAL(pvh_start_xen)
cld

lgdt (_pa(gdt))
@@ -133,7 +133,7 @@ ENTRY(pvh_start_xen)

ljmp $__BOOT_CS, $_pa(startup_32)
#endif
-END(pvh_start_xen)
+SYM_CODE_END(pvh_start_xen)

.section ".init.data","aw"
.balign 8
--
2.16.3


2018-05-18 09:23:42

by Jiri Slaby

[permalink] [raw]
Subject: [PATCH v6 19/28] x86/asm: kill the last GLOBAL user and remove the macro

Convert the remaining 32-bit users and finally remove the GLOBAL macro.
In particular, this means using SYM_ENTRY for the single-stepping hack
region.
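
For reference, the removed macro was already defined in terms of the new
annotation (see the linkage.h hunk below), so the conversion is purely
mechanical:

	#define GLOBAL(name) SYM_ENTRY(name, SYM_L_GLOBAL, SYM_A_NONE)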

Signed-off-by: Jiri Slaby <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: [email protected]
---
arch/x86/entry/entry_32.S | 4 ++--
arch/x86/include/asm/linkage.h | 8 --------
2 files changed, 2 insertions(+), 10 deletions(-)

diff --git a/arch/x86/entry/entry_32.S b/arch/x86/entry/entry_32.S
index 4b80b2fb4de1..f701541ecf10 100644
--- a/arch/x86/entry/entry_32.S
+++ b/arch/x86/entry/entry_32.S
@@ -361,7 +361,7 @@ ENTRY(resume_kernel)
END(resume_kernel)
#endif

-GLOBAL(__begin_SYSENTER_singlestep_region)
+SYM_ENTRY(__begin_SYSENTER_singlestep_region, SYM_L_GLOBAL, SYM_A_NONE)
/*
* All code from here through __end_SYSENTER_singlestep_region is subject
* to being single-stepped if a user program sets TF and executes SYSENTER.
@@ -502,7 +502,7 @@ ENTRY(entry_SYSENTER_32)
pushl $X86_EFLAGS_FIXED
popfl
jmp .Lsysenter_flags_fixed
-GLOBAL(__end_SYSENTER_singlestep_region)
+SYM_ENTRY(__end_SYSENTER_singlestep_region, SYM_L_GLOBAL, SYM_A_NONE)
ENDPROC(entry_SYSENTER_32)

/*
diff --git a/arch/x86/include/asm/linkage.h b/arch/x86/include/asm/linkage.h
index e07188e8d763..365111789cc6 100644
--- a/arch/x86/include/asm/linkage.h
+++ b/arch/x86/include/asm/linkage.h
@@ -13,14 +13,6 @@

#ifdef __ASSEMBLY__

-/*
- * GLOBAL is DEPRECATED
- *
- * use SYM_DATA_START, SYM_FUNC_START, SYM_INNER_LABEL, SYM_CODE_START, or
- * similar
- */
-#define GLOBAL(name) SYM_ENTRY(name, SYM_L_GLOBAL, SYM_A_NONE)
-
#if defined(CONFIG_X86_64) || defined(CONFIG_X86_ALIGNMENT_16)
#define __ALIGN .p2align 4, 0x90
#define __ALIGN_STR __stringify(__ALIGN)
--
2.16.3


2018-05-18 09:25:15

by Jiri Slaby

[permalink] [raw]
Subject: [PATCH v6 12/28] x86/boot/compressed: annotate data appropriately

Use the new SYM_DATA, SYM_DATA_START, and SYM_DATA_END* macros for data.

Now, the data in the object file look sane:
Value Size Type Bind Vis Ndx Name
0000 10 OBJECT GLOBAL DEFAULT 3 efi32_boot_gdt
000a 10 OBJECT LOCAL DEFAULT 3 save_gdt
0014 8 OBJECT LOCAL DEFAULT 3 func_rt_ptr
001c 48 OBJECT GLOBAL DEFAULT 3 efi_gdt64
004c 0 OBJECT LOCAL DEFAULT 3 efi_gdt64_end

0000 48 OBJECT LOCAL DEFAULT 3 gdt
0030 0 OBJECT LOCAL DEFAULT 3 gdt_end
0030 8 OBJECT LOCAL DEFAULT 3 efi_config
0038 49 OBJECT GLOBAL DEFAULT 3 efi32_config
0069 49 OBJECT GLOBAL DEFAULT 3 efi64_config

All have correct size and type now.
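
For illustration, the two flavours used below, sketched with hypothetical
symbol names:

	/* one-line object */
SYM_DATA_LOCAL(some_flag, .long 0)

	/* multi-line object with explicit start/end */
SYM_DATA_START(some_table)
	.word	0
	.quad	0
SYM_DATA_END(some_table)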

Signed-off-by: Jiri Slaby <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: [email protected]
---
arch/x86/boot/compressed/efi_thunk_64.S | 21 ++++++++++++---------
arch/x86/boot/compressed/head_64.S | 29 ++++++++++++++---------------
arch/x86/boot/compressed/mem_encrypt.S | 6 ++----
3 files changed, 28 insertions(+), 28 deletions(-)

diff --git a/arch/x86/boot/compressed/efi_thunk_64.S b/arch/x86/boot/compressed/efi_thunk_64.S
index bff9ab7c6317..d66000d23921 100644
--- a/arch/x86/boot/compressed/efi_thunk_64.S
+++ b/arch/x86/boot/compressed/efi_thunk_64.S
@@ -176,16 +176,19 @@ ENDPROC(efi_enter32)

.data
.balign 8
- .global efi32_boot_gdt
-efi32_boot_gdt: .word 0
- .quad 0
+SYM_DATA_START(efi32_boot_gdt)
+ .word 0
+ .quad 0
+SYM_DATA_END(efi32_boot_gdt)
+
+SYM_DATA_START_LOCAL(save_gdt)
+ .word 0
+ .quad 0
+SYM_DATA_END(save_gdt)

-save_gdt: .word 0
- .quad 0
-func_rt_ptr: .quad 0
+SYM_DATA_LOCAL(func_rt_ptr, .quad 0)

- .global efi_gdt64
-efi_gdt64:
+SYM_DATA_START(efi_gdt64)
.word efi_gdt64_end - efi_gdt64
.long 0 /* Filled out by user */
.word 0
@@ -194,4 +197,4 @@ efi_gdt64:
.quad 0x00cf92000000ffff /* __KERNEL_DS */
.quad 0x0080890000000000 /* TS descriptor */
.quad 0x0000000000000000 /* TS continued */
-efi_gdt64_end:
+SYM_DATA_END_LABEL(efi_gdt64, SYM_L_LOCAL, efi_gdt64_end)
diff --git a/arch/x86/boot/compressed/head_64.S b/arch/x86/boot/compressed/head_64.S
index 614dc4868915..a1a92f6fc8e4 100644
--- a/arch/x86/boot/compressed/head_64.S
+++ b/arch/x86/boot/compressed/head_64.S
@@ -599,12 +599,13 @@ SYM_FUNC_END(no_longmode)
#include "../../kernel/verify_cpu.S"

.data
-gdt64:
+SYM_DATA_START_LOCAL(gdt64)
.word gdt_end - gdt
.long 0
.word 0
.quad 0
-gdt:
+SYM_DATA_END(gdt64)
+SYM_DATA_START_LOCAL(gdt)
.word gdt_end - gdt
.long gdt
.word 0
@@ -613,25 +614,24 @@ gdt:
.quad 0x00cf92000000ffff /* __KERNEL_DS */
.quad 0x0080890000000000 /* TS descriptor */
.quad 0x0000000000000000 /* TS continued */
-gdt_end:
+SYM_DATA_END_LABEL(gdt, SYM_L_LOCAL, gdt_end)

#ifdef CONFIG_EFI_STUB
-efi_config:
- .quad 0
+SYM_DATA_LOCAL(efi_config, .quad 0)

#ifdef CONFIG_EFI_MIXED
- .global efi32_config
-efi32_config:
+SYM_DATA_START(efi32_config)
.fill 5,8,0
.quad efi64_thunk
.byte 0
+SYM_DATA_END(efi32_config)
#endif

- .global efi64_config
-efi64_config:
+SYM_DATA_START(efi64_config)
.fill 5,8,0
.quad efi_call
.byte 1
+SYM_DATA_END(efi64_config)
#endif /* CONFIG_EFI_STUB */

/*
@@ -639,16 +639,15 @@ efi64_config:
*/
.bss
.balign 4
-boot_heap:
- .fill BOOT_HEAP_SIZE, 1, 0
-boot_stack:
+SYM_DATA_LOCAL(boot_heap, .fill BOOT_HEAP_SIZE, 1, 0)
+
+SYM_DATA_START_LOCAL(boot_stack)
.fill BOOT_STACK_SIZE, 1, 0
-boot_stack_end:
+SYM_DATA_END_LABEL(boot_stack, SYM_L_LOCAL, boot_stack_end)

/*
* Space for page tables (not in .bss so not zeroed)
*/
.section ".pgtable","a",@nobits
.balign 4096
-pgtable:
- .fill BOOT_PGT_SIZE, 1, 0
+SYM_DATA_LOCAL(pgtable, .fill BOOT_PGT_SIZE, 1, 0)
diff --git a/arch/x86/boot/compressed/mem_encrypt.S b/arch/x86/boot/compressed/mem_encrypt.S
index eaa843a52907..fabed28d2edd 100644
--- a/arch/x86/boot/compressed/mem_encrypt.S
+++ b/arch/x86/boot/compressed/mem_encrypt.S
@@ -113,11 +113,9 @@ ENTRY(set_sev_encryption_mask)
ENDPROC(set_sev_encryption_mask)

.data
-enc_bit:
- .int 0xffffffff
+SYM_DATA_LOCAL(enc_bit, .int 0xffffffff)

#ifdef CONFIG_AMD_MEM_ENCRYPT
.balign 8
-GLOBAL(sme_me_mask)
- .quad 0
+SYM_DATA(sme_me_mask, .quad 0)
#endif
--
2.16.3


2018-05-18 09:25:23

by Jiri Slaby

[permalink] [raw]
Subject: [PATCH v6 17/28] x86/asm: use SYM_INNER_LABEL instead of GLOBAL

GLOBAL had several meanings and is going away. In this patch, convert
all the inner function labels marked with GLOBAL to use SYM_INNER_LABEL
instead.

Note that retint_user need not be global, perhaps since commit
2ec67971facc ("x86/entry/64/compat: Remove most of the fast system call
machinery"), where entry_64_compat's caller was removed. So mark the
label as LOCAL.
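
In other words, a label in the middle of a function now spells out its
linkage explicitly; a condensed sketch of the entry_SYSCALL_64 case from
the diff below:

ENTRY(entry_SYSCALL_64)
	...
SYM_INNER_LABEL(entry_SYSCALL_64_after_hwframe, SYM_L_GLOBAL)
	...
END(entry_SYSCALL_64)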

Signed-off-by: Jiri Slaby <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: [email protected]
Cc: Andy Lutomirski <[email protected]>
---
arch/x86/entry/entry_64.S | 8 ++++----
arch/x86/entry/entry_64_compat.S | 4 ++--
arch/x86/entry/vdso/vdso32/system_call.S | 2 +-
arch/x86/kernel/ftrace_32.S | 2 +-
arch/x86/kernel/ftrace_64.S | 16 ++++++++--------
arch/x86/realmode/rm/reboot.S | 2 +-
6 files changed, 17 insertions(+), 17 deletions(-)

diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
index e0798c044055..d55228508e80 100644
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -225,7 +225,7 @@ ENTRY(entry_SYSCALL_64)
pushq %r11 /* pt_regs->flags */
pushq $__USER_CS /* pt_regs->cs */
pushq %rcx /* pt_regs->ip */
-GLOBAL(entry_SYSCALL_64_after_hwframe)
+SYM_INNER_LABEL(entry_SYSCALL_64_after_hwframe, SYM_L_GLOBAL)
pushq %rax /* pt_regs->orig_ax */

PUSH_AND_CLEAR_REGS rax=$-ENOSYS
@@ -653,12 +653,12 @@ ret_from_intr:
jz retint_kernel

/* Interrupt came from user space */
-GLOBAL(retint_user)
+SYM_INNER_LABEL(retint_user, SYM_L_LOCAL)
mov %rsp,%rdi
call prepare_exit_to_usermode
TRACE_IRQS_IRETQ

-GLOBAL(swapgs_restore_regs_and_return_to_usermode)
+SYM_INNER_LABEL(swapgs_restore_regs_and_return_to_usermode, SYM_L_GLOBAL)
#ifdef CONFIG_DEBUG_ENTRY
/* Assert that pt_regs indicates user mode. */
testb $3, CS(%rsp)
@@ -717,7 +717,7 @@ retint_kernel:
*/
TRACE_IRQS_IRETQ

-GLOBAL(restore_regs_and_return_to_kernel)
+SYM_INNER_LABEL(restore_regs_and_return_to_kernel, SYM_L_GLOBAL)
#ifdef CONFIG_DEBUG_ENTRY
/* Assert that pt_regs indicates kernel mode. */
testb $3, CS(%rsp)
diff --git a/arch/x86/entry/entry_64_compat.S b/arch/x86/entry/entry_64_compat.S
index 2d10f72697f3..d0880bef86c3 100644
--- a/arch/x86/entry/entry_64_compat.S
+++ b/arch/x86/entry/entry_64_compat.S
@@ -146,7 +146,7 @@ ENTRY(entry_SYSENTER_compat)
pushq $X86_EFLAGS_FIXED
popfq
jmp .Lsysenter_flags_fixed
-GLOBAL(__end_entry_SYSENTER_compat)
+SYM_INNER_LABEL(__end_entry_SYSENTER_compat, SYM_L_GLOBAL)
ENDPROC(entry_SYSENTER_compat)

/*
@@ -215,7 +215,7 @@ ENTRY(entry_SYSCALL_compat)
pushq %r11 /* pt_regs->flags */
pushq $__USER32_CS /* pt_regs->cs */
pushq %rcx /* pt_regs->ip */
-GLOBAL(entry_SYSCALL_compat_after_hwframe)
+SYM_INNER_LABEL(entry_SYSCALL_compat_after_hwframe, SYM_L_GLOBAL)
movl %eax, %eax /* discard orig_ax high bits */
pushq %rax /* pt_regs->orig_ax */
pushq %rdi /* pt_regs->di */
diff --git a/arch/x86/entry/vdso/vdso32/system_call.S b/arch/x86/entry/vdso/vdso32/system_call.S
index 263d7433dea8..de1fff7188aa 100644
--- a/arch/x86/entry/vdso/vdso32/system_call.S
+++ b/arch/x86/entry/vdso/vdso32/system_call.S
@@ -62,7 +62,7 @@ __kernel_vsyscall:

/* Enter using int $0x80 */
int $0x80
-GLOBAL(int80_landing_pad)
+SYM_INNER_LABEL(int80_landing_pad, SYM_L_GLOBAL)

/*
* Restore EDX and ECX in case they were clobbered. EBP is not
diff --git a/arch/x86/kernel/ftrace_32.S b/arch/x86/kernel/ftrace_32.S
index 4c8440de3355..0206fc7e86b0 100644
--- a/arch/x86/kernel/ftrace_32.S
+++ b/arch/x86/kernel/ftrace_32.S
@@ -141,7 +141,7 @@ ENTRY(ftrace_regs_caller)
movl function_trace_op, %ecx /* Save ftrace_pos in 3rd parameter */
pushl %esp /* Save pt_regs as 4th parameter */

-GLOBAL(ftrace_regs_call)
+SYM_INNER_LABEL(ftrace_regs_call, SYM_L_GLOBAL)
call ftrace_stub

addl $4, %esp /* Skip pt_regs */
diff --git a/arch/x86/kernel/ftrace_64.S b/arch/x86/kernel/ftrace_64.S
index 91b2cff4b79a..14df6cf07b7e 100644
--- a/arch/x86/kernel/ftrace_64.S
+++ b/arch/x86/kernel/ftrace_64.S
@@ -158,14 +158,14 @@ ENTRY(ftrace_caller)
/* save_mcount_regs fills in first two parameters */
save_mcount_regs

-GLOBAL(ftrace_caller_op_ptr)
+SYM_INNER_LABEL(ftrace_caller_op_ptr, SYM_L_GLOBAL)
/* Load the ftrace_ops into the 3rd parameter */
movq function_trace_op(%rip), %rdx

/* regs go into 4th parameter (but make it NULL) */
movq $0, %rcx

-GLOBAL(ftrace_call)
+SYM_INNER_LABEL(ftrace_call, SYM_L_GLOBAL)
call ftrace_stub

restore_mcount_regs
@@ -178,10 +178,10 @@ GLOBAL(ftrace_call)
* think twice before adding any new code or changing the
* layout here.
*/
-GLOBAL(ftrace_epilogue)
+SYM_INNER_LABEL(ftrace_epilogue, SYM_L_GLOBAL)

#ifdef CONFIG_FUNCTION_GRAPH_TRACER
-GLOBAL(ftrace_graph_call)
+SYM_INNER_LABEL(ftrace_graph_call, SYM_L_GLOBAL)
jmp ftrace_stub
#endif

@@ -198,7 +198,7 @@ ENTRY(ftrace_regs_caller)
save_mcount_regs 8
/* save_mcount_regs fills in first two parameters */

-GLOBAL(ftrace_regs_caller_op_ptr)
+SYM_INNER_LABEL(ftrace_regs_caller_op_ptr, SYM_L_GLOBAL)
/* Load the ftrace_ops into the 3rd parameter */
movq function_trace_op(%rip), %rdx

@@ -225,7 +225,7 @@ GLOBAL(ftrace_regs_caller_op_ptr)
/* regs go into 4th parameter */
leaq (%rsp), %rcx

-GLOBAL(ftrace_regs_call)
+SYM_INNER_LABEL(ftrace_regs_call, SYM_L_GLOBAL)
call ftrace_stub

/* Copy flags back to SS, to restore them */
@@ -255,7 +255,7 @@ GLOBAL(ftrace_regs_call)
* The trampoline will add the code to jump
* to the return.
*/
-GLOBAL(ftrace_regs_caller_end)
+SYM_INNER_LABEL(ftrace_regs_caller_end, SYM_L_GLOBAL)

jmp ftrace_epilogue

@@ -277,7 +277,7 @@ fgraph_trace:
jnz ftrace_graph_caller
#endif

-GLOBAL(ftrace_stub)
+SYM_INNER_LABEL(ftrace_stub, SYM_L_GLOBAL)
retq

trace:
diff --git a/arch/x86/realmode/rm/reboot.S b/arch/x86/realmode/rm/reboot.S
index cd2f97b9623b..f91425a01f8f 100644
--- a/arch/x86/realmode/rm/reboot.S
+++ b/arch/x86/realmode/rm/reboot.S
@@ -33,7 +33,7 @@ ENTRY(machine_real_restart_asm)
movl %eax, %cr0
ljmpl $__KERNEL32_CS, $pa_machine_real_restart_paging_off

-GLOBAL(machine_real_restart_paging_off)
+SYM_INNER_LABEL(machine_real_restart_paging_off, SYM_L_GLOBAL)
xorl %eax, %eax
xorl %edx, %edx
movl $MSR_EFER, %ecx
--
2.16.3


2018-05-18 09:25:26

by Jiri Slaby

[permalink] [raw]
Subject: [PATCH v6 04/28] x86/asm: annotate relocate_kernel

There are functions in relocate_kernel which are not annotated. This
makes automatic annotations rather hard. So annotate all the functions
now.

Note that these are not C-like functions, so we do not use FUNC, but
CODE markers. Also they are not aligned, so we use the NOALIGN versions:
- SYM_CODE_START_NOALIGN
- SYM_CODE_START_LOCAL_NOALIGN
- SYM_CODE_END

In return, we get:
0000 108 NOTYPE GLOBAL DEFAULT 1 relocate_kernel
006c 165 NOTYPE LOCAL DEFAULT 1 identity_mapped
0146 127 NOTYPE LOCAL DEFAULT 1 swap_pages
0111 53 NOTYPE LOCAL DEFAULT 1 virtual_mapped
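
A condensed sketch of the resulting layout (symbol names as in
relocate_kernel_*.S):

SYM_CODE_START_NOALIGN(relocate_kernel)
	...
SYM_CODE_END(relocate_kernel)

SYM_CODE_START_LOCAL_NOALIGN(identity_mapped)
	...
SYM_CODE_END(identity_mapped)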

Signed-off-by: Jiri Slaby <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: [email protected]
---
arch/x86/kernel/relocate_kernel_32.S | 13 ++++++++-----
arch/x86/kernel/relocate_kernel_64.S | 13 ++++++++-----
2 files changed, 16 insertions(+), 10 deletions(-)

diff --git a/arch/x86/kernel/relocate_kernel_32.S b/arch/x86/kernel/relocate_kernel_32.S
index 77630d57e7bf..74d7891fc026 100644
--- a/arch/x86/kernel/relocate_kernel_32.S
+++ b/arch/x86/kernel/relocate_kernel_32.S
@@ -37,8 +37,7 @@
#define CP_PA_BACKUP_PAGES_MAP DATA(0x1c)

.text
- .globl relocate_kernel
-relocate_kernel:
+SYM_CODE_START_NOALIGN(relocate_kernel)
/* Save the CPU context, used for jumping back */

pushl %ebx
@@ -95,8 +94,9 @@ relocate_kernel:
addl $(identity_mapped - relocate_kernel), %eax
pushl %eax
ret
+SYM_CODE_END(relocate_kernel)

-identity_mapped:
+SYM_CODE_START_LOCAL_NOALIGN(identity_mapped)
/* set return address to 0 if not preserving context */
pushl $0
/* store the start address on the stack */
@@ -193,8 +193,9 @@ identity_mapped:
addl $(virtual_mapped - relocate_kernel), %eax
pushl %eax
ret
+SYM_CODE_END(identity_mapped)

-virtual_mapped:
+SYM_CODE_START_LOCAL_NOALIGN(virtual_mapped)
movl CR4(%edi), %eax
movl %eax, %cr4
movl CR3(%edi), %eax
@@ -210,9 +211,10 @@ virtual_mapped:
popl %esi
popl %ebx
ret
+SYM_CODE_END(virtual_mapped)

/* Do the copies */
-swap_pages:
+SYM_CODE_START_LOCAL_NOALIGN(swap_pages)
movl 8(%esp), %edx
movl 4(%esp), %ecx
pushl %ebp
@@ -272,6 +274,7 @@ swap_pages:
popl %ebx
popl %ebp
ret
+SYM_CODE_END(swap_pages)

.globl kexec_control_code_size
.set kexec_control_code_size, . - relocate_kernel
diff --git a/arch/x86/kernel/relocate_kernel_64.S b/arch/x86/kernel/relocate_kernel_64.S
index 11eda21eb697..beb78767a5b3 100644
--- a/arch/x86/kernel/relocate_kernel_64.S
+++ b/arch/x86/kernel/relocate_kernel_64.S
@@ -40,8 +40,7 @@
.text
.align PAGE_SIZE
.code64
- .globl relocate_kernel
-relocate_kernel:
+SYM_CODE_START_NOALIGN(relocate_kernel)
/*
* %rdi indirection_page
* %rsi page_list
@@ -105,8 +104,9 @@ relocate_kernel:
addq $(identity_mapped - relocate_kernel), %r8
pushq %r8
ret
+SYM_CODE_END(relocate_kernel)

-identity_mapped:
+SYM_CODE_START_LOCAL_NOALIGN(identity_mapped)
/* set return address to 0 if not preserving context */
pushq $0
/* store the start address on the stack */
@@ -211,8 +211,9 @@ identity_mapped:
movq $virtual_mapped, %rax
pushq %rax
ret
+SYM_CODE_END(identity_mapped)

-virtual_mapped:
+SYM_CODE_START_LOCAL_NOALIGN(virtual_mapped)
movq RSP(%r8), %rsp
movq CR4(%r8), %rax
movq %rax, %cr4
@@ -230,9 +231,10 @@ virtual_mapped:
popq %rbp
popq %rbx
ret
+SYM_CODE_END(virtual_mapped)

/* Do the copies */
-swap_pages:
+SYM_CODE_START_LOCAL_NOALIGN(swap_pages)
movq %rdi, %rcx /* Put the page_list in %rcx */
xorl %edi, %edi
xorl %esi, %esi
@@ -285,6 +287,7 @@ swap_pages:
jmp 0b
3:
ret
+SYM_CODE_END(swap_pages)

.globl kexec_control_code_size
.set kexec_control_code_size, . - relocate_kernel
--
2.16.3


2018-05-18 09:25:33

by Jiri Slaby

[permalink] [raw]
Subject: [PATCH v6 11/28] x86/asm/head: annotate data appropriately

Use the new SYM_DATA, SYM_DATA_START, and SYM_DATA_END in both the 32-bit
and 64-bit heads. In the 64-bit version, also define
SYM_DATA_START_PAGE_ALIGNED locally, using the new SYM_START. It is used
in the code instead of NEXT_PAGE(), which was defined in this file and
has been using the obsolete GLOBAL() macro.

Now, the data in the 64-bit object file look sane:
Value Size Type Bind Vis Ndx Name
0000 4096 OBJECT GLOBAL DEFAULT 15 init_level4_pgt
1000 4096 OBJECT GLOBAL DEFAULT 15 level3_kernel_pgt
2000 2048 OBJECT GLOBAL DEFAULT 15 level2_kernel_pgt
3000 4096 OBJECT GLOBAL DEFAULT 15 level2_fixmap_pgt
4000 4096 OBJECT GLOBAL DEFAULT 15 level1_fixmap_pgt
5000 2 OBJECT GLOBAL DEFAULT 15 early_gdt_descr
5002 8 OBJECT LOCAL DEFAULT 15 early_gdt_descr_base
500a 8 OBJECT GLOBAL DEFAULT 15 phys_base
0000 8 OBJECT GLOBAL DEFAULT 17 initial_code
0008 8 OBJECT GLOBAL DEFAULT 17 initial_gs
0010 8 OBJECT GLOBAL DEFAULT 17 initial_stack
0000 4 OBJECT GLOBAL DEFAULT 19 early_recursion_flag
1000 4096 OBJECT GLOBAL DEFAULT 19 early_level4_pgt
2000 0x40000 OBJECT GLOBAL DEFAULT 19 early_dynamic_pgts
0000 4096 OBJECT GLOBAL DEFAULT 22 empty_zero_page

All have correct size and type.

Note that we can now see that it might be worth pushing
early_recursion_flag after early_dynamic_pgts -- we are wasting almost
4K of .init.data.
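
For reference, the locally defined helper and its use boil down to the
following (taken from the hunks below, nothing new):

#define SYM_DATA_START_PAGE_ALIGNED(name)			\
	SYM_START(name, SYM_L_GLOBAL, .balign PAGE_SIZE)

SYM_DATA_START_PAGE_ALIGNED(level1_fixmap_pgt)
	.fill	512,8,0
SYM_DATA_END(level1_fixmap_pgt)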

Signed-off-by: Jiri Slaby <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: [email protected]
---
arch/x86/kernel/head_32.S | 29 ++++++++++--------
arch/x86/kernel/head_64.S | 78 +++++++++++++++++++++++++----------------------
2 files changed, 58 insertions(+), 49 deletions(-)

diff --git a/arch/x86/kernel/head_32.S b/arch/x86/kernel/head_32.S
index 727632a20110..1a6a6b4e4b4c 100644
--- a/arch/x86/kernel/head_32.S
+++ b/arch/x86/kernel/head_32.S
@@ -502,8 +502,7 @@ ENDPROC(early_ignore_irq)

__INITDATA
.align 4
-GLOBAL(early_recursion_flag)
- .long 0
+SYM_DATA(early_recursion_flag, .long 0)

__REFDATA
.align 4
@@ -541,7 +540,7 @@ EXPORT_SYMBOL(empty_zero_page)
__PAGE_ALIGNED_DATA
/* Page-aligned for the benefit of paravirt? */
.align PAGE_SIZE
-ENTRY(initial_page_table)
+SYM_DATA_START(initial_page_table)
.long pa(initial_pg_pmd+PGD_IDENT_ATTR),0 /* low identity map */
# if KPMDS == 3
.long pa(initial_pg_pmd+PGD_IDENT_ATTR),0
@@ -559,17 +558,18 @@ ENTRY(initial_page_table)
# error "Kernel PMDs should be 1, 2 or 3"
# endif
.align PAGE_SIZE /* needs to be page-sized too */
+SYM_DATA_END(initial_page_table)
#endif

.data
.balign 4
-ENTRY(initial_stack)
- /*
- * The SIZEOF_PTREGS gap is a convention which helps the in-kernel
- * unwinder reliably detect the end of the stack.
- */
- .long init_thread_union + THREAD_SIZE - SIZEOF_PTREGS - \
- TOP_OF_KERNEL_STACK_PADDING;
+/*
+ * The SIZEOF_PTREGS gap is a convention which helps the in-kernel unwinder
+ * reliably detect the end of the stack.
+ */
+SYM_DATA(initial_stack,
+ .long init_thread_union + THREAD_SIZE -
+ SIZEOF_PTREGS - TOP_OF_KERNEL_STACK_PADDING)

__INITRODATA
int_msg:
@@ -590,22 +590,25 @@ int_msg:
ALIGN
# early boot GDT descriptor (must use 1:1 address mapping)
.word 0 # 32 bit align gdt_desc.address
-boot_gdt_descr:
+SYM_DATA_START(boot_gdt_descr)
.word __BOOT_DS+7
.long boot_gdt - __PAGE_OFFSET
+SYM_DATA_END(boot_gdt_descr)

# boot GDT descriptor (later on used by CPU#0):
.word 0 # 32 bit align gdt_desc.address
-ENTRY(early_gdt_descr)
+SYM_DATA_START(early_gdt_descr)
.word GDT_ENTRIES*8-1
.long gdt_page /* Overwritten for secondary CPUs */
+SYM_DATA_END(early_gdt_descr)

/*
* The boot_gdt must mirror the equivalent in setup.S and is
* used only for booting.
*/
.align L1_CACHE_BYTES
-ENTRY(boot_gdt)
+SYM_DATA_START(boot_gdt)
.fill GDT_ENTRY_BOOT_CS,8,0
.quad 0x00cf9a000000ffff /* kernel 4GB code at 0x00000000 */
.quad 0x00cf92000000ffff /* kernel 4GB data at 0x00000000 */
+SYM_DATA_END(boot_gdt)
diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S
index d3a0f5b1f1b6..80b620c824eb 100644
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -261,16 +261,14 @@ ENDPROC(start_cpu0)
/* Both SMP bootup and ACPI suspend change these variables */
__REFDATA
.balign 8
- GLOBAL(initial_code)
- .quad x86_64_start_kernel
- GLOBAL(initial_gs)
- .quad INIT_PER_CPU_VAR(irq_stack_union)
- GLOBAL(initial_stack)
- /*
- * The SIZEOF_PTREGS gap is a convention which helps the in-kernel
- * unwinder reliably detect the end of the stack.
- */
- .quad init_thread_union + THREAD_SIZE - SIZEOF_PTREGS
+SYM_DATA(initial_code, .quad x86_64_start_kernel)
+SYM_DATA(initial_gs, .quad INIT_PER_CPU_VAR(irq_stack_union))
+/*
+ * The SIZEOF_PTREGS gap is a convention which helps the in-kernel unwinder
+ * reliably detect the end of the stack.
+ */
+SYM_DATA(initial_stack,
+ .quad init_thread_union + THREAD_SIZE - SIZEOF_PTREGS)
__FINITDATA

__INIT
@@ -339,12 +337,10 @@ SYM_CODE_END(early_idt_handler_common)
__INITDATA

.balign 4
-GLOBAL(early_recursion_flag)
- .long 0
+SYM_DATA(early_recursion_flag, .long 0)

-#define NEXT_PAGE(name) \
- .balign PAGE_SIZE; \
-GLOBAL(name)
+#define SYM_DATA_START_PAGE_ALIGNED(name) \
+ SYM_START(name, SYM_L_GLOBAL, .balign PAGE_SIZE)

#ifdef CONFIG_PAGE_TABLE_ISOLATION
/*
@@ -359,11 +355,11 @@ GLOBAL(name)
*/
#define PTI_USER_PGD_FILL 512
/* This ensures they are 8k-aligned: */
-#define NEXT_PGD_PAGE(name) \
- .balign 2 * PAGE_SIZE; \
-GLOBAL(name)
+#define SYM_DATA_START_KAISER_ALIGNED(name) \
+ SYM_START(name, SYM_L_GLOBAL, .balign 2 * PAGE_SIZE)
#else
-#define NEXT_PGD_PAGE(name) NEXT_PAGE(name)
+#define SYM_DATA_START_KAISER_ALIGNED(name) \
+ SYM_DATA_START_PAGE_ALIGNED(name)
#define PTI_USER_PGD_FILL 0
#endif

@@ -376,17 +372,19 @@ GLOBAL(name)
.endr

__INITDATA
-NEXT_PGD_PAGE(early_top_pgt)
+SYM_DATA_START_KAISER_ALIGNED(early_top_pgt)
.fill 512,8,0
.fill PTI_USER_PGD_FILL,8,0
+SYM_DATA_END(early_top_pgt)

-NEXT_PAGE(early_dynamic_pgts)
+SYM_DATA_START_PAGE_ALIGNED(early_dynamic_pgts)
.fill 512*EARLY_DYNAMIC_PAGE_TABLES,8,0
+SYM_DATA_END(early_dynamic_pgts)

.data

#if defined(CONFIG_XEN_PV) || defined(CONFIG_XEN_PVH)
-NEXT_PGD_PAGE(init_top_pgt)
+SYM_DATA_START_KAISER_ALIGNED(init_top_pgt)
.quad level3_ident_pgt - __START_KERNEL_map + _KERNPG_TABLE_NOENC
.org init_top_pgt + L4_PAGE_OFFSET*8, 0
.quad level3_ident_pgt - __START_KERNEL_map + _KERNPG_TABLE_NOENC
@@ -394,11 +392,13 @@ NEXT_PGD_PAGE(init_top_pgt)
/* (2^48-(2*1024*1024*1024))/(2^39) = 511 */
.quad level3_kernel_pgt - __START_KERNEL_map + _PAGE_TABLE_NOENC
.fill PTI_USER_PGD_FILL,8,0
+SYM_DATA_END(init_top_pgt)

-NEXT_PAGE(level3_ident_pgt)
+SYM_DATA_START_PAGE_ALIGNED(level3_ident_pgt)
.quad level2_ident_pgt - __START_KERNEL_map + _KERNPG_TABLE_NOENC
.fill 511, 8, 0
-NEXT_PAGE(level2_ident_pgt)
+SYM_DATA_END(level3_ident_pgt)
+SYM_DATA_START_PAGE_ALIGNED(level2_ident_pgt)
/*
* Since I easily can, map the first 1G.
* Don't set NX because code runs from these pages.
@@ -408,25 +408,29 @@ NEXT_PAGE(level2_ident_pgt)
* the CPU should ignore the bit.
*/
PMDS(0, __PAGE_KERNEL_IDENT_LARGE_EXEC, PTRS_PER_PMD)
+SYM_DATA_END(level2_ident_pgt)
#else
-NEXT_PGD_PAGE(init_top_pgt)
+SYM_DATA_START_KAISER_ALIGNED(init_top_pgt)
.fill 512,8,0
.fill PTI_USER_PGD_FILL,8,0
+SYM_DATA_END(init_top_pgt)
#endif

#ifdef CONFIG_X86_5LEVEL
-NEXT_PAGE(level4_kernel_pgt)
+SYM_DATA_START_PAGE_ALIGNED(level4_kernel_pgt)
.fill 511,8,0
.quad level3_kernel_pgt - __START_KERNEL_map + _PAGE_TABLE_NOENC
+SYM_DATA_END(level4_kernel_pgt)
#endif

-NEXT_PAGE(level3_kernel_pgt)
+SYM_DATA_START_PAGE_ALIGNED(level3_kernel_pgt)
.fill L3_START_KERNEL,8,0
/* (2^48-(2*1024*1024*1024)-((2^39)*511))/(2^30) = 510 */
.quad level2_kernel_pgt - __START_KERNEL_map + _KERNPG_TABLE_NOENC
.quad level2_fixmap_pgt - __START_KERNEL_map + _PAGE_TABLE_NOENC
+SYM_DATA_END(level3_kernel_pgt)

-NEXT_PAGE(level2_kernel_pgt)
+SYM_DATA_START_PAGE_ALIGNED(level2_kernel_pgt)
/*
* 512 MB kernel mapping. We spend a full page on this pagetable
* anyway.
@@ -443,25 +447,26 @@ NEXT_PAGE(level2_kernel_pgt)
*/
PMDS(0, __PAGE_KERNEL_LARGE_EXEC,
KERNEL_IMAGE_SIZE/PMD_SIZE)
+SYM_DATA_END(level2_kernel_pgt)

-NEXT_PAGE(level2_fixmap_pgt)
+SYM_DATA_START_PAGE_ALIGNED(level2_fixmap_pgt)
.fill 506,8,0
.quad level1_fixmap_pgt - __START_KERNEL_map + _PAGE_TABLE_NOENC
/* 8MB reserved for vsyscalls + a 2MB hole = 4 + 1 entries */
.fill 5,8,0
+SYM_DATA_END(level2_fixmap_pgt)

-NEXT_PAGE(level1_fixmap_pgt)
+SYM_DATA_START_PAGE_ALIGNED(level1_fixmap_pgt)
.fill 512,8,0
+SYM_DATA_END(level1_fixmap_pgt)

#undef PMDS

.data
.align 16
- .globl early_gdt_descr
-early_gdt_descr:
- .word GDT_ENTRIES*8-1
-early_gdt_descr_base:
- .quad INIT_PER_CPU_VAR(gdt_page)
+
+SYM_DATA(early_gdt_descr, .word GDT_ENTRIES*8-1)
+SYM_DATA_LOCAL(early_gdt_descr_base, .quad INIT_PER_CPU_VAR(gdt_page))

/* This must match the first entry in level2_kernel_pgt */
SYM_DATA(phys_base, .quad 0x0000000000000000)
@@ -470,7 +475,8 @@ EXPORT_SYMBOL(phys_base)
#include "../../x86/xen/xen-head.S"

__PAGE_ALIGNED_BSS
-NEXT_PAGE(empty_zero_page)
+SYM_DATA_START_PAGE_ALIGNED(empty_zero_page)
.skip PAGE_SIZE
+SYM_DATA_END(empty_zero_page)
EXPORT_SYMBOL(empty_zero_page)

--
2.16.3


2018-05-18 09:25:45

by Jiri Slaby

[permalink] [raw]
Subject: [PATCH v6 09/28] x86/asm: annotate aliases

_key_expansion_128 is an alias to _key_expansion_256a, __memcpy to
memcpy, xen_syscall32_target to xen_sysenter_target, and so on. Annotate
them all using the new SYM_FUNC_START_ALIAS, SYM_FUNC_START_LOCAL_ALIAS,
and SYM_FUNC_END_ALIAS. This will make the tools generating the
debuginfo happy.
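
For reference, an aliased function is now bracketed like this (condensed
from the memcpy hunk below; ENTRY/ENDPROC of the aliased symbol are
converted later in the series):

SYM_FUNC_START_ALIAS(__memcpy)
ENTRY(memcpy)
	...
	ret
ENDPROC(memcpy)
SYM_FUNC_END_ALIAS(__memcpy)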

Signed-off-by: Jiri Slaby <[email protected]>
Cc: Herbert Xu <[email protected]>
Cc: "David S. Miller" <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: <[email protected]>
Cc: Boris Ostrovsky <[email protected]>
Cc: Juergen Gross <[email protected]>
Reviewed-by: Juergen Gross <[email protected]> [xen parts]
Cc: <[email protected]>
Cc: <[email protected]>
---
arch/x86/crypto/aesni-intel_asm.S | 5 ++---
arch/x86/lib/memcpy_64.S | 4 ++--
arch/x86/lib/memmove_64.S | 4 ++--
arch/x86/lib/memset_64.S | 4 ++--
arch/x86/xen/xen-asm_64.S | 4 ++--
5 files changed, 10 insertions(+), 11 deletions(-)

diff --git a/arch/x86/crypto/aesni-intel_asm.S b/arch/x86/crypto/aesni-intel_asm.S
index b482ac1a1fb3..c85ecb163c78 100644
--- a/arch/x86/crypto/aesni-intel_asm.S
+++ b/arch/x86/crypto/aesni-intel_asm.S
@@ -1761,8 +1761,7 @@ ENDPROC(aesni_gcm_finalize)
#endif


-.align 4
-_key_expansion_128:
+SYM_FUNC_START_LOCAL_ALIAS(_key_expansion_128)
SYM_FUNC_START_LOCAL(_key_expansion_256a)
pshufd $0b11111111, %xmm1, %xmm1
shufps $0b00010000, %xmm0, %xmm4
@@ -1773,8 +1772,8 @@ SYM_FUNC_START_LOCAL(_key_expansion_256a)
movaps %xmm0, (TKEYP)
add $0x10, TKEYP
ret
-ENDPROC(_key_expansion_128)
SYM_FUNC_END(_key_expansion_256a)
+SYM_FUNC_END_ALIAS(_key_expansion_128)

SYM_FUNC_START_LOCAL(_key_expansion_192a)
pshufd $0b01010101, %xmm1, %xmm1
diff --git a/arch/x86/lib/memcpy_64.S b/arch/x86/lib/memcpy_64.S
index 9a53a06e5a3e..4911b1c61aa8 100644
--- a/arch/x86/lib/memcpy_64.S
+++ b/arch/x86/lib/memcpy_64.S
@@ -26,7 +26,7 @@
* Output:
* rax original destination
*/
-ENTRY(__memcpy)
+SYM_FUNC_START_ALIAS(__memcpy)
ENTRY(memcpy)
ALTERNATIVE_2 "jmp memcpy_orig", "", X86_FEATURE_REP_GOOD, \
"jmp memcpy_erms", X86_FEATURE_ERMS
@@ -40,7 +40,7 @@ ENTRY(memcpy)
rep movsb
ret
ENDPROC(memcpy)
-ENDPROC(__memcpy)
+SYM_FUNC_END_ALIAS(__memcpy)
EXPORT_SYMBOL(memcpy)
EXPORT_SYMBOL(__memcpy)

diff --git a/arch/x86/lib/memmove_64.S b/arch/x86/lib/memmove_64.S
index bbec69d8223b..50c1648311b3 100644
--- a/arch/x86/lib/memmove_64.S
+++ b/arch/x86/lib/memmove_64.S
@@ -26,7 +26,7 @@
*/
.weak memmove

-ENTRY(memmove)
+SYM_FUNC_START_ALIAS(memmove)
ENTRY(__memmove)

/* Handle more 32 bytes in loop */
@@ -208,6 +208,6 @@ ENTRY(__memmove)
13:
retq
ENDPROC(__memmove)
-ENDPROC(memmove)
+SYM_FUNC_END_ALIAS(memmove)
EXPORT_SYMBOL(__memmove)
EXPORT_SYMBOL(memmove)
diff --git a/arch/x86/lib/memset_64.S b/arch/x86/lib/memset_64.S
index 9bc861c71e75..927ac44d34aa 100644
--- a/arch/x86/lib/memset_64.S
+++ b/arch/x86/lib/memset_64.S
@@ -19,7 +19,7 @@
*
* rax original destination
*/
-ENTRY(memset)
+SYM_FUNC_START_ALIAS(memset)
ENTRY(__memset)
/*
* Some CPUs support enhanced REP MOVSB/STOSB feature. It is recommended
@@ -43,8 +43,8 @@ ENTRY(__memset)
rep stosb
movq %r9,%rax
ret
-ENDPROC(memset)
ENDPROC(__memset)
+SYM_FUNC_END_ALIAS(memset)
EXPORT_SYMBOL(memset)
EXPORT_SYMBOL(__memset)

diff --git a/arch/x86/xen/xen-asm_64.S b/arch/x86/xen/xen-asm_64.S
index 417b339e5c8e..e8f6f482bb20 100644
--- a/arch/x86/xen/xen-asm_64.S
+++ b/arch/x86/xen/xen-asm_64.S
@@ -164,13 +164,13 @@ ENDPROC(xen_sysenter_target)

#else /* !CONFIG_IA32_EMULATION */

-ENTRY(xen_syscall32_target)
+SYM_FUNC_START_ALIAS(xen_syscall32_target)
ENTRY(xen_sysenter_target)
lea 16(%rsp), %rsp /* strip %rcx, %r11 */
mov $-ENOSYS, %rax
pushq $0
jmp hypercall_iret
-ENDPROC(xen_syscall32_target)
ENDPROC(xen_sysenter_target)
+SYM_FUNC_END_ALIAS(xen_syscall32_target)

#endif /* CONFIG_IA32_EMULATION */
--
2.16.3


2018-05-18 09:25:48

by Jiri Slaby

[permalink] [raw]
Subject: [PATCH v6 13/28] um: annotate data appropriately

Use the new SYM_DATA_START and SYM_DATA_END_LABEL macros for vdso_start.

We get:
0000 2376 OBJECT GLOBAL DEFAULT 4 vdso_start
0948 0 OBJECT GLOBAL DEFAULT 4 vdso_end
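
In other words, SYM_DATA_END_LABEL both terminates vdso_start and emits
vdso_end as a zero-size object at the same address (condensed from the
diff below):

SYM_DATA_START(vdso_start)
	.incbin "arch/x86/um/vdso/vdso.so"
SYM_DATA_END_LABEL(vdso_start, SYM_L_GLOBAL, vdso_end)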

Signed-off-by: Jiri Slaby <[email protected]>
Cc: Jeff Dike <[email protected]>
Cc: Richard Weinberger <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
---
arch/x86/um/vdso/vdso.S | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/x86/um/vdso/vdso.S b/arch/x86/um/vdso/vdso.S
index a4a3870dc059..a6eaf293a73b 100644
--- a/arch/x86/um/vdso/vdso.S
+++ b/arch/x86/um/vdso/vdso.S
@@ -1,11 +1,11 @@
/* SPDX-License-Identifier: GPL-2.0 */
#include <linux/init.h>
+#include <linux/linkage.h>

__INITDATA

- .globl vdso_start, vdso_end
-vdso_start:
+SYM_DATA_START(vdso_start)
.incbin "arch/x86/um/vdso/vdso.so"
-vdso_end:
+SYM_DATA_END_LABEL(vdso_start, SYM_L_GLOBAL, vdso_end)

__FINIT
--
2.16.3


2018-05-18 09:26:07

by Jiri Slaby

[permalink] [raw]
Subject: [PATCH v6 08/28] x86/boot/compressed: annotate local functions

relocated, paging_enabled, and no_longmode are self-standing local
functions, so annotate them as such. paging_enabled is annotated as
NOALIGN, since the trampoline code has to be compact.
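
A minimal sketch of the NOALIGN case (as in the hunk below):

SYM_FUNC_START_LOCAL_NOALIGN(paging_enabled)
	/* no alignment padding -- the trampoline copy must stay compact */
	jmp	*%rdi
SYM_FUNC_END(paging_enabled)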

Signed-off-by: Jiri Slaby <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: [email protected]
---
arch/x86/boot/compressed/head_32.S | 3 ++-
arch/x86/boot/compressed/head_64.S | 9 ++++++---
2 files changed, 8 insertions(+), 4 deletions(-)

diff --git a/arch/x86/boot/compressed/head_32.S b/arch/x86/boot/compressed/head_32.S
index 37380c0d5999..7e8ab0bb6968 100644
--- a/arch/x86/boot/compressed/head_32.S
+++ b/arch/x86/boot/compressed/head_32.S
@@ -209,7 +209,7 @@ ENDPROC(efi32_stub_entry)
#endif

.text
-relocated:
+SYM_FUNC_START_LOCAL(relocated)

/*
* Clear BSS (stack is currently empty)
@@ -260,6 +260,7 @@ relocated:
*/
xorl %ebx, %ebx
jmp *%eax
+SYM_FUNC_END(relocated)

#ifdef CONFIG_EFI_STUB
.data
diff --git a/arch/x86/boot/compressed/head_64.S b/arch/x86/boot/compressed/head_64.S
index fca012baba19..614dc4868915 100644
--- a/arch/x86/boot/compressed/head_64.S
+++ b/arch/x86/boot/compressed/head_64.S
@@ -469,7 +469,7 @@ ENDPROC(efi64_stub_entry)
#endif

.text
-relocated:
+SYM_FUNC_START_LOCAL(relocated)

/*
* Clear BSS (stack is currently empty)
@@ -511,6 +511,7 @@ relocated:
* Jump to the decompressed kernel.
*/
jmp *%rax
+SYM_FUNC_END(relocated)

.code32
/*
@@ -575,9 +576,10 @@ ENTRY(trampoline_32bit_src)
lret

.code64
-paging_enabled:
+SYM_FUNC_START_LOCAL_NOALIGN(paging_enabled)
/* Return from the trampoline */
jmp *%rdi
+SYM_FUNC_END(paging_enabled)

/*
* The trampoline code has a size limit.
@@ -587,11 +589,12 @@ paging_enabled:
.org trampoline_32bit_src + TRAMPOLINE_32BIT_CODE_SIZE

.code32
-no_longmode:
+SYM_FUNC_START_LOCAL(no_longmode)
/* This isn't an x86-64 CPU, so hang intentionally, we cannot continue */
1:
hlt
jmp 1b
+SYM_FUNC_END(no_longmode)

#include "../../kernel/verify_cpu.S"

--
2.16.3


2018-05-18 09:26:56

by Jiri Slaby

[permalink] [raw]
Subject: [PATCH v6 10/28] x86/asm/entry: annotate interrupt symbols properly

* annotate functions properly with SYM_CODE_START, SYM_CODE_START_LOCAL*,
  and SYM_CODE_END -- these are not C-like functions, so we have to
  annotate them using CODE.
* use SYM_INNER_LABEL* for labels in the middle of other functions (see
  the sketch below)

[v4] alignments preserved
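
A condensed sketch of the two inner-label flavours (names from the
entry_64.S hunks below):

	/* keeps the alignment the old ENTRY provided */
SYM_INNER_LABEL_ALIGN(native_iret, SYM_L_GLOBAL)

	/* plain label, no alignment */
SYM_INNER_LABEL(native_irq_return_iret, SYM_L_GLOBAL)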

Signed-off-by: Jiri Slaby <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: <[email protected]>
---
arch/x86/entry/entry_32.S | 15 ++++++++-------
arch/x86/entry/entry_64.S | 9 ++++-----
2 files changed, 12 insertions(+), 12 deletions(-)

diff --git a/arch/x86/entry/entry_32.S b/arch/x86/entry/entry_32.S
index 030fbeba5f4a..4b80b2fb4de1 100644
--- a/arch/x86/entry/entry_32.S
+++ b/arch/x86/entry/entry_32.S
@@ -340,7 +340,7 @@ ret_from_intr:
cmpl $USER_RPL, %eax
jb resume_kernel # not returning to v8086 or userspace

-ENTRY(resume_userspace)
+SYM_INNER_LABEL_ALIGN(resume_userspace, SYM_L_LOCAL)
DISABLE_INTERRUPTS(CLBR_ANY)
TRACE_IRQS_OFF
movl %esp, %eax
@@ -579,10 +579,11 @@ restore_all:
INTERRUPT_RETURN

.section .fixup, "ax"
-ENTRY(iret_exc )
+SYM_CODE_START(iret_exc)
pushl $0 # no error code
pushl $do_iret_error
jmp common_exception
+SYM_CODE_END(iret_exc)
.previous
_ASM_EXTABLE(.Lirq_return, iret_exc)

@@ -674,7 +675,7 @@ END(irq_entries_start)
* so IRQ-flags tracing has to follow that:
*/
.p2align CONFIG_X86_L1_CACHE_SHIFT
-common_interrupt:
+SYM_CODE_START_LOCAL(common_interrupt)
ASM_CLAC
addl $-0x80, (%esp) /* Adjust vector into the [-256, -1] range */
SAVE_ALL
@@ -683,7 +684,7 @@ common_interrupt:
movl %esp, %eax
call do_IRQ
jmp ret_from_intr
-ENDPROC(common_interrupt)
+SYM_CODE_END(common_interrupt)

#define BUILD_INTERRUPT3(name, nr, fn) \
ENTRY(name) \
@@ -835,7 +836,7 @@ ENTRY(xen_hypervisor_callback)

jmp xen_iret_crit_fixup

-ENTRY(xen_do_upcall)
+SYM_INNER_LABEL_ALIGN(xen_do_upcall, SYM_L_GLOBAL)
1: mov %esp, %eax
call xen_evtchn_do_upcall
#ifndef CONFIG_PREEMPT
@@ -920,7 +921,7 @@ ENTRY(page_fault)
jmp common_exception
END(page_fault)

-common_exception:
+SYM_CODE_START_LOCAL_NOALIGN(common_exception)
/* the function address is in %gs's slot on the stack */
pushl %fs
pushl %es
@@ -950,7 +951,7 @@ common_exception:
movl %esp, %eax # pt_regs pointer
CALL_NOSPEC %edi
jmp ret_from_exception
-END(common_exception)
+SYM_CODE_END(common_exception)

ENTRY(debug)
/*
diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
index 80045cb2f44c..e0798c044055 100644
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -637,7 +637,7 @@ END(interrupt_entry)
* then jump to common_interrupt.
*/
.p2align CONFIG_X86_L1_CACHE_SHIFT
-common_interrupt:
+SYM_CODE_START_LOCAL(common_interrupt)
addq $-0x80, (%rsp) /* Adjust vector to [-256, -1] range */
call interrupt_entry
UNWIND_HINT_REGS indirect=1
@@ -733,7 +733,7 @@ GLOBAL(restore_regs_and_return_to_kernel)
*/
INTERRUPT_RETURN

-ENTRY(native_iret)
+SYM_INNER_LABEL_ALIGN(native_iret, SYM_L_GLOBAL)
UNWIND_HINT_IRET_REGS
/*
* Are we returning to a stack segment from the LDT? Note: in
@@ -744,8 +744,7 @@ ENTRY(native_iret)
jnz native_irq_return_ldt
#endif

-.global native_irq_return_iret
-native_irq_return_iret:
+SYM_INNER_LABEL(native_irq_return_iret, SYM_L_GLOBAL)
/*
* This may fault. Non-paranoid faults on return to userspace are
* handled by fixup_bad_iret. These include #SS, #GP, and #NP.
@@ -827,7 +826,7 @@ native_irq_return_ldt:
*/
jmp native_irq_return_iret
#endif
-END(common_interrupt)
+SYM_CODE_END(common_interrupt)

/*
* APIC interrupts.
--
2.16.3


2018-05-18 09:27:06

by Jiri Slaby

[permalink] [raw]
Subject: [PATCH v6 05/28] x86/asm/entry: annotate THUNKs

Place SYM_CODE_START_NOALIGN and SYM_CODE_END around the THUNK macro
body, given that it:
1) generates non-C-like functions, and
2) was not marked as aligned.

The common tail .L_restore is put inside SYM_CODE_START_LOCAL_NOALIGN
and SYM_CODE_END too.

The result:
Value Size Type Bind Vis Ndx Name
0000 28 NOTYPE GLOBAL DEFAULT 1 trace_hardirqs_on_thunk
001c 28 NOTYPE GLOBAL DEFAULT 1 trace_hardirqs_off_thunk
0038 24 NOTYPE GLOBAL DEFAULT 1 lockdep_sys_exit_thunk
0050 24 NOTYPE GLOBAL DEFAULT 1 ___preempt_schedule
0068 24 NOTYPE GLOBAL DEFAULT 1 ___preempt_schedule_notrace
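
In other words, the macro body is now bracketed as follows (32-bit
variant, condensed from the hunk below):

.macro THUNK name, func, put_ret_addr_in_eax=0
SYM_CODE_START_NOALIGN(\name)
	...
	ret
	_ASM_NOKPROBE(\name)
SYM_CODE_END(\name)
.endm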

Signed-off-by: Jiri Slaby <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: <[email protected]>
---
arch/x86/entry/thunk_32.S | 4 ++--
arch/x86/entry/thunk_64.S | 8 ++++----
2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/arch/x86/entry/thunk_32.S b/arch/x86/entry/thunk_32.S
index fee6bc79b987..422354b204f4 100644
--- a/arch/x86/entry/thunk_32.S
+++ b/arch/x86/entry/thunk_32.S
@@ -10,8 +10,7 @@

/* put return address in eax (arg1) */
.macro THUNK name, func, put_ret_addr_in_eax=0
- .globl \name
-\name:
+SYM_CODE_START_NOALIGN(\name)
pushl %eax
pushl %ecx
pushl %edx
@@ -27,6 +26,7 @@
popl %eax
ret
_ASM_NOKPROBE(\name)
+SYM_CODE_END(\name)
.endm

#ifdef CONFIG_TRACE_IRQFLAGS
diff --git a/arch/x86/entry/thunk_64.S b/arch/x86/entry/thunk_64.S
index be36bf4e0957..ea5e2b0e6611 100644
--- a/arch/x86/entry/thunk_64.S
+++ b/arch/x86/entry/thunk_64.S
@@ -12,9 +12,7 @@

/* rdi: arg1 ... normal C conventions. rax is saved/restored. */
.macro THUNK name, func, put_ret_addr_in_rdi=0
- .globl \name
- .type \name, @function
-\name:
+SYM_CODE_START_NOALIGN(\name)
pushq %rbp
movq %rsp, %rbp

@@ -36,6 +34,7 @@
call \func
jmp .L_restore
_ASM_NOKPROBE(\name)
+SYM_CODE_END(\name)
.endm

#ifdef CONFIG_TRACE_IRQFLAGS
@@ -57,7 +56,7 @@
#if defined(CONFIG_TRACE_IRQFLAGS) \
|| defined(CONFIG_DEBUG_LOCK_ALLOC) \
|| defined(CONFIG_PREEMPT)
-.L_restore:
+SYM_CODE_START_LOCAL_NOALIGN(.L_restore)
popq %r11
popq %r10
popq %r9
@@ -70,4 +69,5 @@
popq %rbp
ret
_ASM_NOKPROBE(.L_restore)
+SYM_CODE_END(.L_restore)
#endif
--
2.16.3


2018-05-18 09:27:10

by Jiri Slaby

[permalink] [raw]
Subject: [PATCH v6 06/28] x86/asm: annotate local pseudo-functions

Use the newly added SYM_CODE_START_LOCAL* to annotate the starts of all
pseudo-functions (those ending with END until now) which do not have a
".globl" annotation. This is needed to balance END for tools that generate
debuginfo. Note that we switch from END to SYM_CODE_END too so that
everybody can see the pairing.

We are not annotating C-like functions (which handle frame ptr etc.)
here, hence we use SYM_CODE_* macros here, not SYM_FUNC_*. Note that
early_idt_handler_common already had ENDPROC -- switch that to
SYM_CODE_END for the same reason as above.

bogus_64_magic, bad_address, bad_get_user*, and bad_put_user are now
aligned, as they are separate functions. They do not mind being aligned
-- there is no need to be compact there.

early_idt_handler_common is aligned now too, as it is after
early_idt_handler_array, so as well no need to be compact there.

verify_cpu is self-standing and included in other .S files, so align it
too.

The others have their alignment preserved as it used to be (using the
_NOALIGN variants of the macros).

[v3] annotate more functions
[v4] describe the alignments changes

Signed-off-by: Jiri Slaby <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: <[email protected]>
---
arch/x86/entry/entry_32.S | 5 ++---
arch/x86/entry/entry_64.S | 3 ++-
arch/x86/kernel/acpi/wakeup_64.S | 3 ++-
arch/x86/kernel/head_32.S | 4 ++--
arch/x86/kernel/head_64.S | 4 ++--
arch/x86/kernel/verify_cpu.S | 4 ++--
arch/x86/lib/getuser.S | 8 ++++----
arch/x86/lib/putuser.S | 4 ++--
8 files changed, 18 insertions(+), 17 deletions(-)

diff --git a/arch/x86/entry/entry_32.S b/arch/x86/entry/entry_32.S
index bb4f540be234..030fbeba5f4a 100644
--- a/arch/x86/entry/entry_32.S
+++ b/arch/x86/entry/entry_32.S
@@ -323,8 +323,7 @@ END(ret_from_fork)
*/

# userspace resumption stub bypassing syscall exit tracing
- ALIGN
-ret_from_exception:
+SYM_CODE_START_LOCAL(ret_from_exception)
preempt_stop(CLBR_ANY)
ret_from_intr:
#ifdef CONFIG_VM86
@@ -347,7 +346,7 @@ ENTRY(resume_userspace)
movl %esp, %eax
call prepare_exit_to_usermode
jmp restore_all
-END(ret_from_exception)
+SYM_CODE_END(ret_from_exception)

#ifdef CONFIG_PREEMPT
ENTRY(resume_kernel)
diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
index c9648b287d7f..80045cb2f44c 100644
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -1028,7 +1028,7 @@ EXPORT_SYMBOL(native_load_gs_index)
_ASM_EXTABLE(.Lgs_change, bad_gs)
.section .fixup, "ax"
/* running with kernelgs */
-bad_gs:
+SYM_CODE_START_LOCAL_NOALIGN(bad_gs)
SWAPGS /* switch back to user gs */
.macro ZAP_GS
/* This can't be a string because the preprocessor needs to see it. */
@@ -1039,6 +1039,7 @@ bad_gs:
xorl %eax, %eax
movl %eax, %gs
jmp 2b
+SYM_CODE_END(bad_gs)
.previous

/* Call softirq on interrupt stack. Interrupts are off. */
diff --git a/arch/x86/kernel/acpi/wakeup_64.S b/arch/x86/kernel/acpi/wakeup_64.S
index 551758f48eb7..6c60fe346583 100644
--- a/arch/x86/kernel/acpi/wakeup_64.S
+++ b/arch/x86/kernel/acpi/wakeup_64.S
@@ -36,8 +36,9 @@ ENTRY(wakeup_long64)
jmp *%rax
ENDPROC(wakeup_long64)

-bogus_64_magic:
+SYM_CODE_START_LOCAL(bogus_64_magic)
jmp bogus_64_magic
+SYM_CODE_END(bogus_64_magic)

ENTRY(do_suspend_lowlevel)
FRAME_BEGIN
diff --git a/arch/x86/kernel/head_32.S b/arch/x86/kernel/head_32.S
index 80965fd75fea..727632a20110 100644
--- a/arch/x86/kernel/head_32.S
+++ b/arch/x86/kernel/head_32.S
@@ -409,7 +409,7 @@ ENTRY(early_idt_handler_array)
.endr
ENDPROC(early_idt_handler_array)

-early_idt_handler_common:
+SYM_CODE_START_LOCAL(early_idt_handler_common)
/*
* The stack is the hardware frame, an error code or zero, and the
* vector number.
@@ -460,7 +460,7 @@ early_idt_handler_common:
decl %ss:early_recursion_flag
addl $4, %esp /* pop pt_regs->orig_ax */
iret
-ENDPROC(early_idt_handler_common)
+SYM_CODE_END(early_idt_handler_common)

/* This is the default interrupt "handler" :-) */
ENTRY(early_ignore_irq)
diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S
index 17543533642d..d3a0f5b1f1b6 100644
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -292,7 +292,7 @@ ENTRY(early_idt_handler_array)
UNWIND_HINT_IRET_REGS offset=16
END(early_idt_handler_array)

-early_idt_handler_common:
+SYM_CODE_START_LOCAL(early_idt_handler_common)
/*
* The stack is the hardware frame, an error code or zero, and the
* vector number.
@@ -334,7 +334,7 @@ early_idt_handler_common:
20:
decl early_recursion_flag(%rip)
jmp restore_regs_and_return_to_kernel
-END(early_idt_handler_common)
+SYM_CODE_END(early_idt_handler_common)

__INITDATA

diff --git a/arch/x86/kernel/verify_cpu.S b/arch/x86/kernel/verify_cpu.S
index 3d3c2f71f617..fd60f1ac5fec 100644
--- a/arch/x86/kernel/verify_cpu.S
+++ b/arch/x86/kernel/verify_cpu.S
@@ -33,7 +33,7 @@
#include <asm/cpufeatures.h>
#include <asm/msr-index.h>

-ENTRY(verify_cpu)
+SYM_FUNC_START_LOCAL(verify_cpu)
pushf # Save caller passed flags
push $0 # Kill any dangerous flags
popf
@@ -139,4 +139,4 @@ ENTRY(verify_cpu)
popf # Restore caller passed flags
xorl %eax, %eax
ret
-ENDPROC(verify_cpu)
+SYM_FUNC_END(verify_cpu)
diff --git a/arch/x86/lib/getuser.S b/arch/x86/lib/getuser.S
index 49b167f73215..a5d7fe7fe401 100644
--- a/arch/x86/lib/getuser.S
+++ b/arch/x86/lib/getuser.S
@@ -115,21 +115,21 @@ ENDPROC(__get_user_8)
EXPORT_SYMBOL(__get_user_8)


-bad_get_user:
+SYM_CODE_START_LOCAL(bad_get_user)
xor %edx,%edx
mov $(-EFAULT),%_ASM_AX
ASM_CLAC
ret
-END(bad_get_user)
+SYM_CODE_END(bad_get_user)

#ifdef CONFIG_X86_32
-bad_get_user_8:
+SYM_CODE_START_LOCAL(bad_get_user_8)
xor %edx,%edx
xor %ecx,%ecx
mov $(-EFAULT),%_ASM_AX
ASM_CLAC
ret
-END(bad_get_user_8)
+SYM_CODE_END(bad_get_user_8)
#endif

_ASM_EXTABLE(1b,bad_get_user)
diff --git a/arch/x86/lib/putuser.S b/arch/x86/lib/putuser.S
index 96dce5fe2a35..8234d8559385 100644
--- a/arch/x86/lib/putuser.S
+++ b/arch/x86/lib/putuser.S
@@ -89,10 +89,10 @@ ENTRY(__put_user_8)
ENDPROC(__put_user_8)
EXPORT_SYMBOL(__put_user_8)

-bad_put_user:
+SYM_CODE_START_LOCAL(bad_put_user)
movl $-EFAULT,%eax
EXIT
-END(bad_put_user)
+SYM_CODE_END(bad_put_user)

_ASM_EXTABLE(1b,bad_put_user)
_ASM_EXTABLE(2b,bad_put_user)
--
2.16.3


2018-05-18 09:27:44

by Jiri Slaby

[permalink] [raw]
Subject: [PATCH v6 02/28] x86/asm/suspend: drop ENTRY from local data

ENTRY was intended for functions and shall be paired with ENDPROC.
ENTRY also aligns symbols, which creates unnecessary holes between the
data here.

So drop ENTRY from saved_eip in wakeup_32 and from the many saved_*
symbols in wakeup_64, as these symbols are local only.

We could use SYM_DATA_LOCAL for these symbols, but it was discouraged
earlier [1].

[1] https://lkml.org/lkml/2017/4/27/244

Signed-off-by: Jiri Slaby <[email protected]>
Cc: "Rafael J. Wysocki" <[email protected]>
Cc: Len Brown <[email protected]>
Cc: Pavel Machek <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: [email protected]
Cc: [email protected]
---
arch/x86/kernel/acpi/wakeup_32.S | 2 +-
arch/x86/kernel/acpi/wakeup_64.S | 12 ++++++------
2 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kernel/acpi/wakeup_32.S b/arch/x86/kernel/acpi/wakeup_32.S
index 0c26b1b44e51..4203d4f0c68d 100644
--- a/arch/x86/kernel/acpi/wakeup_32.S
+++ b/arch/x86/kernel/acpi/wakeup_32.S
@@ -90,7 +90,7 @@ ret_point:
.data
ALIGN
ENTRY(saved_magic) .long 0
-ENTRY(saved_eip) .long 0
+saved_eip: .long 0

# saved registers
saved_idt: .long 0,0
diff --git a/arch/x86/kernel/acpi/wakeup_64.S b/arch/x86/kernel/acpi/wakeup_64.S
index 50b8ed0317a3..510fa12aab73 100644
--- a/arch/x86/kernel/acpi/wakeup_64.S
+++ b/arch/x86/kernel/acpi/wakeup_64.S
@@ -125,12 +125,12 @@ ENTRY(do_suspend_lowlevel)
ENDPROC(do_suspend_lowlevel)

.data
-ENTRY(saved_rbp) .quad 0
-ENTRY(saved_rsi) .quad 0
-ENTRY(saved_rdi) .quad 0
-ENTRY(saved_rbx) .quad 0
+saved_rbp: .quad 0
+saved_rsi: .quad 0
+saved_rdi: .quad 0
+saved_rbx: .quad 0

-ENTRY(saved_rip) .quad 0
-ENTRY(saved_rsp) .quad 0
+saved_rip: .quad 0
+saved_rsp: .quad 0

ENTRY(saved_magic) .quad 0
--
2.16.3


2018-05-18 09:28:28

by Jiri Slaby

[permalink] [raw]
Subject: [PATCH v6 07/28] x86/asm/crypto: annotate local functions

Use the newly added SYM_FUNC_START_LOCAL to annotate the starts of all
functions which do not have a ".globl" annotation but whose ends are
annotated by ENDPROC. This is needed to balance ENDPROC for tools that
generate debuginfo.

To be symmetric, we also convert their ENDPROCs to the new SYM_FUNC_END.
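
As a minimal sketch of the conversion pattern (the helper name below is
hypothetical), a local routine changes like this; the explicit .align can be
dropped because SYM_FUNC_START_LOCAL already applies the architecture's
__ALIGN:

  /* before */
  .align 4
  _example_helper:
          ret
  ENDPROC(_example_helper)

  /* after */
  SYM_FUNC_START_LOCAL(_example_helper)
          ret
  SYM_FUNC_END(_example_helper)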

Signed-off-by: Jiri Slaby <[email protected]>
Cc: Herbert Xu <[email protected]>
Cc: "David S. Miller" <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: <[email protected]>
Cc: <[email protected]>
---
arch/x86/crypto/aesni-intel_asm.S | 49 ++++++++++++----------------
arch/x86/crypto/camellia-aesni-avx-asm_64.S | 20 ++++++------
arch/x86/crypto/camellia-aesni-avx2-asm_64.S | 20 ++++++------
arch/x86/crypto/cast5-avx-x86_64-asm_64.S | 8 ++---
arch/x86/crypto/cast6-avx-x86_64-asm_64.S | 8 ++---
arch/x86/crypto/ghash-clmulni-intel_asm.S | 4 +--
arch/x86/crypto/serpent-avx-x86_64-asm_64.S | 8 ++---
arch/x86/crypto/serpent-avx2-asm_64.S | 8 ++---
arch/x86/crypto/twofish-avx-x86_64-asm_64.S | 8 ++---
9 files changed, 62 insertions(+), 71 deletions(-)

diff --git a/arch/x86/crypto/aesni-intel_asm.S b/arch/x86/crypto/aesni-intel_asm.S
index e762ef417562..b482ac1a1fb3 100644
--- a/arch/x86/crypto/aesni-intel_asm.S
+++ b/arch/x86/crypto/aesni-intel_asm.S
@@ -1763,7 +1763,7 @@ ENDPROC(aesni_gcm_finalize)

.align 4
_key_expansion_128:
-_key_expansion_256a:
+SYM_FUNC_START_LOCAL(_key_expansion_256a)
pshufd $0b11111111, %xmm1, %xmm1
shufps $0b00010000, %xmm0, %xmm4
pxor %xmm4, %xmm0
@@ -1774,10 +1774,9 @@ _key_expansion_256a:
add $0x10, TKEYP
ret
ENDPROC(_key_expansion_128)
-ENDPROC(_key_expansion_256a)
+SYM_FUNC_END(_key_expansion_256a)

-.align 4
-_key_expansion_192a:
+SYM_FUNC_START_LOCAL(_key_expansion_192a)
pshufd $0b01010101, %xmm1, %xmm1
shufps $0b00010000, %xmm0, %xmm4
pxor %xmm4, %xmm0
@@ -1799,10 +1798,9 @@ _key_expansion_192a:
movaps %xmm1, 0x10(TKEYP)
add $0x20, TKEYP
ret
-ENDPROC(_key_expansion_192a)
+SYM_FUNC_END(_key_expansion_192a)

-.align 4
-_key_expansion_192b:
+SYM_FUNC_START_LOCAL(_key_expansion_192b)
pshufd $0b01010101, %xmm1, %xmm1
shufps $0b00010000, %xmm0, %xmm4
pxor %xmm4, %xmm0
@@ -1819,10 +1817,9 @@ _key_expansion_192b:
movaps %xmm0, (TKEYP)
add $0x10, TKEYP
ret
-ENDPROC(_key_expansion_192b)
+SYM_FUNC_END(_key_expansion_192b)

-.align 4
-_key_expansion_256b:
+SYM_FUNC_START_LOCAL(_key_expansion_256b)
pshufd $0b10101010, %xmm1, %xmm1
shufps $0b00010000, %xmm2, %xmm4
pxor %xmm4, %xmm2
@@ -1832,7 +1829,7 @@ _key_expansion_256b:
movaps %xmm2, (TKEYP)
add $0x10, TKEYP
ret
-ENDPROC(_key_expansion_256b)
+SYM_FUNC_END(_key_expansion_256b)

/*
* int aesni_set_key(struct crypto_aes_ctx *ctx, const u8 *in_key,
@@ -1985,8 +1982,7 @@ ENDPROC(aesni_enc)
* KEY
* TKEYP (T1)
*/
-.align 4
-_aesni_enc1:
+SYM_FUNC_START_LOCAL(_aesni_enc1)
movaps (KEYP), KEY # key
mov KEYP, TKEYP
pxor KEY, STATE # round 0
@@ -2029,7 +2025,7 @@ _aesni_enc1:
movaps 0x70(TKEYP), KEY
AESENCLAST KEY STATE
ret
-ENDPROC(_aesni_enc1)
+SYM_FUNC_END(_aesni_enc1)

/*
* _aesni_enc4: internal ABI
@@ -2049,8 +2045,7 @@ ENDPROC(_aesni_enc1)
* KEY
* TKEYP (T1)
*/
-.align 4
-_aesni_enc4:
+SYM_FUNC_START_LOCAL(_aesni_enc4)
movaps (KEYP), KEY # key
mov KEYP, TKEYP
pxor KEY, STATE1 # round 0
@@ -2138,7 +2133,7 @@ _aesni_enc4:
AESENCLAST KEY STATE3
AESENCLAST KEY STATE4
ret
-ENDPROC(_aesni_enc4)
+SYM_FUNC_END(_aesni_enc4)

/*
* void aesni_dec (struct crypto_aes_ctx *ctx, u8 *dst, const u8 *src)
@@ -2177,8 +2172,7 @@ ENDPROC(aesni_dec)
* KEY
* TKEYP (T1)
*/
-.align 4
-_aesni_dec1:
+SYM_FUNC_START_LOCAL(_aesni_dec1)
movaps (KEYP), KEY # key
mov KEYP, TKEYP
pxor KEY, STATE # round 0
@@ -2221,7 +2215,7 @@ _aesni_dec1:
movaps 0x70(TKEYP), KEY
AESDECLAST KEY STATE
ret
-ENDPROC(_aesni_dec1)
+SYM_FUNC_END(_aesni_dec1)

/*
* _aesni_dec4: internal ABI
@@ -2241,8 +2235,7 @@ ENDPROC(_aesni_dec1)
* KEY
* TKEYP (T1)
*/
-.align 4
-_aesni_dec4:
+SYM_FUNC_START_LOCAL(_aesni_dec4)
movaps (KEYP), KEY # key
mov KEYP, TKEYP
pxor KEY, STATE1 # round 0
@@ -2330,7 +2323,7 @@ _aesni_dec4:
AESDECLAST KEY STATE3
AESDECLAST KEY STATE4
ret
-ENDPROC(_aesni_dec4)
+SYM_FUNC_END(_aesni_dec4)

/*
* void aesni_ecb_enc(struct crypto_aes_ctx *ctx, const u8 *dst, u8 *src,
@@ -2608,8 +2601,7 @@ ENDPROC(aesni_cbc_dec)
* INC: == 1, in little endian
* BSWAP_MASK == endian swapping mask
*/
-.align 4
-_aesni_inc_init:
+SYM_FUNC_START_LOCAL(_aesni_inc_init)
movaps .Lbswap_mask, BSWAP_MASK
movaps IV, CTR
PSHUFB_XMM BSWAP_MASK CTR
@@ -2617,7 +2609,7 @@ _aesni_inc_init:
MOVQ_R64_XMM TCTR_LOW INC
MOVQ_R64_XMM CTR TCTR_LOW
ret
-ENDPROC(_aesni_inc_init)
+SYM_FUNC_END(_aesni_inc_init)

/*
* _aesni_inc: internal ABI
@@ -2634,8 +2626,7 @@ ENDPROC(_aesni_inc_init)
* CTR: == output IV, in little endian
* TCTR_LOW: == lower qword of CTR
*/
-.align 4
-_aesni_inc:
+SYM_FUNC_START_LOCAL(_aesni_inc)
paddq INC, CTR
add $1, TCTR_LOW
jnc .Linc_low
@@ -2646,7 +2637,7 @@ _aesni_inc:
movaps CTR, IV
PSHUFB_XMM BSWAP_MASK IV
ret
-ENDPROC(_aesni_inc)
+SYM_FUNC_END(_aesni_inc)

/*
* void aesni_ctr_enc(struct crypto_aes_ctx *ctx, const u8 *dst, u8 *src,
diff --git a/arch/x86/crypto/camellia-aesni-avx-asm_64.S b/arch/x86/crypto/camellia-aesni-avx-asm_64.S
index a14af6eb09cb..f4408ca55fdb 100644
--- a/arch/x86/crypto/camellia-aesni-avx-asm_64.S
+++ b/arch/x86/crypto/camellia-aesni-avx-asm_64.S
@@ -189,20 +189,20 @@
* larger and would only be 0.5% faster (on sandy-bridge).
*/
.align 8
-roundsm16_x0_x1_x2_x3_x4_x5_x6_x7_y0_y1_y2_y3_y4_y5_y6_y7_cd:
+SYM_FUNC_START_LOCAL(roundsm16_x0_x1_x2_x3_x4_x5_x6_x7_y0_y1_y2_y3_y4_y5_y6_y7_cd)
roundsm16(%xmm0, %xmm1, %xmm2, %xmm3, %xmm4, %xmm5, %xmm6, %xmm7,
%xmm8, %xmm9, %xmm10, %xmm11, %xmm12, %xmm13, %xmm14, %xmm15,
%rcx, (%r9));
ret;
-ENDPROC(roundsm16_x0_x1_x2_x3_x4_x5_x6_x7_y0_y1_y2_y3_y4_y5_y6_y7_cd)
+SYM_FUNC_END(roundsm16_x0_x1_x2_x3_x4_x5_x6_x7_y0_y1_y2_y3_y4_y5_y6_y7_cd)

.align 8
-roundsm16_x4_x5_x6_x7_x0_x1_x2_x3_y4_y5_y6_y7_y0_y1_y2_y3_ab:
+SYM_FUNC_START_LOCAL(roundsm16_x4_x5_x6_x7_x0_x1_x2_x3_y4_y5_y6_y7_y0_y1_y2_y3_ab)
roundsm16(%xmm4, %xmm5, %xmm6, %xmm7, %xmm0, %xmm1, %xmm2, %xmm3,
%xmm12, %xmm13, %xmm14, %xmm15, %xmm8, %xmm9, %xmm10, %xmm11,
%rax, (%r9));
ret;
-ENDPROC(roundsm16_x4_x5_x6_x7_x0_x1_x2_x3_y4_y5_y6_y7_y0_y1_y2_y3_ab)
+SYM_FUNC_END(roundsm16_x4_x5_x6_x7_x0_x1_x2_x3_y4_y5_y6_y7_y0_y1_y2_y3_ab)

/*
* IN/OUT:
@@ -722,7 +722,7 @@ ENDPROC(roundsm16_x4_x5_x6_x7_x0_x1_x2_x3_y4_y5_y6_y7_y0_y1_y2_y3_ab)
.text

.align 8
-__camellia_enc_blk16:
+SYM_FUNC_START_LOCAL(__camellia_enc_blk16)
/* input:
* %rdi: ctx, CTX
* %rax: temporary storage, 256 bytes
@@ -806,10 +806,10 @@ __camellia_enc_blk16:
%xmm15, %rax, %rcx, 24);

jmp .Lenc_done;
-ENDPROC(__camellia_enc_blk16)
+SYM_FUNC_END(__camellia_enc_blk16)

.align 8
-__camellia_dec_blk16:
+SYM_FUNC_START_LOCAL(__camellia_dec_blk16)
/* input:
* %rdi: ctx, CTX
* %rax: temporary storage, 256 bytes
@@ -891,7 +891,7 @@ __camellia_dec_blk16:
((key_table + (24) * 8) + 4)(CTX));

jmp .Ldec_max24;
-ENDPROC(__camellia_dec_blk16)
+SYM_FUNC_END(__camellia_dec_blk16)

ENTRY(camellia_ecb_enc_16way)
/* input:
@@ -1120,7 +1120,7 @@ ENDPROC(camellia_ctr_16way)
vpxor tmp, iv, iv;

.align 8
-camellia_xts_crypt_16way:
+SYM_FUNC_START_LOCAL(camellia_xts_crypt_16way)
/* input:
* %rdi: ctx, CTX
* %rsi: dst (16 blocks)
@@ -1254,7 +1254,7 @@ camellia_xts_crypt_16way:

FRAME_END
ret;
-ENDPROC(camellia_xts_crypt_16way)
+SYM_FUNC_END(camellia_xts_crypt_16way)

ENTRY(camellia_xts_enc_16way)
/* input:
diff --git a/arch/x86/crypto/camellia-aesni-avx2-asm_64.S b/arch/x86/crypto/camellia-aesni-avx2-asm_64.S
index b66bbfa62f50..916a3e2b8ea4 100644
--- a/arch/x86/crypto/camellia-aesni-avx2-asm_64.S
+++ b/arch/x86/crypto/camellia-aesni-avx2-asm_64.S
@@ -228,20 +228,20 @@
* larger and would only marginally faster.
*/
.align 8
-roundsm32_x0_x1_x2_x3_x4_x5_x6_x7_y0_y1_y2_y3_y4_y5_y6_y7_cd:
+SYM_FUNC_START_LOCAL(roundsm32_x0_x1_x2_x3_x4_x5_x6_x7_y0_y1_y2_y3_y4_y5_y6_y7_cd)
roundsm32(%ymm0, %ymm1, %ymm2, %ymm3, %ymm4, %ymm5, %ymm6, %ymm7,
%ymm8, %ymm9, %ymm10, %ymm11, %ymm12, %ymm13, %ymm14, %ymm15,
%rcx, (%r9));
ret;
-ENDPROC(roundsm32_x0_x1_x2_x3_x4_x5_x6_x7_y0_y1_y2_y3_y4_y5_y6_y7_cd)
+SYM_FUNC_END(roundsm32_x0_x1_x2_x3_x4_x5_x6_x7_y0_y1_y2_y3_y4_y5_y6_y7_cd)

.align 8
-roundsm32_x4_x5_x6_x7_x0_x1_x2_x3_y4_y5_y6_y7_y0_y1_y2_y3_ab:
+SYM_FUNC_START_LOCAL(roundsm32_x4_x5_x6_x7_x0_x1_x2_x3_y4_y5_y6_y7_y0_y1_y2_y3_ab)
roundsm32(%ymm4, %ymm5, %ymm6, %ymm7, %ymm0, %ymm1, %ymm2, %ymm3,
%ymm12, %ymm13, %ymm14, %ymm15, %ymm8, %ymm9, %ymm10, %ymm11,
%rax, (%r9));
ret;
-ENDPROC(roundsm32_x4_x5_x6_x7_x0_x1_x2_x3_y4_y5_y6_y7_y0_y1_y2_y3_ab)
+SYM_FUNC_END(roundsm32_x4_x5_x6_x7_x0_x1_x2_x3_y4_y5_y6_y7_y0_y1_y2_y3_ab)

/*
* IN/OUT:
@@ -765,7 +765,7 @@ ENDPROC(roundsm32_x4_x5_x6_x7_x0_x1_x2_x3_y4_y5_y6_y7_y0_y1_y2_y3_ab)
.text

.align 8
-__camellia_enc_blk32:
+SYM_FUNC_START_LOCAL(__camellia_enc_blk32)
/* input:
* %rdi: ctx, CTX
* %rax: temporary storage, 512 bytes
@@ -849,10 +849,10 @@ __camellia_enc_blk32:
%ymm15, %rax, %rcx, 24);

jmp .Lenc_done;
-ENDPROC(__camellia_enc_blk32)
+SYM_FUNC_END(__camellia_enc_blk32)

.align 8
-__camellia_dec_blk32:
+SYM_FUNC_START_LOCAL(__camellia_dec_blk32)
/* input:
* %rdi: ctx, CTX
* %rax: temporary storage, 512 bytes
@@ -934,7 +934,7 @@ __camellia_dec_blk32:
((key_table + (24) * 8) + 4)(CTX));

jmp .Ldec_max24;
-ENDPROC(__camellia_dec_blk32)
+SYM_FUNC_END(__camellia_dec_blk32)

ENTRY(camellia_ecb_enc_32way)
/* input:
@@ -1227,7 +1227,7 @@ ENDPROC(camellia_ctr_32way)
vpxor tmp1, iv, iv;

.align 8
-camellia_xts_crypt_32way:
+SYM_FUNC_START_LOCAL(camellia_xts_crypt_32way)
/* input:
* %rdi: ctx, CTX
* %rsi: dst (32 blocks)
@@ -1372,7 +1372,7 @@ camellia_xts_crypt_32way:

FRAME_END
ret;
-ENDPROC(camellia_xts_crypt_32way)
+SYM_FUNC_END(camellia_xts_crypt_32way)

ENTRY(camellia_xts_enc_32way)
/* input:
diff --git a/arch/x86/crypto/cast5-avx-x86_64-asm_64.S b/arch/x86/crypto/cast5-avx-x86_64-asm_64.S
index 86107c961bb4..b26df120413c 100644
--- a/arch/x86/crypto/cast5-avx-x86_64-asm_64.S
+++ b/arch/x86/crypto/cast5-avx-x86_64-asm_64.S
@@ -224,7 +224,7 @@
.text

.align 16
-__cast5_enc_blk16:
+SYM_FUNC_START_LOCAL(__cast5_enc_blk16)
/* input:
* %rdi: ctx
* RL1: blocks 1 and 2
@@ -295,10 +295,10 @@ __cast5_enc_blk16:
outunpack_blocks(RR4, RL4, RTMP, RX, RKM);

ret;
-ENDPROC(__cast5_enc_blk16)
+SYM_FUNC_END(__cast5_enc_blk16)

.align 16
-__cast5_dec_blk16:
+SYM_FUNC_START_LOCAL(__cast5_dec_blk16)
/* input:
* %rdi: ctx
* RL1: encrypted blocks 1 and 2
@@ -372,7 +372,7 @@ __cast5_dec_blk16:
.L__skip_dec:
vpsrldq $4, RKR, RKR;
jmp .L__dec_tail;
-ENDPROC(__cast5_dec_blk16)
+SYM_FUNC_END(__cast5_dec_blk16)

ENTRY(cast5_ecb_enc_16way)
/* input:
diff --git a/arch/x86/crypto/cast6-avx-x86_64-asm_64.S b/arch/x86/crypto/cast6-avx-x86_64-asm_64.S
index 7f30b6f0d72c..0a68e42a00f9 100644
--- a/arch/x86/crypto/cast6-avx-x86_64-asm_64.S
+++ b/arch/x86/crypto/cast6-avx-x86_64-asm_64.S
@@ -262,7 +262,7 @@
.text

.align 8
-__cast6_enc_blk8:
+SYM_FUNC_START_LOCAL(__cast6_enc_blk8)
/* input:
* %rdi: ctx
* RA1, RB1, RC1, RD1, RA2, RB2, RC2, RD2: blocks
@@ -307,10 +307,10 @@ __cast6_enc_blk8:
outunpack_blocks(RA2, RB2, RC2, RD2, RTMP, RX, RKRF, RKM);

ret;
-ENDPROC(__cast6_enc_blk8)
+SYM_FUNC_END(__cast6_enc_blk8)

.align 8
-__cast6_dec_blk8:
+SYM_FUNC_START_LOCAL(__cast6_dec_blk8)
/* input:
* %rdi: ctx
* RA1, RB1, RC1, RD1, RA2, RB2, RC2, RD2: encrypted blocks
@@ -354,7 +354,7 @@ __cast6_dec_blk8:
outunpack_blocks(RA2, RB2, RC2, RD2, RTMP, RX, RKRF, RKM);

ret;
-ENDPROC(__cast6_dec_blk8)
+SYM_FUNC_END(__cast6_dec_blk8)

ENTRY(cast6_ecb_enc_8way)
/* input:
diff --git a/arch/x86/crypto/ghash-clmulni-intel_asm.S b/arch/x86/crypto/ghash-clmulni-intel_asm.S
index f94375a8dcd1..c3db86842578 100644
--- a/arch/x86/crypto/ghash-clmulni-intel_asm.S
+++ b/arch/x86/crypto/ghash-clmulni-intel_asm.S
@@ -47,7 +47,7 @@
* T2
* T3
*/
-__clmul_gf128mul_ble:
+SYM_FUNC_START_LOCAL(__clmul_gf128mul_ble)
movaps DATA, T1
pshufd $0b01001110, DATA, T2
pshufd $0b01001110, SHASH, T3
@@ -90,7 +90,7 @@ __clmul_gf128mul_ble:
pxor T2, T1
pxor T1, DATA
ret
-ENDPROC(__clmul_gf128mul_ble)
+SYM_FUNC_END(__clmul_gf128mul_ble)

/* void clmul_ghash_mul(char *dst, const u128 *shash) */
ENTRY(clmul_ghash_mul)
diff --git a/arch/x86/crypto/serpent-avx-x86_64-asm_64.S b/arch/x86/crypto/serpent-avx-x86_64-asm_64.S
index 2925077f8c6a..c2d4a1fc9ee8 100644
--- a/arch/x86/crypto/serpent-avx-x86_64-asm_64.S
+++ b/arch/x86/crypto/serpent-avx-x86_64-asm_64.S
@@ -570,7 +570,7 @@
transpose_4x4(x0, x1, x2, x3, t0, t1, t2)

.align 8
-__serpent_enc_blk8_avx:
+SYM_FUNC_START_LOCAL(__serpent_enc_blk8_avx)
/* input:
* %rdi: ctx, CTX
* RA1, RB1, RC1, RD1, RA2, RB2, RC2, RD2: blocks
@@ -621,10 +621,10 @@ __serpent_enc_blk8_avx:
write_blocks(RA2, RB2, RC2, RD2, RK0, RK1, RK2);

ret;
-ENDPROC(__serpent_enc_blk8_avx)
+SYM_FUNC_END(__serpent_enc_blk8_avx)

.align 8
-__serpent_dec_blk8_avx:
+SYM_FUNC_START_LOCAL(__serpent_dec_blk8_avx)
/* input:
* %rdi: ctx, CTX
* RA1, RB1, RC1, RD1, RA2, RB2, RC2, RD2: encrypted blocks
@@ -675,7 +675,7 @@ __serpent_dec_blk8_avx:
write_blocks(RC2, RD2, RB2, RE2, RK0, RK1, RK2);

ret;
-ENDPROC(__serpent_dec_blk8_avx)
+SYM_FUNC_END(__serpent_dec_blk8_avx)

ENTRY(serpent_ecb_enc_8way_avx)
/* input:
diff --git a/arch/x86/crypto/serpent-avx2-asm_64.S b/arch/x86/crypto/serpent-avx2-asm_64.S
index d67888f2a52a..52c527ce4b18 100644
--- a/arch/x86/crypto/serpent-avx2-asm_64.S
+++ b/arch/x86/crypto/serpent-avx2-asm_64.S
@@ -566,7 +566,7 @@
transpose_4x4(x0, x1, x2, x3, t0, t1, t2)

.align 8
-__serpent_enc_blk16:
+SYM_FUNC_START_LOCAL(__serpent_enc_blk16)
/* input:
* %rdi: ctx, CTX
* RA1, RB1, RC1, RD1, RA2, RB2, RC2, RD2: plaintext
@@ -617,10 +617,10 @@ __serpent_enc_blk16:
write_blocks(RA2, RB2, RC2, RD2, RK0, RK1, RK2);

ret;
-ENDPROC(__serpent_enc_blk16)
+SYM_FUNC_END(__serpent_enc_blk16)

.align 8
-__serpent_dec_blk16:
+SYM_FUNC_START_LOCAL(__serpent_dec_blk16)
/* input:
* %rdi: ctx, CTX
* RA1, RB1, RC1, RD1, RA2, RB2, RC2, RD2: ciphertext
@@ -671,7 +671,7 @@ __serpent_dec_blk16:
write_blocks(RC2, RD2, RB2, RE2, RK0, RK1, RK2);

ret;
-ENDPROC(__serpent_dec_blk16)
+SYM_FUNC_END(__serpent_dec_blk16)

ENTRY(serpent_ecb_enc_16way)
/* input:
diff --git a/arch/x86/crypto/twofish-avx-x86_64-asm_64.S b/arch/x86/crypto/twofish-avx-x86_64-asm_64.S
index 73b471da3622..96ddfda4d7b2 100644
--- a/arch/x86/crypto/twofish-avx-x86_64-asm_64.S
+++ b/arch/x86/crypto/twofish-avx-x86_64-asm_64.S
@@ -249,7 +249,7 @@
vpxor x3, wkey, x3;

.align 8
-__twofish_enc_blk8:
+SYM_FUNC_START_LOCAL(__twofish_enc_blk8)
/* input:
* %rdi: ctx, CTX
* RA1, RB1, RC1, RD1, RA2, RB2, RC2, RD2: blocks
@@ -288,10 +288,10 @@ __twofish_enc_blk8:
outunpack_blocks(RC2, RD2, RA2, RB2, RK1, RX0, RY0, RK2);

ret;
-ENDPROC(__twofish_enc_blk8)
+SYM_FUNC_END(__twofish_enc_blk8)

.align 8
-__twofish_dec_blk8:
+SYM_FUNC_START_LOCAL(__twofish_dec_blk8)
/* input:
* %rdi: ctx, CTX
* RC1, RD1, RA1, RB1, RC2, RD2, RA2, RB2: encrypted blocks
@@ -328,7 +328,7 @@ __twofish_dec_blk8:
outunpack_blocks(RA2, RB2, RC2, RD2, RK1, RX0, RY0, RK2);

ret;
-ENDPROC(__twofish_dec_blk8)
+SYM_FUNC_END(__twofish_dec_blk8)

ENTRY(twofish_ecb_enc_8way)
/* input:
--
2.16.3


2018-05-18 09:29:12

by Jiri Slaby

[permalink] [raw]
Subject: [PATCH v6 01/28] linkage: new macros for assembler symbols

Introduce new C macros for annotations of functions and data in
assembly. There is a long-standing mess in macros like ENTRY, END,
ENDPROC and similar. They are used in different manners and sometimes
incorrectly.

So introduce macros with clear use to annotate assembly as follows:

a) Support macros for the ones below
SYM_T_FUNC -- type used by assembler to mark functions
SYM_T_OBJECT -- type used by assembler to mark data
SYM_T_NONE -- type used by assembler to mark entries of unknown type

They are defined as STT_FUNC, STT_OBJECT, and STT_NOTYPE
respectively. According to the gas manual, this is the most portable
way. I am not sure about other assemblers, so we can switch this back
to %function and %object if it turns out to be a problem. Architectures
can also override them with something like ", @function" if they need to.

SYM_A_ALIGN, SYM_A_NONE -- align the symbol?
SYM_L_GLOBAL, SYM_L_WEAK, SYM_L_LOCAL -- linkage of symbols

b) Mostly internal annotations, used by the ones below
SYM_ENTRY -- use only if you have to (for non-paired symbols)
SYM_START -- use only if you have to (for paired symbols)
SYM_END -- use only if you have to (for paired symbols)

c) Annotations for code
SYM_INNER_LABEL_ALIGN -- only for labels in the middle of code
SYM_INNER_LABEL -- only for labels in the middle of code

SYM_FUNC_START_LOCAL_ALIAS -- use where there are two local names for
one function
SYM_FUNC_START_ALIAS -- use where there are two global names for one
function
SYM_FUNC_END_ALIAS -- the end of LOCAL_ALIASed or ALIASed function

SYM_FUNC_START -- use for global functions
SYM_FUNC_START_NOALIGN -- use for global functions, w/o alignment
SYM_FUNC_START_LOCAL -- use for local functions
SYM_FUNC_START_LOCAL_NOALIGN -- use for local functions, w/o
alignment
SYM_FUNC_START_WEAK -- use for weak functions
SYM_FUNC_START_WEAK_NOALIGN -- use for weak functions, w/o alignment
SYM_FUNC_END -- the end of SYM_FUNC_START_LOCAL, SYM_FUNC_START,
SYM_FUNC_START_WEAK, ...

For functions with special (non-C) calling conventions:
SYM_CODE_START -- use for non-C (special) functions
SYM_CODE_START_NOALIGN -- use for non-C (special) functions, w/o
alignment
SYM_CODE_START_LOCAL -- use for local non-C (special) functions
SYM_CODE_START_LOCAL_NOALIGN -- use for local non-C (special)
functions, w/o alignment
SYM_CODE_END -- the end of SYM_CODE_START_LOCAL or SYM_CODE_START

d) For data
SYM_DATA_START -- global data symbol
SYM_DATA_START_LOCAL -- local data symbol
SYM_DATA_END -- the end of the SYM_DATA_START symbol
SYM_DATA_END_LABEL -- the labeled end of SYM_DATA_START symbol
SYM_DATA -- start+end wrapper around simple global data
SYM_DATA_LOCAL -- start+end wrapper around simple local data

==========

The macros make it possible to pair the starts and ends of functions and to
mark functions correctly in the output ELF objects.
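
For instance (with hypothetical symbol names), a global C-callable function
and a simple piece of global data annotated with the new macros look like
this, replacing what used to be ENTRY/ENDPROC and a bare ENTRY respectively:

  SYM_FUNC_START(example_func)
          xorl    %eax, %eax
          ret
  SYM_FUNC_END(example_func)

  SYM_DATA(example_magic, .long 0)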

All users of the old macros in x86 are converted to use these in further
patches.

[v2]
* use SYM_ prefix and sane names
* add SYM_START and SYM_END and parametrize all the macros

[v3]
* add SYM_DATA, SYM_DATA_LOCAL, and SYM_DATA_END_LABEL

[v4]
* add _NOALIGN versions of some macros
* add _CODE_ derivatives of _FUNC_ macros

[v5]
* drop "SIMPLE" from data annotations
* switch NOALIGN and ALIGN variants of inner labels
* s/visibility/linkage/; s@SYM_V_@SYM_L_@
* add Documentation

[v6]
* fixed typos found by Randy Dunlap
* remove doubled INNER_LABEL macros, one pair was unused

Signed-off-by: Jiri Slaby <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Boris Ostrovsky <[email protected]>
Cc: [email protected]
Cc: Ingo Molnar <[email protected]>
Cc: [email protected]
Cc: Juergen Gross <[email protected]>
Cc: Len Brown <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: Pavel Machek <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: "Rafael J. Wysocki" <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: [email protected]
Cc: [email protected]
---
Documentation/asm-annotations.rst | 217 ++++++++++++++++++++++++++++++++++
arch/x86/include/asm/linkage.h | 10 +-
include/linux/linkage.h | 243 ++++++++++++++++++++++++++++++++++++--
3 files changed, 460 insertions(+), 10 deletions(-)
create mode 100644 Documentation/asm-annotations.rst

diff --git a/Documentation/asm-annotations.rst b/Documentation/asm-annotations.rst
new file mode 100644
index 000000000000..265d64a1fc0b
--- /dev/null
+++ b/Documentation/asm-annotations.rst
@@ -0,0 +1,217 @@
+Assembler Annotations
+=====================
+
+Copyright (c) 2017 Jiri Slaby
+
+This document describes the new macros for annotation of data and code in
+assembler. In particular, it contains information about ``SYM_FUNC_START``,
+``SYM_FUNC_END``, ``SYM_CODE_START``, and similar.
+
+Rationale
+---------
+Some code like entries, trampolines, or boot code needs to be written in
+assembly. The same as in C, we group such code into functions and accompany
+them with data. Standard assemblers do not force users into precisely marking
+these pieces as code, data, or even specifying their length. Nevertheless,
+assemblers provide developers with such marks to aid debuggers throughout
+assembly. On top of that, developers also want to stamp some functions as
+*global* to be visible outside of their translation units.
+
+Over time, the Linux kernel adopted macros from various projects (like
+``binutils``) to ease these markings. So for historic reasons, we have been
+using ``ENTRY``, ``END``, ``ENDPROC``, and other annotations in assembly. Due
+to the lack of documentation, the macros are used in wrong
+contexts at some locations. Clearly, ``ENTRY`` was intended for starts of
+global symbols (be it data or code). ``END`` used to be the end of data or end
+of special functions with *non-standard* calling convention. In contrast,
+``ENDPROC`` should annotate only ends of *standard* functions.
+
+When these macros are used correctly, they help assemblers to generate a nice
+object with both sizes and types set correctly. For example the result of
+``arch/x86/lib/putuser.S``::
+
+ Num: Value Size Type Bind Vis Ndx Name
+ 25: 0000000000000000 33 FUNC GLOBAL DEFAULT 1 __put_user_1
+ 29: 0000000000000030 37 FUNC GLOBAL DEFAULT 1 __put_user_2
+ 32: 0000000000000060 36 FUNC GLOBAL DEFAULT 1 __put_user_4
+ 35: 0000000000000090 37 FUNC GLOBAL DEFAULT 1 __put_user_8
+
+This is not only important for debugging purposes. When we have properly
+marked objects like this, we can run tools on them and let the tools generate
+more useful information. In particular, on properly marked objects, we can run
+``objtool`` and let it check and fix the object if needed. Currently, it can
+report missing frame pointer setup/destruction in functions. It can also
+automatically generate annotations for *ORC unwinder* (cf.
+<Documentation/x86/orc-unwinder.txt>) for most code. Both of these are
+especially important to support reliable stack traces which are in turn
+necessary for *Kernel live patching* (see
+<Documentation/livepatch/livepatch.txt>).
+
+Caveat and Discussion
+---------------------
+As one might realize, there were only three macros previously. That is indeed
+insufficient to cover all the combinations of cases:
+
+* standard/non-standard function
+* code/data
+* global/local symbol
+
+We had a discussion_ and, instead of extending the current ``ENTRY/END*``
+macros, it was decided that brand new macros should be introduced::
+
+ So how about using macro names that actually show the purpose, instead
+ of importing all the crappy, historic, essentially randomly chosen
+ debug symbol macro names from the binutils and older kernels?
+
+.. _discussion: https://marc.info/?i=20170217104757.28588-1-jslaby%40suse.cz
+
+Macros Description
+------------------
+
+The new macros are prefixed with the ``SYM_`` prefix and can be divided into
+three main groups:
+
+1. ``SYM_FUNC_*`` -- to annotate C-like functions. This means functions with
+ standard C calling conventions, i.e. the stack contains a return address at
+ the predefined place and a return from the function can happen in a
+ standard way. When frame pointers are enabled, save/restore of frame
+ pointer shall happen at the start/end of a function, respectively, too.
+
+ Checking tools like ``objtool`` should ensure such marked functions conform
+ to these rules. The tools can also easily annotate these functions with
+ debugging information (like *ORC data*) automatically.
+
+2. ``SYM_CODE_*`` -- special functions called with a special stack, be it
+ interrupt handlers with special stack content, trampolines, or startup
+ functions.
+
+ Checking tools mostly ignore checking of these functions. But some debug
+ information still can be generated automatically. For correct debug data,
+ this code needs hints like ``UNWIND_HINT_REGS`` provided by developers.
+
+3. ``SYM_DATA*`` -- obviously data belonging to ``.data`` sections and not to
+ ``.text``. Data do not contain instructions, so they have to be treated
+ specially by the tools: they should not treat the bytes as instructions,
+ nor assign any debug information to them.
+
+Instruction Macros
+~~~~~~~~~~~~~~~~~~
+This section covers ``SYM_FUNC_*`` and ``SYM_CODE_*`` enumerated above.
+
+* ``SYM_FUNC_START`` and ``SYM_FUNC_START_LOCAL`` are supposed to be **the
+ most frequent markings**. They are used for functions with standard calling
+ conventions -- global and local. Like in C, they both align the functions to
+ architecture specific ``__ALIGN`` bytes. There are also ``_NOALIGN`` variants
+ for special cases where developers do not want this implicit alignment.
+
+ We also offer ``SYM_FUNC_START_WEAK`` and ``SYM_FUNC_START_WEAK_NOALIGN``
+ marks as an assembler counterpart of the *weak* attribute known from C.
+
+ All of these **shall** be coupled with ``SYM_FUNC_END``. First, it marks
+ the sequence of instructions as a function and emits its size into the
+ generated object file. Second, it also eases checking and processing such
+ object files as the tools can trivially find exact start and end of a
+ function.
+
+ So in most cases, developers should write something like the following
+ example, with more instructions between the macros, of course::
+
+ SYM_FUNC_START(function_hook)
+ retq
+ SYM_FUNC_END(function_hook)
+
+ In fact, this kind of annotation corresponds to now deprecated ``ENTRY`` and
+ ``ENDPROC``.
+
+* ``SYM_FUNC_START_ALIAS`` and ``SYM_FUNC_START_LOCAL_ALIAS`` serve for those
+ who decided to have two or more names for one function. The typical use is::
+
+ SYM_FUNC_START_ALIAS(__memset)
+ SYM_FUNC_START(memset)
+ ...
+ SYM_FUNC_END(memset)
+ SYM_FUNC_END_ALIAS(__memset)
+
+ In this example, one can call ``__memset`` or ``memset`` with the same
+ result, except that the debug information for the instructions is emitted
+ into the object file only once -- for the non-``ALIAS`` case.
+
+* ``SYM_CODE_START`` and ``SYM_CODE_START_LOCAL`` should be used only in
+ special cases -- if you know what you are doing. This is used exclusively
+ for interrupt handlers and similar where the calling convention is not the C
+ one. ``_NOALIGN`` variants exist too. The use is the same as for the ``FUNC``
+ category above::
+
+ SYM_CODE_START_LOCAL(bad_put_user)
+ movl $-EFAULT,%eax
+ EXIT
+ SYM_CODE_END(bad_put_user)
+
+ Again, every ``SYM_CODE_START*`` **shall** be coupled by ``SYM_CODE_END``.
+
+ To some extent, this category corresponds to the deprecated ``ENTRY`` and
+ ``END``, except that ``END`` had several other meanings too.
+
+* ``SYM_INNER_LABEL*`` is used to denote a label inside some
+ ``SYM_{CODE,FUNC}_START`` and ``SYM_{CODE,FUNC}_END``. They are very similar
+ to C labels, except they can be made global. An example of use::
+
+ SYM_CODE_START(ftrace_caller)
+ /* save_mcount_regs fills in first two parameters */
+ ...
+
+ SYM_INNER_LABEL(ftrace_caller_op_ptr, SYM_L_GLOBAL)
+ /* Load the ftrace_ops into the 3rd parameter */
+ ...
+
+ SYM_INNER_LABEL(ftrace_call, SYM_L_GLOBAL)
+ call ftrace_stub
+ ...
+ retq
+ SYM_CODE_END(ftrace_caller)
+
+Data Macros
+~~~~~~~~~~~
+Similar to instructions, we have a couple of macros to describe data in the
+assembly. Again, they help debuggers to understand the layout of the resulting
+object files.
+
+* ``SYM_DATA_START`` and ``SYM_DATA_START_LOCAL`` mark the start of some data
+ and shall be used in conjunction with either ``SYM_DATA_END`` or
+ ``SYM_DATA_END_LABEL``. The latter also adds a label to the end, so that
+ people can use ``lstack`` and (local) ``lstack_end`` in the following
+ example::
+
+ SYM_DATA_START_LOCAL(lstack)
+ .skip 4096
+ SYM_DATA_END_LABEL(lstack, SYM_L_LOCAL, lstack_end)
+
+* ``SYM_DATA`` and ``SYM_DATA_LOCAL`` are variants for simple, mostly one-line
+ data::
+
+ SYM_DATA(HEAP, .long rm_heap)
+ SYM_DATA(heap_end, .long rm_stack)
+
+ In the end, they expand to ``SYM_DATA_START`` with ``SYM_DATA_END``
+ internally.
+
+Support Macros
+~~~~~~~~~~~~~~
+All of the above eventually reduce to an invocation of ``SYM_START``,
+``SYM_END``, or ``SYM_ENTRY``. Normally, developers should avoid using
+these directly.
+
+Further, in the above examples, one could see ``SYM_L_LOCAL``. There are also
+``SYM_L_GLOBAL`` and ``SYM_L_WEAK``. All are intended to denote linkage of a
+symbol marked by them. They are used either in ``_LABEL`` variants of the
+earlier macros, or in ``SYM_START``.
+
+
+Overriding Macros
+~~~~~~~~~~~~~~~~~
+Architectures can also override any of the macros in their own
+``asm/linkage.h``, including macros specifying the type of a symbol
+(``SYM_T_FUNC``, ``SYM_T_OBJECT``, and ``SYM_T_NONE``). As every macro
+described in this file is surrounded by ``#ifdef`` + ``#endif``, it is enough
+to define the macros differently in the aforementioned architecture-dependent
+header.
diff --git a/arch/x86/include/asm/linkage.h b/arch/x86/include/asm/linkage.h
index 14caa9d9fb7f..e07188e8d763 100644
--- a/arch/x86/include/asm/linkage.h
+++ b/arch/x86/include/asm/linkage.h
@@ -13,9 +13,13 @@

#ifdef __ASSEMBLY__

-#define GLOBAL(name) \
- .globl name; \
- name:
+/*
+ * GLOBAL is DEPRECATED
+ *
+ * use SYM_DATA_START, SYM_FUNC_START, SYM_INNER_LABEL, SYM_CODE_START, or
+ * similar
+ */
+#define GLOBAL(name) SYM_ENTRY(name, SYM_L_GLOBAL, SYM_A_NONE)

#if defined(CONFIG_X86_64) || defined(CONFIG_X86_ALIGNMENT_16)
#define __ALIGN .p2align 4, 0x90
diff --git a/include/linux/linkage.h b/include/linux/linkage.h
index f68db9e450eb..e920ffa2a943 100644
--- a/include/linux/linkage.h
+++ b/include/linux/linkage.h
@@ -75,25 +75,51 @@

#ifdef __ASSEMBLY__

+/* SYM_T_FUNC -- type used by assembler to mark functions */
+#ifndef SYM_T_FUNC
+#define SYM_T_FUNC STT_FUNC
+#endif
+
+/* SYM_T_OBJECT -- type used by assembler to mark data */
+#ifndef SYM_T_OBJECT
+#define SYM_T_OBJECT STT_OBJECT
+#endif
+
+/* SYM_T_NONE -- type used by assembler to mark entries of unknown type */
+#ifndef SYM_T_NONE
+#define SYM_T_NONE STT_NOTYPE
+#endif
+
+/* SYM_A_* -- align the symbol? */
+#define SYM_A_ALIGN ALIGN
+#define SYM_A_NONE /* nothing */
+
+/* SYM_L_* -- linkage of symbols */
+#define SYM_L_GLOBAL(name) .globl name
+#define SYM_L_WEAK(name) .weak name
+#define SYM_L_LOCAL(name) /* nothing */
+
#ifndef LINKER_SCRIPT
#define ALIGN __ALIGN
#define ALIGN_STR __ALIGN_STR

+/* === DEPRECATED annotations === */
+
#ifndef ENTRY
+/* deprecated, use SYM_FUNC_START */
#define ENTRY(name) \
- .globl name ASM_NL \
- ALIGN ASM_NL \
- name:
+ SYM_FUNC_START(name)
#endif
#endif /* LINKER_SCRIPT */

#ifndef WEAK
+/* deprecated, use SYM_FUNC_START_WEAK* */
#define WEAK(name) \
- .weak name ASM_NL \
- name:
+ SYM_FUNC_START_WEAK_NOALIGN(name)
#endif

#ifndef END
+/* deprecated, use SYM_FUNC_END, SYM_DATA_END, or SYM_END */
#define END(name) \
.size name, .-name
#endif
@@ -103,11 +129,214 @@
* static analysis tools such as stack depth analyzer.
*/
#ifndef ENDPROC
+/* deprecated, use SYM_FUNC_END */
#define ENDPROC(name) \
- .type name, @function ASM_NL \
- END(name)
+ SYM_FUNC_END(name)
#endif

+/* === generic annotations === */
+
+/* SYM_ENTRY -- use only if you have to for non-paired symbols */
+#ifndef SYM_ENTRY
+#define SYM_ENTRY(name, linkage, align...) \
+ linkage(name) ASM_NL \
+ align ASM_NL \
+ name:
#endif

+/* SYM_START -- use only if you have to */
+#ifndef SYM_START
+#define SYM_START(name, linkage, align...) \
+ SYM_ENTRY(name, linkage, align)
+#endif
+
+/* SYM_END -- use only if you have to */
+#ifndef SYM_END
+#define SYM_END(name, sym_type) \
+ .type name sym_type ASM_NL \
+ .size name, .-name
#endif
+
+/* === code annotations === */
+
+/*
+ * FUNC -- C-like functions (proper stack frame etc.)
+ * CODE -- non-C code (e.g. irq handlers with different, special stack etc.)
+ *
+ * Objtool validates stack for FUNC, but not for CODE.
+ * Objtool generates debug info for both FUNC & CODE, but needs special
+ * annotations for each CODE's start (to describe the actual stack frame).
+ *
+ * ALIAS -- does not generate debug info -- the aliased function will
+ */
+
+/* SYM_INNER_LABEL_ALIGN -- only for labels in the middle of code */
+#ifndef SYM_INNER_LABEL_ALIGN
+#define SYM_INNER_LABEL_ALIGN(name, linkage) \
+ .type name SYM_T_NONE ASM_NL \
+ SYM_ENTRY(name, linkage, SYM_A_ALIGN)
+#endif
+
+/* SYM_INNER_LABEL -- only for labels in the middle of code */
+#ifndef SYM_INNER_LABEL
+#define SYM_INNER_LABEL(name, linkage) \
+ .type name SYM_T_NONE ASM_NL \
+ SYM_ENTRY(name, linkage, SYM_A_NONE)
+#endif
+
+/*
+ * SYM_FUNC_START_LOCAL_ALIAS -- use where there are two local names for one
+ * function
+ */
+#ifndef SYM_FUNC_START_LOCAL_ALIAS
+#define SYM_FUNC_START_LOCAL_ALIAS(name) \
+ SYM_START(name, SYM_L_LOCAL, SYM_A_ALIGN)
+#endif
+
+/*
+ * SYM_FUNC_START_ALIAS -- use where there are two global names for one
+ * function
+ */
+#ifndef SYM_FUNC_START_ALIAS
+#define SYM_FUNC_START_ALIAS(name) \
+ SYM_START(name, SYM_L_GLOBAL, SYM_A_ALIGN)
+#endif
+
+/* SYM_FUNC_START -- use for global functions */
+#ifndef SYM_FUNC_START
+/*
+ * The same as SYM_FUNC_START_ALIAS, but we will need to distinguish these two
+ * later.
+ */
+#define SYM_FUNC_START(name) \
+ SYM_START(name, SYM_L_GLOBAL, SYM_A_ALIGN)
+#endif
+
+/* SYM_FUNC_START_NOALIGN -- use for global functions, w/o alignment */
+#ifndef SYM_FUNC_START_NOALIGN
+#define SYM_FUNC_START_NOALIGN(name) \
+ SYM_START(name, SYM_L_GLOBAL, SYM_A_NONE)
+#endif
+
+/* SYM_FUNC_START_LOCAL -- use for local functions */
+#ifndef SYM_FUNC_START_LOCAL
+/* the same as SYM_FUNC_START_LOCAL_ALIAS, see comment near SYM_FUNC_START */
+#define SYM_FUNC_START_LOCAL(name) \
+ SYM_START(name, SYM_L_LOCAL, SYM_A_ALIGN)
+#endif
+
+/* SYM_FUNC_START_LOCAL_NOALIGN -- use for local functions, w/o alignment */
+#ifndef SYM_FUNC_START_LOCAL_NOALIGN
+#define SYM_FUNC_START_LOCAL_NOALIGN(name) \
+ SYM_START(name, SYM_L_LOCAL, SYM_A_NONE)
+#endif
+
+/* SYM_FUNC_START_WEAK -- use for weak functions */
+#ifndef SYM_FUNC_START_WEAK
+#define SYM_FUNC_START_WEAK(name) \
+ SYM_START(name, SYM_L_WEAK, SYM_A_ALIGN)
+#endif
+
+/* SYM_FUNC_START_WEAK_NOALIGN -- use for weak functions, w/o alignment */
+#ifndef SYM_FUNC_START_WEAK_NOALIGN
+#define SYM_FUNC_START_WEAK_NOALIGN(name) \
+ SYM_START(name, SYM_L_WEAK, SYM_A_NONE)
+#endif
+
+/* SYM_FUNC_END_ALIAS -- the end of LOCAL_ALIASed or ALIASed function */
+#ifndef SYM_FUNC_END_ALIAS
+#define SYM_FUNC_END_ALIAS(name) \
+ SYM_END(name, SYM_T_FUNC)
+#endif
+
+/*
+ * SYM_FUNC_END -- the end of SYM_FUNC_START_LOCAL, SYM_FUNC_START,
+ * SYM_FUNC_START_WEAK, ...
+ */
+#ifndef SYM_FUNC_END
+/* the same as SYM_FUNC_END_ALIAS, see comment near SYM_FUNC_START */
+#define SYM_FUNC_END(name) \
+ SYM_END(name, SYM_T_FUNC)
+#endif
+
+/* SYM_CODE_START -- use for non-C (special) functions */
+#ifndef SYM_CODE_START
+#define SYM_CODE_START(name) \
+ SYM_START(name, SYM_L_GLOBAL, SYM_A_ALIGN)
+#endif
+
+/* SYM_CODE_START_NOALIGN -- use for non-C (special) functions, w/o alignment */
+#ifndef SYM_CODE_START_NOALIGN
+#define SYM_CODE_START_NOALIGN(name) \
+ SYM_START(name, SYM_L_GLOBAL, SYM_A_NONE)
+#endif
+
+/* SYM_CODE_START_LOCAL -- use for local non-C (special) functions */
+#ifndef SYM_CODE_START_LOCAL
+#define SYM_CODE_START_LOCAL(name) \
+ SYM_START(name, SYM_L_LOCAL, SYM_A_ALIGN)
+#endif
+
+/*
+ * SYM_CODE_START_LOCAL_NOALIGN -- use for local non-C (special) functions,
+ * w/o alignment
+ */
+#ifndef SYM_CODE_START_LOCAL_NOALIGN
+#define SYM_CODE_START_LOCAL_NOALIGN(name) \
+ SYM_START(name, SYM_L_LOCAL, SYM_A_NONE)
+#endif
+
+/* SYM_CODE_END -- the end of SYM_CODE_START_LOCAL, SYM_CODE_START, ... */
+#ifndef SYM_CODE_END
+#define SYM_CODE_END(name) \
+ SYM_END(name, SYM_T_NONE)
+#endif
+
+/* === data annotations === */
+
+/* SYM_DATA_START -- global data symbol */
+#ifndef SYM_DATA_START
+#define SYM_DATA_START(name) \
+ SYM_START(name, SYM_L_GLOBAL, SYM_A_NONE)
+#endif
+
+/* SYM_DATA_START_LOCAL -- local data symbol */
+#ifndef SYM_DATA_START_LOCAL
+#define SYM_DATA_START_LOCAL(name) \
+ SYM_START(name, SYM_L_LOCAL, SYM_A_NONE)
+#endif
+
+/* SYM_DATA_END -- the end of SYM_DATA_START symbol */
+#ifndef SYM_DATA_END
+#define SYM_DATA_END(name) \
+ SYM_END(name, SYM_T_OBJECT)
+#endif
+
+/* SYM_DATA_END_LABEL -- the labeled end of SYM_DATA_START symbol */
+#ifndef SYM_DATA_END_LABEL
+#define SYM_DATA_END_LABEL(name, linkage, label) \
+ linkage(label) ASM_NL \
+ .type label SYM_T_OBJECT ASM_NL \
+ label: \
+ SYM_END(name, SYM_T_OBJECT)
+#endif
+
+/* SYM_DATA -- start+end wrapper around simple global data */
+#ifndef SYM_DATA
+#define SYM_DATA(name, data...) \
+ SYM_DATA_START(name) ASM_NL \
+ data ASM_NL \
+ SYM_DATA_END(name)
+#endif
+
+/* SYM_DATA_LOCAL -- start+end wrapper around simple local data */
+#ifndef SYM_DATA_LOCAL
+#define SYM_DATA_LOCAL(name, data...) \
+ SYM_DATA_START_LOCAL(name) ASM_NL \
+ data ASM_NL \
+ SYM_DATA_END(name)
+#endif
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* _LINUX_LINKAGE_H */
--
2.16.3


2018-05-18 10:04:02

by Rafael J. Wysocki

[permalink] [raw]
Subject: Re: [PATCH v6 02/28] x86/asm/suspend: drop ENTRY from local data

On Fri, May 18, 2018 at 11:16 AM, Jiri Slaby <[email protected]> wrote:
> ENTRY was intended for functions and shall be paired with ENDPROC.
> ENTRY also aligns symbols which creates unnecessary holes here between
> data.
>
> So drop ENTRY from saved_eip in wakeup_32 and many saved_* in wakeup_64,
> as these symbols are local only.
>
> We could use SYM_DATA_LOCAL for these symbols, but it was discouraged
> earlier [1].
>
> [1] https://lkml.org/lkml/2017/4/27/244
>
> Signed-off-by: Jiri Slaby <[email protected]>
> Cc: "Rafael J. Wysocki" <[email protected]>
> Cc: Len Brown <[email protected]>
> Cc: Pavel Machek <[email protected]>
> Cc: Thomas Gleixner <[email protected]>
> Cc: Ingo Molnar <[email protected]>
> Cc: "H. Peter Anvin" <[email protected]>
> Cc: [email protected]
> Cc: [email protected]

Acked-by: Rafael J. Wysocki <[email protected]>

> ---
> arch/x86/kernel/acpi/wakeup_32.S | 2 +-
> arch/x86/kernel/acpi/wakeup_64.S | 12 ++++++------
> 2 files changed, 7 insertions(+), 7 deletions(-)
>
> diff --git a/arch/x86/kernel/acpi/wakeup_32.S b/arch/x86/kernel/acpi/wakeup_32.S
> index 0c26b1b44e51..4203d4f0c68d 100644
> --- a/arch/x86/kernel/acpi/wakeup_32.S
> +++ b/arch/x86/kernel/acpi/wakeup_32.S
> @@ -90,7 +90,7 @@ ret_point:
> .data
> ALIGN
> ENTRY(saved_magic) .long 0
> -ENTRY(saved_eip) .long 0
> +saved_eip: .long 0
>
> # saved registers
> saved_idt: .long 0,0
> diff --git a/arch/x86/kernel/acpi/wakeup_64.S b/arch/x86/kernel/acpi/wakeup_64.S
> index 50b8ed0317a3..510fa12aab73 100644
> --- a/arch/x86/kernel/acpi/wakeup_64.S
> +++ b/arch/x86/kernel/acpi/wakeup_64.S
> @@ -125,12 +125,12 @@ ENTRY(do_suspend_lowlevel)
> ENDPROC(do_suspend_lowlevel)
>
> .data
> -ENTRY(saved_rbp) .quad 0
> -ENTRY(saved_rsi) .quad 0
> -ENTRY(saved_rdi) .quad 0
> -ENTRY(saved_rbx) .quad 0
> +saved_rbp: .quad 0
> +saved_rsi: .quad 0
> +saved_rdi: .quad 0
> +saved_rbx: .quad 0
>
> -ENTRY(saved_rip) .quad 0
> -ENTRY(saved_rsp) .quad 0
> +saved_rip: .quad 0
> +saved_rsp: .quad 0
>
> ENTRY(saved_magic) .quad 0
> --
> 2.16.3
>

2018-05-18 10:04:18

by Rafael J. Wysocki

[permalink] [raw]
Subject: Re: [PATCH v6 03/28] x86/asm/suspend: use SYM_DATA for data

On Fri, May 18, 2018 at 11:16 AM, Jiri Slaby <[email protected]> wrote:
> Some global data in the suspend code were marked as `ENTRY'. ENTRY was
> intended for functions and shall be paired with ENDPROC. ENTRY also
> aligns symbols which creates unnecessary holes here between data. Since
> we are dropping historical markings, make proper use of newly added
> SYM_DATA in this code.
>
> Signed-off-by: Jiri Slaby <[email protected]>
> Cc: "Rafael J. Wysocki" <[email protected]>
> Cc: Len Brown <[email protected]>
> Cc: Pavel Machek <[email protected]>
> Cc: Thomas Gleixner <[email protected]>
> Cc: Ingo Molnar <[email protected]>
> Cc: "H. Peter Anvin" <[email protected]>
> Cc: [email protected]
> Cc: [email protected]

Acked-by: Rafael J. Wysocki <[email protected]>

> ---
> arch/x86/kernel/acpi/wakeup_32.S | 2 +-
> arch/x86/kernel/acpi/wakeup_64.S | 2 +-
> arch/x86/kernel/head_32.S | 6 ++----
> arch/x86/kernel/head_64.S | 5 ++---
> 4 files changed, 6 insertions(+), 9 deletions(-)
>
> diff --git a/arch/x86/kernel/acpi/wakeup_32.S b/arch/x86/kernel/acpi/wakeup_32.S
> index 4203d4f0c68d..feac1e5ecba0 100644
> --- a/arch/x86/kernel/acpi/wakeup_32.S
> +++ b/arch/x86/kernel/acpi/wakeup_32.S
> @@ -89,7 +89,7 @@ ret_point:
>
> .data
> ALIGN
> -ENTRY(saved_magic) .long 0
> +SYM_DATA(saved_magic, .long 0)
> saved_eip: .long 0
>
> # saved registers
> diff --git a/arch/x86/kernel/acpi/wakeup_64.S b/arch/x86/kernel/acpi/wakeup_64.S
> index 510fa12aab73..551758f48eb7 100644
> --- a/arch/x86/kernel/acpi/wakeup_64.S
> +++ b/arch/x86/kernel/acpi/wakeup_64.S
> @@ -133,4 +133,4 @@ saved_rbx: .quad 0
> saved_rip: .quad 0
> saved_rsp: .quad 0
>
> -ENTRY(saved_magic) .quad 0
> +SYM_DATA(saved_magic, .quad 0)
> diff --git a/arch/x86/kernel/head_32.S b/arch/x86/kernel/head_32.S
> index b59e4fb40fd9..80965fd75fea 100644
> --- a/arch/x86/kernel/head_32.S
> +++ b/arch/x86/kernel/head_32.S
> @@ -507,10 +507,8 @@ GLOBAL(early_recursion_flag)
>
> __REFDATA
> .align 4
> -ENTRY(initial_code)
> - .long i386_start_kernel
> -ENTRY(setup_once_ref)
> - .long setup_once
> +SYM_DATA(initial_code, .long i386_start_kernel)
> +SYM_DATA(setup_once_ref, .long setup_once)
>
> /*
> * BSS section
> diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S
> index 8344dd2f310a..17543533642d 100644
> --- a/arch/x86/kernel/head_64.S
> +++ b/arch/x86/kernel/head_64.S
> @@ -463,9 +463,8 @@ early_gdt_descr:
> early_gdt_descr_base:
> .quad INIT_PER_CPU_VAR(gdt_page)
>
> -ENTRY(phys_base)
> - /* This must match the first entry in level2_kernel_pgt */
> - .quad 0x0000000000000000
> +/* This must match the first entry in level2_kernel_pgt */
> +SYM_DATA(phys_base, .quad 0x0000000000000000)
> EXPORT_SYMBOL(phys_base)
>
> #include "../../x86/xen/xen-head.S"
> --
> 2.16.3
>

2018-05-18 19:41:41

by Andy Lutomirski

[permalink] [raw]
Subject: Re: [PATCH v6 17/28] x86/asm: use SYM_INNER_LABEL instead of GLOBAL

On Fri, May 18, 2018 at 2:17 AM Jiri Slaby <[email protected]> wrote:

> GLOBAL had several meanings and is going away. In this patch, convert
> all the inner function labels marked with GLOBAL to use SYM_INNER_LABEL
> instead.

> Note that retint_user needs not be global, perhaps since commit
> 2ec67971facc ("x86/entry/64/compat: Remove most of the fast system call
> machinery"), where entry_64_compat's caller was removed. So mark the
> label as LOCAL.


> -GLOBAL(entry_SYSCALL_64_after_hwframe)
> +SYM_INNER_LABEL(entry_SYSCALL_64_after_hwframe, SYM_L_GLOBAL)

I've missed all the context here. I agree that GLOBAL is misleading, and
"inner label" is nice. But this is a rather wordy macro. Would:

INNER_LABEL_GLOBAL(name)

be better? (With just INNER_LABEL(name) for the local version?)

2018-05-19 07:46:01

by Ingo Molnar

[permalink] [raw]
Subject: Re: [PATCH v6 17/28] x86/asm: use SYM_INNER_LABEL instead of GLOBAL


* Andy Lutomirski <[email protected]> wrote:

> On Fri, May 18, 2018 at 2:17 AM Jiri Slaby <[email protected]> wrote:
>
> > GLOBAL had several meanings and is going away. In this patch, convert
> > all the inner function labels marked with GLOBAL to use SYM_INNER_LABEL
> > instead.
>
> > Note that retint_user needs not be global, perhaps since commit
> > 2ec67971facc ("x86/entry/64/compat: Remove most of the fast system call
> > machinery"), where entry_64_compat's caller was removed. So mark the
> > label as LOCAL.
>
>
> > -GLOBAL(entry_SYSCALL_64_after_hwframe)
> > +SYM_INNER_LABEL(entry_SYSCALL_64_after_hwframe, SYM_L_GLOBAL)
>
> I've missed all the context here. I agree that GLOBAL is misleading, and
> "inner label" is nice. But this is a rather wordy macro. Would:
>
> INNER_LABEL_GLOBAL(name)
>
> be better? (With just INNER_LABEL(name) for the local version?)

Please keep the SYM_ global namespace for all these symbol macros - but the rest
of the name can be shortened.

Thanks,

Ingo

2018-05-19 19:44:10

by Pavel Machek

[permalink] [raw]
Subject: Re: [PATCH v6 02/28] x86/asm/suspend: drop ENTRY from local data

On Fri 2018-05-18 11:16:55, Jiri Slaby wrote:
> ENTRY was intended for functions and shall be paired with ENDPROC.
> ENTRY also aligns symbols which creates unnecessary holes here between
> data.
>
> So drop ENTRY from saved_eip in wakeup_32 and many saved_* in wakeup_64,
> as these symbols are local only.
>
> We could use SYM_DATA_LOCAL for these symbols, but it was discouraged
> earlier [1].
>
> [1] https://lkml.org/lkml/2017/4/27/244
>
> Signed-off-by: Jiri Slaby <[email protected]>
> Cc: "Rafael J. Wysocki" <[email protected]>
> Cc: Len Brown <[email protected]>

Acked-by: Pavel Machek <[email protected]>

--
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html



2018-05-19 19:45:53

by Pavel Machek

[permalink] [raw]
Subject: Re: [PATCH v6 03/28] x86/asm/suspend: use SYM_DATA for data

On Fri 2018-05-18 11:16:56, Jiri Slaby wrote:
> Some global data in the suspend code were marked as `ENTRY'. ENTRY was
> intended for functions and shall be paired with ENDPROC. ENTRY also
> aligns symbols which creates unnecessary holes here between data. Since
> we are dropping historical markings, make proper use of newly added
> SYM_DATA in this code.
>
> Signed-off-by: Jiri Slaby <[email protected]>
> Cc: "Rafael J. Wysocki" <[email protected]>
> Cc: Len Brown <[email protected]>

Acked-by: Pavel Machek <[email protected]>

--
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html



2018-05-21 07:13:56

by Jiri Slaby

[permalink] [raw]
Subject: Re: [PATCH v6 17/28] x86/asm: use SYM_INNER_LABEL instead of GLOBAL

On 05/18/2018, 09:41 PM, Andy Lutomirski wrote:
> On Fri, May 18, 2018 at 2:17 AM Jiri Slaby <[email protected]> wrote:
>
>> GLOBAL had several meanings and is going away. In this patch, convert
>> all the inner function labels marked with GLOBAL to use SYM_INNER_LABEL
>> instead.
>
>> Note that retint_user needs not be global, perhaps since commit
>> 2ec67971facc ("x86/entry/64/compat: Remove most of the fast system call
>> machinery"), where entry_64_compat's caller was removed. So mark the
>> label as LOCAL.
>
>
>> -GLOBAL(entry_SYSCALL_64_after_hwframe)
>> +SYM_INNER_LABEL(entry_SYSCALL_64_after_hwframe, SYM_L_GLOBAL)
>
> I've missed all the context here. I agree that GLOBAL is misleading, and
> "inner label" is nice. But this is a rather wordy macro. Would:
>
> INNER_LABEL_GLOBAL(name)
>
> be better? (With just INNER_LABEL(name) for the local version?)

All the macros have SYM_ prefix. Other macros look like this:
SYM_FUNC_START_LOCAL(name)
SYM_FUNC_START(name)

So I can make the inner one:
SYM_INNER_LABEL_LOCAL(name)
SYM_INNER_LABEL(name)

to be consistent with the rest, if that is OK?

thanks,
--
js
suse labs