2009-03-24 05:31:38

by Tim Abbott

Subject: [PATCH 0/4] Add support for compiling with -ffunction-sections -fdata-sections

Hi Linus,

I'd like to see this patch series merged for 2.6.30. It applies on
top of your current master (aka v2.6.29). Patch 1/4 would benefit
from special treatment during the merge window, since it makes many
small changes in lots of files, and thus is likely to conflict with
other changes; the other patches are fairly small.

The purpose of this patch series is to make it possible to build the
kernel with "gcc -ffunction-sections -fdata-sections". There are two
major applications for this functionality: decreasing vmlinux image
size with --gc-sections, and Ksplice.

The original motivation for this functionality was to allow using the
linker's unused-section garbage collection support (ld --gc-sections)
in order to get a smaller vmlinux image for embedded systems. Patches
to support building the kernel with -ffunction-sections
-fdata-sections for this purpose have been in development for a few
years now (e.g. [1]). The most recent set of patches for
--gc-sections, by Denys Vlasenko, saved 10% on vmlinux size with
CONFIG_MODULES=n and 1% with CONFIG_MODULES=y [2,3,4]. The primary
source of complexity in the various section garbage collection patch
series has been the patches adding support for compiling the kernel
with -ffunction-sections -fdata-sections, so merging this series
should be a big step towards those significant savings in kernel
image size.
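
To illustrate the mechanism (a minimal sketch, not taken from this
series; the file and symbol names are invented): with these flags gcc
gives every function and every data object its own ELF section, which
is what allows ld's --gc-sections to discard the unreferenced ones.

    /* example.c */
    static int answer = 42;              /* -fdata-sections: .data.answer */
    static const char msg[] = "hello";   /* -fdata-sections: .rodata.msg */
    int used(void)   { return answer; }  /* -ffunction-sections: .text.used */
    int unused(void) { return msg[0]; }  /* -ffunction-sections: .text.unused */

Compiling with "gcc -c -ffunction-sections -fdata-sections example.c"
and running "objdump -h example.o" shows the per-symbol sections;
linking with --gc-sections can then drop .text.unused if nothing
references it.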

[1] <http://lkml.org/lkml/2006/6/4/169>
[2] <http://lkml.org/lkml/2007/9/5/90>
[3] <http://lkml.org/lkml/2007/9/7/110>
[4] <http://lkml.org/lkml/2008/7/1/499>

Support for building the kernel with -ffunction-sections
-fdata-sections is the only significant prerequisite change currently
required by Ksplice. Ksplice is a system for updating the Linux
kernel without rebooting [5].

[5] <http://lkml.org/lkml/2009/2/24/362> is the latest version of the
Ksplice patch series

Patches 1/4, 2/4, and 3/4 are independent of each other, but
all three are prerequisites for patch 4/4.

--

Anders Kaseorg (1):
modpost: Check the section flags, not name, to catch missing
"ax"/"aw"

Denys Vlasenko (1):
modpost: support objects with more than 64k sections

Tim Abbott (1):
Make section names compatible with -ffunction-sections
-fdata-sections

Waseem Daher (1):
x86: Add an option to compile with -ffunction-sections
-fdata-sections

Documentation/mutex-design.txt | 4 +-
Makefile | 4 +
arch/alpha/kernel/head.S | 2 +-
arch/alpha/kernel/init_task.c | 2 +-
arch/alpha/kernel/vmlinux.lds.S | 14 ++--
arch/arm/kernel/head-nommu.S | 2 +-
arch/arm/kernel/head.S | 2 +-
arch/arm/kernel/init_task.c | 2 +-
arch/arm/kernel/vmlinux.lds.S | 14 ++--
arch/arm/mm/proc-v6.S | 2 +-
arch/arm/mm/proc-v7.S | 2 +-
arch/arm/mm/tlb-v6.S | 2 +-
arch/arm/mm/tlb-v7.S | 2 +-
arch/avr32/kernel/init_task.c | 2 +-
arch/avr32/kernel/vmlinux.lds.S | 6 +-
arch/avr32/mm/init.c | 2 +-
arch/blackfin/kernel/vmlinux.lds.S | 2 +-
arch/cris/kernel/process.c | 2 +-
arch/cris/kernel/vmlinux.lds.S | 2 +-
arch/frv/kernel/break.S | 4 +-
arch/frv/kernel/entry.S | 2 +-
arch/frv/kernel/head-mmu-fr451.S | 2 +-
arch/frv/kernel/head-uc-fr401.S | 2 +-
arch/frv/kernel/head-uc-fr451.S | 2 +-
arch/frv/kernel/head-uc-fr555.S | 2 +-
arch/frv/kernel/head.S | 4 +-
arch/frv/kernel/init_task.c | 2 +-
arch/frv/kernel/vmlinux.lds.S | 18 ++--
arch/frv/mm/tlb-miss.S | 2 +-
arch/h8300/boot/compressed/head.S | 2 +-
arch/h8300/boot/compressed/vmlinux.lds | 2 +-
arch/h8300/kernel/init_task.c | 2 +-
arch/h8300/kernel/vmlinux.lds.S | 2 +-
arch/ia64/include/asm/asmmacro.h | 12 +-
arch/ia64/include/asm/cache.h | 2 +-
arch/ia64/include/asm/percpu.h | 2 +-
arch/ia64/kernel/Makefile | 2 +-
arch/ia64/kernel/gate-data.S | 2 +-
arch/ia64/kernel/gate.S | 8 +-
arch/ia64/kernel/gate.lds.S | 10 +-
arch/ia64/kernel/head.S | 2 +-
arch/ia64/kernel/init_task.c | 4 +-
arch/ia64/kernel/ivt.S | 2 +-
arch/ia64/kernel/minstate.h | 4 +-
arch/ia64/kernel/paravirtentry.S | 2 +-
arch/ia64/kernel/vmlinux.lds.S | 48 ++++----
arch/ia64/kvm/vmm_ivt.S | 2 +-
arch/ia64/xen/xensetup.S | 2 +-
arch/m32r/kernel/head.S | 2 +-
arch/m32r/kernel/init_task.c | 2 +-
arch/m32r/kernel/vmlinux.lds.S | 8 +-
arch/m68k/kernel/head.S | 2 +-
arch/m68k/kernel/process.c | 2 +-
arch/m68k/kernel/sun3-head.S | 2 +-
arch/m68k/kernel/vmlinux-std.lds | 6 +-
arch/m68k/kernel/vmlinux-sun3.lds | 4 +-
arch/m68knommu/kernel/init_task.c | 2 +-
arch/m68knommu/kernel/vmlinux.lds.S | 6 +-
arch/m68knommu/platform/68360/head-ram.S | 2 +-
arch/m68knommu/platform/68360/head-rom.S | 2 +-
arch/mips/kernel/init_task.c | 2 +-
arch/mips/kernel/vmlinux.lds.S | 8 +-
arch/mips/lasat/image/head.S | 2 +-
arch/mips/lasat/image/romscript.normal | 2 +-
arch/mn10300/kernel/head.S | 2 +-
arch/mn10300/kernel/init_task.c | 2 +-
arch/mn10300/kernel/vmlinux.lds.S | 16 ++--
arch/parisc/include/asm/cache.h | 2 +-
arch/parisc/include/asm/system.h | 2 +-
arch/parisc/kernel/head.S | 2 +-
arch/parisc/kernel/init_task.c | 8 +-
arch/parisc/kernel/vmlinux.lds.S | 26 +++---
arch/powerpc/include/asm/cache.h | 2 +-
arch/powerpc/include/asm/page_64.h | 2 +-
arch/powerpc/include/asm/ppc_asm.h | 4 +-
arch/powerpc/kernel/head_32.S | 2 +-
arch/powerpc/kernel/head_40x.S | 2 +-
arch/powerpc/kernel/head_44x.S | 2 +-
arch/powerpc/kernel/head_8xx.S | 2 +-
arch/powerpc/kernel/head_fsl_booke.S | 2 +-
arch/powerpc/kernel/init_task.c | 2 +-
arch/powerpc/kernel/machine_kexec_64.c | 2 +-
arch/powerpc/kernel/vdso.c | 2 +-
arch/powerpc/kernel/vdso32/vdso32_wrapper.S | 2 +-
arch/powerpc/kernel/vdso64/vdso64_wrapper.S | 2 +-
arch/powerpc/kernel/vmlinux.lds.S | 28 +++---
arch/s390/include/asm/cache.h | 2 +-
arch/s390/kernel/head.S | 2 +-
arch/s390/kernel/init_task.c | 2 +-
arch/s390/kernel/vdso.c | 2 +-
arch/s390/kernel/vdso32/vdso32_wrapper.S | 2 +-
arch/s390/kernel/vdso64/vdso64_wrapper.S | 2 +-
arch/s390/kernel/vmlinux.lds.S | 20 ++--
arch/sh/include/asm/cache.h | 2 +-
arch/sh/kernel/cpu/sh5/entry.S | 4 +-
arch/sh/kernel/head_32.S | 2 +-
arch/sh/kernel/head_64.S | 2 +-
arch/sh/kernel/init_task.c | 2 +-
arch/sh/kernel/irq.c | 4 +-
arch/sh/kernel/vmlinux_32.lds.S | 14 ++--
arch/sh/kernel/vmlinux_64.lds.S | 14 ++--
arch/sparc/boot/btfixupprep.c | 4 +-
arch/sparc/include/asm/cache.h | 2 +-
arch/sparc/kernel/head_32.S | 4 +-
arch/sparc/kernel/head_64.S | 2 +-
arch/sparc/kernel/init_task.c | 2 +-
arch/sparc/kernel/vmlinux.lds.S | 14 ++--
arch/um/include/asm/common.lds.S | 4 +-
arch/um/kernel/dyn.lds.S | 4 +-
arch/um/kernel/init_task.c | 4 +-
arch/um/kernel/uml.lds.S | 4 +-
arch/x86/Kconfig | 1 +
arch/x86/boot/compressed/head_32.S | 2 +-
arch/x86/boot/compressed/head_64.S | 2 +-
arch/x86/boot/compressed/relocs.c | 2 +-
arch/x86/boot/compressed/vmlinux.scr | 2 +-
arch/x86/boot/compressed/vmlinux_32.lds | 14 ++-
arch/x86/boot/compressed/vmlinux_64.lds | 10 +-
arch/x86/include/asm/cache.h | 4 +-
arch/x86/kernel/acpi/wakeup_32.S | 2 +-
arch/x86/kernel/head_32.S | 6 +-
arch/x86/kernel/head_64.S | 4 +-
arch/x86/kernel/init_task.c | 4 +-
arch/x86/kernel/traps.c | 2 +-
arch/x86/kernel/vmlinux_32.lds.S | 37 ++++---
arch/x86/kernel/vmlinux_64.lds.S | 27 +++---
arch/xtensa/kernel/head.S | 2 +-
arch/xtensa/kernel/init_task.c | 2 +-
arch/xtensa/kernel/vmlinux.lds.S | 6 +-
include/asm-frv/init.h | 8 +-
include/asm-generic/vmlinux.lds.h | 19 ++--
include/linux/cache.h | 2 +-
include/linux/init.h | 8 +-
include/linux/linkage.h | 4 +-
include/linux/percpu.h | 10 +-
include/linux/spinlock.h | 2 +-
kernel/module.c | 2 +-
lib/Kconfig.debug | 18 +++
scripts/mod/file2alias.c | 6 +-
scripts/mod/modpost.c | 155 ++++++++++++++++-----------
scripts/mod/modpost.h | 43 ++++++++
scripts/recordmcount.pl | 6 +-
142 files changed, 515 insertions(+), 409 deletions(-)


2009-03-24 05:30:33

by Tim Abbott

Subject: [PATCH 4/4] x86: Add an option to compile with -ffunction-sections -fdata-sections

From: Waseem Daher <[email protected]>

This patch makes it possible to link and boot an x86 kernel with
-ffunction-sections and -fdata-sections enabled.

Modpost currently warns whenever it sees a section whose name matches
[.][0-9]+$, because such names are often caused by section flag
mismatch errors. When compiling with -ffunction-sections
-fdata-sections, gcc places various classes of local symbols in
sections with names such as .rodata.__func__.12345, causing these
warnings to be printed spuriously. The simplest fix is to disable the
warning when CONFIG_FUNCTION_DATA_SECTIONS is enabled.
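
For instance (a hypothetical sketch, not part of the patch):

    const char *name(void)
    {
        return __func__;   /* gcc emits the "name" string as a local
                              object; with -fdata-sections it lands in
                              a section named like .rodata.__func__.12345,
                              whose numeric suffix matches modpost's
                              [.][0-9]+$ pattern */
    }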

Signed-off-by: Waseem Daher <[email protected]>
[[email protected]: modpost support]
Signed-off-by: Tim Abbott <[email protected]>
[[email protected]: depend on x86, update CONFIG_FUNCTION_TRACER conflict]
Signed-off-by: Anders Kaseorg <[email protected]>
---
Makefile | 4 ++++
arch/x86/Kconfig | 1 +
arch/x86/kernel/vmlinux_32.lds.S | 1 +
arch/x86/kernel/vmlinux_64.lds.S | 1 +
include/asm-generic/vmlinux.lds.h | 5 ++++-
lib/Kconfig.debug | 18 ++++++++++++++++++
6 files changed, 29 insertions(+), 1 deletions(-)

diff --git a/Makefile b/Makefile
index 1ab3ebf..6fe291a 100644
--- a/Makefile
+++ b/Makefile
@@ -551,6 +551,10 @@ ifdef CONFIG_FUNCTION_TRACER
KBUILD_CFLAGS += -pg
endif

+ifdef CONFIG_FUNCTION_DATA_SECTIONS
+KBUILD_CFLAGS += -ffunction-sections -fdata-sections
+endif
+
# We trigger additional mismatches with less inlining
ifdef CONFIG_DEBUG_SECTION_MISMATCH
KBUILD_CFLAGS += $(call cc-option, -fno-inline-functions-called-once)
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index bc2fbad..0af2ace 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -28,6 +28,7 @@ config X86
select HAVE_KPROBES
select ARCH_WANT_OPTIONAL_GPIOLIB
select ARCH_WANT_FRAME_POINTERS
+ select HAVE_FUNCTION_DATA_SECTIONS
select HAVE_KRETPROBES
select HAVE_FTRACE_MCOUNT_RECORD
select HAVE_DYNAMIC_FTRACE
diff --git a/arch/x86/kernel/vmlinux_32.lds.S b/arch/x86/kernel/vmlinux_32.lds.S
index 31c131a..fbc5c3f 100644
--- a/arch/x86/kernel/vmlinux_32.lds.S
+++ b/arch/x86/kernel/vmlinux_32.lds.S
@@ -194,6 +194,7 @@ SECTIONS
__bss_start = .; /* BSS */
*(.bss..page_aligned)
*(.bss)
+ *(.bss.[A-Za-z$_]*) /* handle -fdata-sections */
. = ALIGN(4);
__bss_stop = .;
_end = . ;
diff --git a/arch/x86/kernel/vmlinux_64.lds.S b/arch/x86/kernel/vmlinux_64.lds.S
index 56ed592..6e03cb2 100644
--- a/arch/x86/kernel/vmlinux_64.lds.S
+++ b/arch/x86/kernel/vmlinux_64.lds.S
@@ -223,6 +223,7 @@ SECTIONS
.bss : AT(ADDR(.bss) - LOAD_OFFSET) {
*(.bss..page_aligned)
*(.bss)
+ *(.bss.[A-Za-z$_]*) /* handle -fdata-sections */
}
__bss_stop = .;

diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h
index 0485f0b..2b31598 100644
--- a/include/asm-generic/vmlinux.lds.h
+++ b/include/asm-generic/vmlinux.lds.h
@@ -64,6 +64,7 @@
/* .data section */
#define DATA_DATA \
*(.data) \
+ *(.data.[A-Za-z$_]*) /* handle -fdata-sections */ \
*(.data..init.refok) \
*(.ref.data) \
DEV_KEEP(init.data) \
@@ -87,7 +88,8 @@
. = ALIGN((align)); \
.rodata : AT(ADDR(.rodata) - LOAD_OFFSET) { \
VMLINUX_SYMBOL(__start_rodata) = .; \
- *(.rodata) *(.rodata.*) \
+ *(.rodata) \
+ *(.rodata.[A-Za-z$_]*) /* handle -fdata-sections */ \
*(__vermagic) /* Kernel version magic */ \
*(__markers_strings) /* Markers: strings */ \
*(__tracepoints_strings)/* Tracepoints: strings */ \
@@ -254,6 +256,7 @@
ALIGN_FUNCTION(); \
*(.text.hot) \
*(.text) \
+ *(.text.[A-Za-z$_]*) /* handle -ffunction-sections */\
*(.ref.text) \
*(.text..init.refok) \
*(.text..exit.refok) \
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index 1bcf9cd..d2f7427 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -591,6 +591,24 @@ config FRAME_POINTER
larger and slower, but it gives very useful debugging information
in case of kernel bugs. (precise oopses/stacktraces/warnings)

+config HAVE_FUNCTION_DATA_SECTIONS
+ bool
+
+config FUNCTION_DATA_SECTIONS
+ bool "Compile with -ffunction-sections -fdata-sections"
+ depends on HAVE_FUNCTION_DATA_SECTIONS
+ depends on !FUNCTION_TRACER
+ help
+ If you say Y here the compiler will give each function
+ and data structure its own ELF section.
+
+ This option conflicts with CONFIG_FUNCTION_TRACER, which
+ enables profiling code generation, because current GCC does
+ not support compiling with -ffunction-sections -pg (see
+ <http://gcc.gnu.org/ml/gcc-help/2008-11/msg00128.html>).
+
+ If unsure, say N.
+
config BOOT_PRINTK_DELAY
bool "Delay each boot printk message by N milliseconds"
depends on DEBUG_KERNEL && PRINTK && GENERIC_CALIBRATE_DELAY
--
1.6.2

2009-03-24 05:30:06

by Tim Abbott

Subject: [PATCH 2/4] modpost: Check the section flags, not name, to catch missing "ax"/"aw"

From: Anders Kaseorg <[email protected]>

When you put
.section ".foo"
in an assembly file instead of
.section ".foo", "ax"
one possible symptom is that modpost will see an ld-generated
section name ".foo.1" in section_rel() or section_rela().
But this heuristic has two problems: it will miss a bad section that
has no relocations, and it will incorrectly flag many gcc-generated
sections as bad when compiling with -ffunction-sections
-fdata-sections.

So instead of checking whether the section name matches a particular
pattern, we directly check for a missing SHF_ALLOC in the section
flags.
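
The same test can be sketched outside modpost using nothing but
<elf.h> (a standalone illustration with invented names; the real
implementation below also consults a whitelist covering sections such
as ".comment" and ".debug*", which are legitimately non-allocatable):

    /* checksec.c: flag non-allocatable PROGBITS sections (64-bit objects) */
    #include <elf.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        FILE *f;
        Elf64_Ehdr eh;
        Elf64_Shdr *sh;
        char *strs;
        int i;

        if (argc < 2 || !(f = fopen(argv[1], "rb")))
            return 1;
        if (fread(&eh, sizeof(eh), 1, f) != 1)
            return 1;

        sh = calloc(eh.e_shnum, sizeof(*sh));
        fseek(f, eh.e_shoff, SEEK_SET);
        if (fread(sh, sizeof(*sh), eh.e_shnum, f) != eh.e_shnum)
            return 1;

        /* section-name string table */
        strs = malloc(sh[eh.e_shstrndx].sh_size);
        fseek(f, sh[eh.e_shstrndx].sh_offset, SEEK_SET);
        if (fread(strs, 1, sh[eh.e_shstrndx].sh_size, f)
            != sh[eh.e_shstrndx].sh_size)
            return 1;

        for (i = 1; i < eh.e_shnum; i++)
            if (sh[i].sh_type == SHT_PROGBITS &&
                !(sh[i].sh_flags & SHF_ALLOC))
                printf("%s: unexpected non-allocatable section\n",
                       strs + sh[i].sh_name);
        return 0;
    }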

Signed-off-by: Anders Kaseorg <[email protected]>
---
scripts/mod/modpost.c | 49 ++++++++++++++++++-------------------------------
1 files changed, 18 insertions(+), 31 deletions(-)

diff --git a/scripts/mod/modpost.c b/scripts/mod/modpost.c
index a7e282e..fba468a 100644
--- a/scripts/mod/modpost.c
+++ b/scripts/mod/modpost.c
@@ -713,41 +713,27 @@ int match(const char *sym, const char * const pat[])

/* sections that we do not want to do full section mismatch check on */
static const char *section_white_list[] =
- { ".debug*", ".stab*", ".note*", ".got*", ".toc*", NULL };
+ { ".comment", ".debug*", ".stab*", ".note*", ".got*", ".toc*", NULL };

/*
- * Is this section one we do not want to check?
- * This is often debug sections.
- * If we are going to check this section then
- * test if section name ends with a dot and a number.
- * This is used to find sections where the linker have
- * appended a dot-number to make the name unique.
+ * This is used to find sections missing the SHF_ALLOC flag.
* The cause of this is often a section specified in assembler
- * without "ax" / "aw" and the same section used in .c
- * code where gcc add these.
+ * without "ax" / "aw".
*/
-static int check_section(const char *modname, const char *sec)
-{
- const char *e = sec + strlen(sec) - 1;
- if (match(sec, section_white_list))
- return 1;
-
- if (*e && isdigit(*e)) {
- /* consume all digits */
- while (*e && e != sec && isdigit(*e))
- e--;
- if (*e == '.' && !strstr(sec, ".linkonce")) {
- warn("%s (%s): unexpected section name.\n"
- "The (.[number]+) following section name are "
- "ld generated and not expected.\n"
- "Did you forget to use \"ax\"/\"aw\" "
- "in a .S file?\n"
- "Note that for example <linux/init.h> contains\n"
- "section definitions for use in .S files.\n\n",
- modname, sec);
- }
+static void check_section(const char *modname, struct elf_info *elf,
+ Elf_Shdr *sechdr)
+{
+ const char *sec = sech_name(elf, sechdr);
+
+ if (sechdr->sh_type == SHT_PROGBITS &&
+ !(sechdr->sh_flags & SHF_ALLOC) &&
+ !match(sec, section_white_list)) {
+ warn("%s (%s): unexpected non-allocatable section.\n"
+ "Did you forget to use \"ax\"/\"aw\" in a .S file?\n"
+ "Note that for example <linux/init.h> contains\n"
+ "section definitions for use in .S files.\n\n",
+ modname, sec);
}
- return 0;
}


@@ -1374,7 +1360,7 @@ static void section_rela(const char *modname, struct elf_info *elf,
fromsec = sech_name(elf, sechdr);
fromsec += strlen(".rela");
/* if from section (name) is know good then skip it */
- if (check_section(modname, fromsec))
+ if (match(fromsec, section_white_list))
return;

for (rela = start; rela < stop; rela++) {
@@ -1418,7 +1404,7 @@ static void section_rel(const char *modname, struct elf_info *elf,
fromsec = sech_name(elf, sechdr);
fromsec += strlen(".rel");
/* if from section (name) is know good then skip it */
- if (check_section(modname, fromsec))
+ if (match(fromsec, section_white_list))
return;

for (rel = start; rel < stop; rel++) {
@@ -1481,6 +1467,7 @@ static void check_sec_ref(struct module *mod, const char *modname,

/* Walk through all sections */
for (i = 0; i < elf->hdr->e_shnum; i++) {
+ check_section(modname, elf, &elf->sechdrs[i]);
/* We want to process only relocation sections and not .init */
if (sechdrs[i].sh_type == SHT_RELA)
section_rela(modname, elf, &elf->sechdrs[i]);
--
1.6.2

2009-03-24 05:30:52

by Tim Abbott

Subject: [PATCH 3/4] modpost: Support objects with more than 64k sections

From: Denys Vlasenko <[email protected]>

This patch enables modpost to process object files with more than
64k sections, which is needed for huge kernel builds (allyesconfig,
for example) with -ffunction-sections.
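
For reference, the ELF gABI convention involved (a sketch with
invented helper names, using standard <elf.h> definitions; on top of
this, the patch's shndx2secindex() also compacts the reserved index
range, as its comment below explains): when an object has 64k or more
sections, e_shnum is zero and the real count is stored in section
header 0, and a symbol whose st_shndx is SHN_XINDEX takes its real
section index from a parallel SHT_SYMTAB_SHNDX table.

    #include <elf.h>

    /* real section count; sh is the section header table */
    static size_t num_sections(const Elf64_Ehdr *eh, const Elf64_Shdr *sh)
    {
        return eh->e_shnum ? eh->e_shnum : sh[0].sh_size;
    }

    /* real section index of the n-th symbol; shndx_tab is the
       SHT_SYMTAB_SHNDX array running parallel to the symbol table */
    static unsigned int sym_secindex(const Elf64_Sym *sym,
                                     const Elf32_Word *shndx_tab, size_t n)
    {
        if (sym->st_shndx != SHN_XINDEX)
            return sym->st_shndx;
        return shndx_tab[n];
    }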

Signed-off-by: Denys Vlasenko <[email protected]>
[[email protected]: updated for 2.6.29]
Signed-off-by: Anders Kaseorg <[email protected]>
---
scripts/mod/file2alias.c | 6 +-
scripts/mod/modpost.c | 94 ++++++++++++++++++++++++++++++++-------------
scripts/mod/modpost.h | 43 +++++++++++++++++++++
3 files changed, 113 insertions(+), 30 deletions(-)

diff --git a/scripts/mod/file2alias.c b/scripts/mod/file2alias.c
index 4eea60b..7d3ebb5 100644
--- a/scripts/mod/file2alias.c
+++ b/scripts/mod/file2alias.c
@@ -753,16 +753,16 @@ void handle_moddevtable(struct module *mod, struct elf_info *info,
char *zeros = NULL;

/* We're looking for a section relative symbol */
- if (!sym->st_shndx || sym->st_shndx >= info->hdr->e_shnum)
+ if (!sym->st_shndx || get_secindex(info, sym) >= info->num_sections)
return;

/* Handle all-NULL symbols allocated into .bss */
- if (info->sechdrs[sym->st_shndx].sh_type & SHT_NOBITS) {
+ if (info->sechdrs[get_secindex(info, sym)].sh_type & SHT_NOBITS) {
zeros = calloc(1, sym->st_size);
symval = zeros;
} else {
symval = (void *)info->hdr
- + info->sechdrs[sym->st_shndx].sh_offset
+ + info->sechdrs[get_secindex(info, sym)].sh_offset
+ sym->st_value;
}

diff --git a/scripts/mod/modpost.c b/scripts/mod/modpost.c
index fba468a..8ee2034 100644
--- a/scripts/mod/modpost.c
+++ b/scripts/mod/modpost.c
@@ -244,7 +244,7 @@ static enum export export_no(const char *s)
return export_unknown;
}

-static enum export export_from_sec(struct elf_info *elf, Elf_Section sec)
+static enum export export_from_sec(struct elf_info *elf, unsigned int sec)
{
if (sec == elf->export_sec)
return export_plain;
@@ -364,6 +364,8 @@ static int parse_elf(struct elf_info *info, const char *filename)
Elf_Ehdr *hdr;
Elf_Shdr *sechdrs;
Elf_Sym *sym;
+ const char *secstrings;
+ unsigned int symtab_idx = ~0U, symtab_shndx_idx = ~0U;

hdr = grab_file(filename, &info->size);
if (!hdr) {
@@ -400,8 +402,19 @@ static int parse_elf(struct elf_info *info, const char *filename)
return 0;
}

+ /* Fixups for more than 64k sections */
+ info->num_sections = hdr->e_shnum;
+ if (info->num_sections == 0) { /* more than 64k sections? */
+ /* doesn't need shndx2secindex() */
+ info->num_sections = TO_NATIVE(sechdrs[0].sh_size);
+ }
+ info->secindex_strings = hdr->e_shstrndx;
+ if (info->secindex_strings == SHN_XINDEX)
+ info->secindex_strings =
+ shndx2secindex(TO_NATIVE(sechdrs[0].sh_link));
+
/* Fix endianness in section headers */
- for (i = 0; i < hdr->e_shnum; i++) {
+ for (i = 0; i < info->num_sections; i++) {
sechdrs[i].sh_type = TO_NATIVE(sechdrs[i].sh_type);
sechdrs[i].sh_offset = TO_NATIVE(sechdrs[i].sh_offset);
sechdrs[i].sh_size = TO_NATIVE(sechdrs[i].sh_size);
@@ -411,9 +424,8 @@ static int parse_elf(struct elf_info *info, const char *filename)
sechdrs[i].sh_addr = TO_NATIVE(sechdrs[i].sh_addr);
}
/* Find symbol table. */
- for (i = 1; i < hdr->e_shnum; i++) {
- const char *secstrings
- = (void *)hdr + sechdrs[hdr->e_shstrndx].sh_offset;
+ secstrings = (void *)hdr + sechdrs[info->secindex_strings].sh_offset;
+ for (i = 1; i < info->num_sections; i++) {
const char *secname;

if (sechdrs[i].sh_offset > info->size) {
@@ -440,14 +452,24 @@ static int parse_elf(struct elf_info *info, const char *filename)
else if (strcmp(secname, "__markers_strings") == 0)
info->markers_strings_sec = i;

- if (sechdrs[i].sh_type != SHT_SYMTAB)
- continue;
+ if (sechdrs[i].sh_type == SHT_SYMTAB) {
+ symtab_idx = i;
+ info->symtab_start = (void *)hdr +
+ sechdrs[i].sh_offset;
+ info->symtab_stop = (void *)hdr +
+ sechdrs[i].sh_offset + sechdrs[i].sh_size;
+ info->strtab = (void *)hdr +
+ sechdrs[shndx2secindex(sechdrs[i].sh_link)].sh_offset;
+ }

- info->symtab_start = (void *)hdr + sechdrs[i].sh_offset;
- info->symtab_stop = (void *)hdr + sechdrs[i].sh_offset
- + sechdrs[i].sh_size;
- info->strtab = (void *)hdr +
- sechdrs[sechdrs[i].sh_link].sh_offset;
+ /* 32bit section no. table? ("more than 64k sections") */
+ if (sechdrs[i].sh_type == SHT_SYMTAB_SHNDX) {
+ symtab_shndx_idx = i;
+ info->symtab_shndx_start = (void *)hdr +
+ sechdrs[i].sh_offset;
+ info->symtab_shndx_stop = (void *)hdr +
+ sechdrs[i].sh_offset + sechdrs[i].sh_size;
+ }
}
if (!info->symtab_start)
fatal("%s has no symtab?\n", filename);
@@ -459,6 +481,21 @@ static int parse_elf(struct elf_info *info, const char *filename)
sym->st_value = TO_NATIVE(sym->st_value);
sym->st_size = TO_NATIVE(sym->st_size);
}
+
+ if (symtab_shndx_idx != ~0U) {
+ Elf32_Word *p;
+ if (symtab_idx !=
+ shndx2secindex(sechdrs[symtab_shndx_idx].sh_link))
+ fatal("%s: SYMTAB_SHNDX has bad sh_link: %u!=%u\n",
+ filename,
+ shndx2secindex(sechdrs[symtab_shndx_idx].sh_link),
+ symtab_idx);
+ /* Fix endianness */
+ for (p = info->symtab_shndx_start; p < info->symtab_shndx_stop;
+ p++)
+ *p = TO_NATIVE(*p);
+ }
+
return 1;
}

@@ -493,7 +530,7 @@ static void handle_modversions(struct module *mod, struct elf_info *info,
Elf_Sym *sym, const char *symname)
{
unsigned int crc;
- enum export export = export_from_sec(info, sym->st_shndx);
+ enum export export = export_from_sec(info, get_secindex(info, sym));

switch (sym->st_shndx) {
case SHN_COMMON:
@@ -635,18 +672,18 @@ static const char *sym_name(struct elf_info *elf, Elf_Sym *sym)
return "(unknown)";
}

-static const char *sec_name(struct elf_info *elf, int shndx)
+static const char *sec_name(struct elf_info *elf, int secindex)
{
Elf_Shdr *sechdrs = elf->sechdrs;
return (void *)elf->hdr +
- elf->sechdrs[elf->hdr->e_shstrndx].sh_offset +
- sechdrs[shndx].sh_name;
+ elf->sechdrs[elf->secindex_strings].sh_offset +
+ sechdrs[secindex].sh_name;
}

static const char *sech_name(struct elf_info *elf, Elf_Shdr *sechdr)
{
return (void *)elf->hdr +
- elf->sechdrs[elf->hdr->e_shstrndx].sh_offset +
+ elf->sechdrs[elf->secindex_strings].sh_offset +
sechdr->sh_name;
}

@@ -983,11 +1020,14 @@ static Elf_Sym *find_elf_symbol(struct elf_info *elf, Elf64_Sword addr,
Elf_Sym *near = NULL;
Elf64_Sword distance = 20;
Elf64_Sword d;
+ unsigned int relsym_secindex;

if (relsym->st_name != 0)
return relsym;
+
+ relsym_secindex = get_secindex(elf, relsym);
for (sym = elf->symtab_start; sym < elf->symtab_stop; sym++) {
- if (sym->st_shndx != relsym->st_shndx)
+ if (get_secindex(elf, sym) != relsym_secindex)
continue;
if (ELF_ST_TYPE(sym->st_info) == STT_SECTION)
continue;
@@ -1049,9 +1089,9 @@ static Elf_Sym *find_elf_symbol2(struct elf_info *elf, Elf_Addr addr,
for (sym = elf->symtab_start; sym < elf->symtab_stop; sym++) {
const char *symsec;

- if (sym->st_shndx >= SHN_LORESERVE)
+ if (is_shndx_special(sym->st_shndx))
continue;
- symsec = sec_name(elf, sym->st_shndx);
+ symsec = sec_name(elf, get_secindex(elf, sym));
if (strcmp(symsec, sec) != 0)
continue;
if (!is_valid_name(elf, sym))
@@ -1248,7 +1288,7 @@ static void check_section_mismatch(const char *modname, struct elf_info *elf,
const char *tosec;
enum mismatch mismatch;

- tosec = sec_name(elf, sym->st_shndx);
+ tosec = sec_name(elf, get_secindex(elf, sym));
mismatch = section_mismatch(fromsec, tosec);
if (mismatch != NO_MISMATCH) {
Elf_Sym *to;
@@ -1275,7 +1315,7 @@ static unsigned int *reloc_location(struct elf_info *elf,
Elf_Shdr *sechdr, Elf_Rela *r)
{
Elf_Shdr *sechdrs = elf->sechdrs;
- int section = sechdr->sh_info;
+ int section = shndx2secindex(sechdr->sh_info);

return (void *)elf->hdr + sechdrs[section].sh_offset +
(r->r_offset - sechdrs[section].sh_addr);
@@ -1383,7 +1423,7 @@ static void section_rela(const char *modname, struct elf_info *elf,
r.r_addend = TO_NATIVE(rela->r_addend);
sym = elf->symtab_start + r_sym;
/* Skip special sections */
- if (sym->st_shndx >= SHN_LORESERVE)
+ if (is_shndx_special(sym->st_shndx))
continue;
check_section_mismatch(modname, elf, &r, sym, fromsec);
}
@@ -1441,7 +1481,7 @@ static void section_rel(const char *modname, struct elf_info *elf,
}
sym = elf->symtab_start + r_sym;
/* Skip special sections */
- if (sym->st_shndx >= SHN_LORESERVE)
+ if (is_shndx_special(sym->st_shndx))
continue;
check_section_mismatch(modname, elf, &r, sym, fromsec);
}
@@ -1466,7 +1506,7 @@ static void check_sec_ref(struct module *mod, const char *modname,
Elf_Shdr *sechdrs = elf->sechdrs;

/* Walk through all sections */
- for (i = 0; i < elf->hdr->e_shnum; i++) {
+ for (i = 0; i < elf->num_sections; i++) {
check_section(modname, elf, &elf->sechdrs[i]);
/* We want to process only relocation sections and not .init */
if (sechdrs[i].sh_type == SHT_RELA)
@@ -1496,7 +1536,7 @@ static void get_markers(struct elf_info *info, struct module *mod)
n = 0;
for (sym = info->symtab_start; sym < info->symtab_stop; sym++)
if (ELF_ST_TYPE(sym->st_info) == STT_OBJECT &&
- sym->st_shndx == info->markers_strings_sec &&
+ get_secindex(info, sym) == info->markers_strings_sec &&
!strncmp(info->strtab + sym->st_name,
"__mstrtab_", sizeof "__mstrtab_" - 1)) {
if (first_sym == NULL)
@@ -1520,7 +1560,7 @@ static void get_markers(struct elf_info *info, struct module *mod)
n = 0;
for (sym = first_sym; sym <= last_sym; sym++)
if (ELF_ST_TYPE(sym->st_info) == STT_OBJECT &&
- sym->st_shndx == info->markers_strings_sec &&
+ get_secindex(info, sym) == info->markers_strings_sec &&
!strncmp(info->strtab + sym->st_name,
"__mstrtab_", sizeof "__mstrtab_" - 1)) {
const char *name = strings + sym->st_value;
diff --git a/scripts/mod/modpost.h b/scripts/mod/modpost.h
index 09f58e3..dbde650 100644
--- a/scripts/mod/modpost.h
+++ b/scripts/mod/modpost.h
@@ -132,8 +132,51 @@ struct elf_info {
const char *strtab;
char *modinfo;
unsigned int modinfo_len;
+
+ /* support for 32bit section numbers */
+
+ unsigned int num_sections; /* max_secindex + 1 */
+ unsigned int secindex_strings;
+ /* if Nth symbol table entry has .st_shndx = SHN_XINDEX,
+ * take shndx from symtab_shndx_start[N] instead */
+ Elf32_Word *symtab_shndx_start;
+ Elf32_Word *symtab_shndx_stop;
};

+static inline int is_shndx_special(unsigned int i)
+{
+ return i != SHN_XINDEX && i >= SHN_LORESERVE && i <= SHN_HIRESERVE;
+}
+
+/* shndx is in [0..SHN_LORESERVE) U (SHN_HIRESERVE, 0xfffffff], thus:
+ * shndx == 0 <=> sechdrs[0]
+ * ......
+ * shndx == SHN_LORESERVE-1 <=> sechdrs[SHN_LORESERVE-1]
+ * shndx == SHN_HIRESERVE+1 <=> sechdrs[SHN_LORESERVE]
+ * shndx == SHN_HIRESERVE+2 <=> sechdrs[SHN_LORESERVE+1]
+ * ......
+ * fyi: sym->st_shndx is uint16, SHN_LORESERVE = ff00, SHN_HIRESERVE = ffff,
+ * so basically we map 0000..feff -> 0000..feff
+ * ff00..ffff -> (you are a bad boy, dont do it)
+ * 10000..xxxx -> ff00..(xxxx-0x100)
+ */
+static inline unsigned int shndx2secindex(unsigned int i)
+{
+ if (i <= SHN_HIRESERVE)
+ return i;
+ return i - (SHN_HIRESERVE + 1 - SHN_LORESERVE);
+}
+
+/* Accessor for sym->st_shndx, hides ugliness of "64k sections" */
+static inline unsigned int get_secindex(const struct elf_info *info,
+ const Elf_Sym *sym)
+{
+ if (sym->st_shndx != SHN_XINDEX)
+ return sym->st_shndx;
+ return shndx2secindex(info->symtab_shndx_start[sym -
+ info->symtab_start]);
+}
+
/* file2alias.c */
extern unsigned int cross_build;
void handle_moddevtable(struct module *mod, struct elf_info *info,
--
1.6.2

2009-03-24 05:31:20

by Tim Abbott

Subject: [PATCH 1/4] Make section names compatible with -ffunction-sections -fdata-sections

The purpose of this patch is to make it possible to build the kernel
with "gcc -ffunction-sections -fdata-sections". This is a key step
towards being able to compile the kernel using the --gc-sections
linker option, which can be used to decrease the vmlinux size by
garbage collecting unused functions. Also, Ksplice's 'run-pre
matching' process is much simpler if the original kernel was compiled
with -ffunction-sections and -fdata-sections.

Currently, the kernel uses a number of "magic" section names such as
".data.nosave" and ".text.head".

The problem is that with -ffunction-sections -fdata-sections, gcc
places code like

static void head(...) {...}

in the .text.head section, and code like

static int nosave = 1;

in the .data.nosave section, causing code and data to be
inappropriately placed in the "magic" sections. This patch renames
all "magic" section names
used by the kernel to use ".." rather than "." as the delimiter
between the section prefix (e.g. ".text") and suffix (e.g. "head"), so
that ".data.nosave" becomes ".data..nosave", etc. The key property of
these names is that there are no collisions between the kernel's
"magic" sections and the sections generated by gcc's
-ffunction-sections and -fdata-sections options. One can then
reference the sections generated by -ffunction-sections using a linker
script pattern such as *(.text.[A-Za-z$_]*).
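
As a brief aside on why ".." cannot collide (not from the patch
itself): gcc derives these section names from C identifiers, as in

    static int nosave2;    /* hypothetical; -fdata-sections places this
                              in .bss.nosave2: a ".text."/".data."/".bss."
                              prefix followed by an identifier */

and a C identifier can never contain '.'. A kernel name such as
".data..nosave", with two consecutive dots, is therefore out of gcc's
reach, and the glob *(.text.[A-Za-z$_]*) matches gcc's sections but
not the kernel's, since '.' is not in [A-Za-z$_].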

This patch is based on some earlier patches by Denys Vlasenko.
Those earlier patches used section names like ".kernel.text.head".
Because these names did not begin with e.g. ".text", the assembler did
not automatically use the correct section flags for them, and so one
needed to explicitly declare the section flags when declaring a
section in assembly files using code such as:

.section .kernel.text.head, "ax", @progbits

These explicit section flags in assembly files turned out to be a
common source of confusion, and so I rewrote the patch with this new
section naming scheme suggested by Anders Kaseorg.

Signed-off-by: Tim Abbott <[email protected]>
Signed-off-by: Anders Kaseorg <[email protected]>
---
Documentation/mutex-design.txt | 4 +-
arch/alpha/kernel/head.S | 2 +-
arch/alpha/kernel/init_task.c | 2 +-
arch/alpha/kernel/vmlinux.lds.S | 14 ++++----
arch/arm/kernel/head-nommu.S | 2 +-
arch/arm/kernel/head.S | 2 +-
arch/arm/kernel/init_task.c | 2 +-
arch/arm/kernel/vmlinux.lds.S | 14 ++++----
arch/arm/mm/proc-v6.S | 2 +-
arch/arm/mm/proc-v7.S | 2 +-
arch/arm/mm/tlb-v6.S | 2 +-
arch/arm/mm/tlb-v7.S | 2 +-
arch/avr32/kernel/init_task.c | 2 +-
arch/avr32/kernel/vmlinux.lds.S | 6 ++--
arch/avr32/mm/init.c | 2 +-
arch/blackfin/kernel/vmlinux.lds.S | 2 +-
arch/cris/kernel/process.c | 2 +-
arch/cris/kernel/vmlinux.lds.S | 2 +-
arch/frv/kernel/break.S | 4 +-
arch/frv/kernel/entry.S | 2 +-
arch/frv/kernel/head-mmu-fr451.S | 2 +-
arch/frv/kernel/head-uc-fr401.S | 2 +-
arch/frv/kernel/head-uc-fr451.S | 2 +-
arch/frv/kernel/head-uc-fr555.S | 2 +-
arch/frv/kernel/head.S | 4 +-
arch/frv/kernel/init_task.c | 2 +-
arch/frv/kernel/vmlinux.lds.S | 18 +++++-----
arch/frv/mm/tlb-miss.S | 2 +-
arch/h8300/boot/compressed/head.S | 2 +-
arch/h8300/boot/compressed/vmlinux.lds | 2 +-
arch/h8300/kernel/init_task.c | 2 +-
arch/h8300/kernel/vmlinux.lds.S | 2 +-
arch/ia64/include/asm/asmmacro.h | 12 +++---
arch/ia64/include/asm/cache.h | 2 +-
arch/ia64/include/asm/percpu.h | 2 +-
arch/ia64/kernel/Makefile | 2 +-
arch/ia64/kernel/gate-data.S | 2 +-
arch/ia64/kernel/gate.S | 8 ++--
arch/ia64/kernel/gate.lds.S | 10 +++---
arch/ia64/kernel/head.S | 2 +-
arch/ia64/kernel/init_task.c | 4 +-
arch/ia64/kernel/ivt.S | 2 +-
arch/ia64/kernel/minstate.h | 4 +-
arch/ia64/kernel/paravirtentry.S | 2 +-
arch/ia64/kernel/vmlinux.lds.S | 48 +++++++++++++-------------
arch/ia64/kvm/vmm_ivt.S | 2 +-
arch/ia64/xen/xensetup.S | 2 +-
arch/m32r/kernel/head.S | 2 +-
arch/m32r/kernel/init_task.c | 2 +-
arch/m32r/kernel/vmlinux.lds.S | 8 ++--
arch/m68k/kernel/head.S | 2 +-
arch/m68k/kernel/process.c | 2 +-
arch/m68k/kernel/sun3-head.S | 2 +-
arch/m68k/kernel/vmlinux-std.lds | 6 ++--
arch/m68k/kernel/vmlinux-sun3.lds | 4 +-
arch/m68knommu/kernel/init_task.c | 2 +-
arch/m68knommu/kernel/vmlinux.lds.S | 6 ++--
arch/m68knommu/platform/68360/head-ram.S | 2 +-
arch/m68knommu/platform/68360/head-rom.S | 2 +-
arch/mips/kernel/init_task.c | 2 +-
arch/mips/kernel/vmlinux.lds.S | 8 ++--
arch/mips/lasat/image/head.S | 2 +-
arch/mips/lasat/image/romscript.normal | 2 +-
arch/mn10300/kernel/head.S | 2 +-
arch/mn10300/kernel/init_task.c | 2 +-
arch/mn10300/kernel/vmlinux.lds.S | 16 ++++----
arch/parisc/include/asm/cache.h | 2 +-
arch/parisc/include/asm/system.h | 2 +-
arch/parisc/kernel/head.S | 2 +-
arch/parisc/kernel/init_task.c | 8 ++--
arch/parisc/kernel/vmlinux.lds.S | 26 +++++++-------
arch/powerpc/include/asm/cache.h | 2 +-
arch/powerpc/include/asm/page_64.h | 2 +-
arch/powerpc/include/asm/ppc_asm.h | 4 +-
arch/powerpc/kernel/head_32.S | 2 +-
arch/powerpc/kernel/head_40x.S | 2 +-
arch/powerpc/kernel/head_44x.S | 2 +-
arch/powerpc/kernel/head_8xx.S | 2 +-
arch/powerpc/kernel/head_fsl_booke.S | 2 +-
arch/powerpc/kernel/init_task.c | 2 +-
arch/powerpc/kernel/machine_kexec_64.c | 2 +-
arch/powerpc/kernel/vdso.c | 2 +-
arch/powerpc/kernel/vdso32/vdso32_wrapper.S | 2 +-
arch/powerpc/kernel/vdso64/vdso64_wrapper.S | 2 +-
arch/powerpc/kernel/vmlinux.lds.S | 28 ++++++++--------
arch/s390/include/asm/cache.h | 2 +-
arch/s390/kernel/head.S | 2 +-
arch/s390/kernel/init_task.c | 2 +-
arch/s390/kernel/vdso.c | 2 +-
arch/s390/kernel/vdso32/vdso32_wrapper.S | 2 +-
arch/s390/kernel/vdso64/vdso64_wrapper.S | 2 +-
arch/s390/kernel/vmlinux.lds.S | 20 +++++-----
arch/sh/include/asm/cache.h | 2 +-
arch/sh/kernel/cpu/sh5/entry.S | 4 +-
arch/sh/kernel/head_32.S | 2 +-
arch/sh/kernel/head_64.S | 2 +-
arch/sh/kernel/init_task.c | 2 +-
arch/sh/kernel/irq.c | 4 +-
arch/sh/kernel/vmlinux_32.lds.S | 14 ++++----
arch/sh/kernel/vmlinux_64.lds.S | 14 ++++----
arch/sparc/boot/btfixupprep.c | 4 +-
arch/sparc/include/asm/cache.h | 2 +-
arch/sparc/kernel/head_32.S | 4 +-
arch/sparc/kernel/head_64.S | 2 +-
arch/sparc/kernel/init_task.c | 2 +-
arch/sparc/kernel/vmlinux.lds.S | 14 ++++----
arch/um/include/asm/common.lds.S | 4 +-
arch/um/kernel/dyn.lds.S | 4 +-
arch/um/kernel/init_task.c | 4 +-
arch/um/kernel/uml.lds.S | 4 +-
arch/x86/boot/compressed/head_32.S | 2 +-
arch/x86/boot/compressed/head_64.S | 2 +-
arch/x86/boot/compressed/relocs.c | 2 +-
arch/x86/boot/compressed/vmlinux.scr | 2 +-
arch/x86/boot/compressed/vmlinux_32.lds | 14 +++++--
arch/x86/boot/compressed/vmlinux_64.lds | 10 +++--
arch/x86/include/asm/cache.h | 4 +-
arch/x86/kernel/acpi/wakeup_32.S | 2 +-
arch/x86/kernel/head_32.S | 6 ++--
arch/x86/kernel/head_64.S | 4 +-
arch/x86/kernel/init_task.c | 4 +-
arch/x86/kernel/traps.c | 2 +-
arch/x86/kernel/vmlinux_32.lds.S | 36 ++++++++++----------
arch/x86/kernel/vmlinux_64.lds.S | 26 +++++++-------
arch/xtensa/kernel/head.S | 2 +-
arch/xtensa/kernel/init_task.c | 2 +-
arch/xtensa/kernel/vmlinux.lds.S | 6 ++--
include/asm-frv/init.h | 8 ++--
include/asm-generic/vmlinux.lds.h | 14 ++++----
include/linux/cache.h | 2 +-
include/linux/init.h | 8 ++--
include/linux/linkage.h | 4 +-
include/linux/percpu.h | 10 +++---
include/linux/spinlock.h | 2 +-
kernel/module.c | 2 +-
scripts/mod/modpost.c | 12 +++---
scripts/recordmcount.pl | 6 ++--
137 files changed, 355 insertions(+), 347 deletions(-)

diff --git a/Documentation/mutex-design.txt b/Documentation/mutex-design.txt
index aa60d1f..c91ccc0 100644
--- a/Documentation/mutex-design.txt
+++ b/Documentation/mutex-design.txt
@@ -66,14 +66,14 @@ of advantages of mutexes:

c0377ccb <mutex_lock>:
c0377ccb: f0 ff 08 lock decl (%eax)
- c0377cce: 78 0e js c0377cde <.text.lock.mutex>
+ c0377cce: 78 0e js c0377cde <.text..lock.mutex>
c0377cd0: c3 ret

the unlocking fastpath is equally tight:

c0377cd1 <mutex_unlock>:
c0377cd1: f0 ff 00 lock incl (%eax)
- c0377cd4: 7e 0f jle c0377ce5 <.text.lock.mutex+0x7>
+ c0377cd4: 7e 0f jle c0377ce5 <.text..lock.mutex+0x7>
c0377cd6: c3 ret

- 'struct mutex' semantics are well-defined and are enforced if
diff --git a/arch/alpha/kernel/head.S b/arch/alpha/kernel/head.S
index 7ac1f13..16293d4 100644
--- a/arch/alpha/kernel/head.S
+++ b/arch/alpha/kernel/head.S
@@ -10,7 +10,7 @@
#include <asm/system.h>
#include <asm/asm-offsets.h>

-.section .text.head, "ax"
+.section .text..head, "ax"
.globl swapper_pg_dir
.globl _stext
swapper_pg_dir=SWAPPER_PGD
diff --git a/arch/alpha/kernel/init_task.c b/arch/alpha/kernel/init_task.c
index c2938e5..7929755 100644
--- a/arch/alpha/kernel/init_task.c
+++ b/arch/alpha/kernel/init_task.c
@@ -17,5 +17,5 @@ EXPORT_SYMBOL(init_mm);
EXPORT_SYMBOL(init_task);

union thread_union init_thread_union
- __attribute__((section(".data.init_thread")))
+ __attribute__((section(".data..init_thread")))
= { INIT_THREAD_INFO(init_task) };
diff --git a/arch/alpha/kernel/vmlinux.lds.S b/arch/alpha/kernel/vmlinux.lds.S
index ef37fc1..511f8ca 100644
--- a/arch/alpha/kernel/vmlinux.lds.S
+++ b/arch/alpha/kernel/vmlinux.lds.S
@@ -16,7 +16,7 @@ SECTIONS

_text = .; /* Text and read-only data */
.text : {
- *(.text.head)
+ *(.text..head)
TEXT_TEXT
SCHED_TEXT
LOCK_TEXT
@@ -93,18 +93,18 @@ SECTIONS
/* Freed after init ends here */

/* Note 2 page alignment above. */
- .data.init_thread : {
- *(.data.init_thread)
+ .data..init_thread : {
+ *(.data..init_thread)
}

. = ALIGN(PAGE_SIZE);
- .data.page_aligned : {
- *(.data.page_aligned)
+ .data..page_aligned : {
+ *(.data..page_aligned)
}

. = ALIGN(64);
- .data.cacheline_aligned : {
- *(.data.cacheline_aligned)
+ .data..cacheline_aligned : {
+ *(.data..cacheline_aligned)
}

_data = .;
diff --git a/arch/arm/kernel/head-nommu.S b/arch/arm/kernel/head-nommu.S
index cc87e17..fcd93f5 100644
--- a/arch/arm/kernel/head-nommu.S
+++ b/arch/arm/kernel/head-nommu.S
@@ -32,7 +32,7 @@
* numbers for r1.
*
*/
- .section ".text.head", "ax"
+ .section ".text..head", "ax"
ENTRY(stext)
msr cpsr_c, #PSR_F_BIT | PSR_I_BIT | SVC_MODE @ ensure svc mode
@ and irqs disabled
diff --git a/arch/arm/kernel/head.S b/arch/arm/kernel/head.S
index 21e17dc..705e759 100644
--- a/arch/arm/kernel/head.S
+++ b/arch/arm/kernel/head.S
@@ -74,7 +74,7 @@
* crap here - that's what the boot loader (or in extreme, well justified
* circumstances, zImage) is for.
*/
- .section ".text.head", "ax"
+ .section ".text..head", "ax"
ENTRY(stext)
msr cpsr_c, #PSR_F_BIT | PSR_I_BIT | SVC_MODE @ ensure svc mode
@ and irqs disabled
diff --git a/arch/arm/kernel/init_task.c b/arch/arm/kernel/init_task.c
index e859af3..0b81b8f 100644
--- a/arch/arm/kernel/init_task.c
+++ b/arch/arm/kernel/init_task.c
@@ -29,7 +29,7 @@ EXPORT_SYMBOL(init_mm);
* The things we do for performance..
*/
union thread_union init_thread_union
- __attribute__((__section__(".data.init_task"))) =
+ __attribute__((__section__(".data..init_task"))) =
{ INIT_THREAD_INFO(init_task) };

/*
diff --git a/arch/arm/kernel/vmlinux.lds.S b/arch/arm/kernel/vmlinux.lds.S
index 0021607..4ec9516 100644
--- a/arch/arm/kernel/vmlinux.lds.S
+++ b/arch/arm/kernel/vmlinux.lds.S
@@ -23,10 +23,10 @@ SECTIONS
#else
. = PAGE_OFFSET + TEXT_OFFSET;
#endif
- .text.head : {
+ .text..head : {
_stext = .;
_sinittext = .;
- *(.text.head)
+ *(.text..head)
}

.init : { /* Init code and data */
@@ -65,8 +65,8 @@ SECTIONS
#endif
. = ALIGN(4096);
__per_cpu_start = .;
- *(.data.percpu)
- *(.data.percpu.shared_aligned)
+ *(.data..percpu)
+ *(.data..percpu.shared_aligned)
__per_cpu_end = .;
#ifndef CONFIG_XIP_KERNEL
__init_begin = _stext;
@@ -125,7 +125,7 @@ SECTIONS
* first, the init task union, aligned
* to an 8192 byte boundary.
*/
- *(.data.init_task)
+ *(.data..init_task)

#ifdef CONFIG_XIP_KERNEL
. = ALIGN(4096);
@@ -137,7 +137,7 @@ SECTIONS

. = ALIGN(4096);
__nosave_begin = .;
- *(.data.nosave)
+ *(.data..nosave)
. = ALIGN(4096);
__nosave_end = .;

@@ -145,7 +145,7 @@ SECTIONS
* then the cacheline aligned data
*/
. = ALIGN(32);
- *(.data.cacheline_aligned)
+ *(.data..cacheline_aligned)

/*
* The exception fixup table (might need resorting at runtime)
diff --git a/arch/arm/mm/proc-v6.S b/arch/arm/mm/proc-v6.S
index f0cc599..39d8dd1 100644
--- a/arch/arm/mm/proc-v6.S
+++ b/arch/arm/mm/proc-v6.S
@@ -132,7 +132,7 @@ cpu_v6_name:
.asciz "ARMv6-compatible processor"
.align

- .section ".text.init", #alloc, #execinstr
+ .section ".text..init", #alloc, #execinstr

/*
* __v6_setup
diff --git a/arch/arm/mm/proc-v7.S b/arch/arm/mm/proc-v7.S
index d1ebec4..f03c14c 100644
--- a/arch/arm/mm/proc-v7.S
+++ b/arch/arm/mm/proc-v7.S
@@ -153,7 +153,7 @@ cpu_v7_name:
.ascii "ARMv7 Processor"
.align

- .section ".text.init", #alloc, #execinstr
+ .section ".text..init", #alloc, #execinstr

/*
* __v7_setup
diff --git a/arch/arm/mm/tlb-v6.S b/arch/arm/mm/tlb-v6.S
index 20f84bb..7e9a62a 100644
--- a/arch/arm/mm/tlb-v6.S
+++ b/arch/arm/mm/tlb-v6.S
@@ -87,7 +87,7 @@ ENTRY(v6wbi_flush_kern_tlb_range)
mcr p15, 0, r2, c7, c5, 4 @ prefetch flush
mov pc, lr

- .section ".text.init", #alloc, #execinstr
+ .section ".text..init", #alloc, #execinstr

.type v6wbi_tlb_fns, #object
ENTRY(v6wbi_tlb_fns)
diff --git a/arch/arm/mm/tlb-v7.S b/arch/arm/mm/tlb-v7.S
index 24ba510..875a0bd 100644
--- a/arch/arm/mm/tlb-v7.S
+++ b/arch/arm/mm/tlb-v7.S
@@ -80,7 +80,7 @@ ENTRY(v7wbi_flush_kern_tlb_range)
mov pc, lr
ENDPROC(v7wbi_flush_kern_tlb_range)

- .section ".text.init", #alloc, #execinstr
+ .section ".text..init", #alloc, #execinstr

.type v7wbi_tlb_fns, #object
ENTRY(v7wbi_tlb_fns)
diff --git a/arch/avr32/kernel/init_task.c b/arch/avr32/kernel/init_task.c
index 993d56e..4678bc6 100644
--- a/arch/avr32/kernel/init_task.c
+++ b/arch/avr32/kernel/init_task.c
@@ -23,7 +23,7 @@ EXPORT_SYMBOL(init_mm);
* Initial thread structure. Must be aligned on an 8192-byte boundary.
*/
union thread_union init_thread_union
- __attribute__((__section__(".data.init_task"))) =
+ __attribute__((__section__(".data..init_task"))) =
{ INIT_THREAD_INFO(init_task) };

/*
diff --git a/arch/avr32/kernel/vmlinux.lds.S b/arch/avr32/kernel/vmlinux.lds.S
index 7910d41..5e73a02 100644
--- a/arch/avr32/kernel/vmlinux.lds.S
+++ b/arch/avr32/kernel/vmlinux.lds.S
@@ -95,15 +95,15 @@ SECTIONS
/*
* First, the init task union, aligned to an 8K boundary.
*/
- *(.data.init_task)
+ *(.data..init_task)

/* Then, the page-aligned data */
. = ALIGN(PAGE_SIZE);
- *(.data.page_aligned)
+ *(.data..page_aligned)

/* Then, the cacheline aligned data */
. = ALIGN(L1_CACHE_BYTES);
- *(.data.cacheline_aligned)
+ *(.data..cacheline_aligned)

/* And the rest... */
*(.data.rel*)
diff --git a/arch/avr32/mm/init.c b/arch/avr32/mm/init.c
index e819fa6..533a011 100644
--- a/arch/avr32/mm/init.c
+++ b/arch/avr32/mm/init.c
@@ -24,7 +24,7 @@
#include <asm/setup.h>
#include <asm/sections.h>

-#define __page_aligned __attribute__((section(".data.page_aligned")))
+#define __page_aligned __attribute__((section(".data..page_aligned")))

DEFINE_PER_CPU(struct mmu_gather, mmu_gathers);

diff --git a/arch/blackfin/kernel/vmlinux.lds.S b/arch/blackfin/kernel/vmlinux.lds.S
index 4b4341d..027ff23 100644
--- a/arch/blackfin/kernel/vmlinux.lds.S
+++ b/arch/blackfin/kernel/vmlinux.lds.S
@@ -94,7 +94,7 @@ SECTIONS
__sdata = .;
/* This gets done first, so the glob doesn't suck it in */
. = ALIGN(32);
- *(.data.cacheline_aligned)
+ *(.data..cacheline_aligned)

#if !L1_DATA_A_LENGTH
. = ALIGN(32);
diff --git a/arch/cris/kernel/process.c b/arch/cris/kernel/process.c
index 60816e8..d31e27e 100644
--- a/arch/cris/kernel/process.c
+++ b/arch/cris/kernel/process.c
@@ -51,7 +51,7 @@ EXPORT_SYMBOL(init_mm);
* "init_task" linker map entry..
*/
union thread_union init_thread_union
- __attribute__((__section__(".data.init_task"))) =
+ __attribute__((__section__(".data..init_task"))) =
{ INIT_THREAD_INFO(init_task) };

/*
diff --git a/arch/cris/kernel/vmlinux.lds.S b/arch/cris/kernel/vmlinux.lds.S
index 0d2adfc..f603bc8 100644
--- a/arch/cris/kernel/vmlinux.lds.S
+++ b/arch/cris/kernel/vmlinux.lds.S
@@ -68,7 +68,7 @@ SECTIONS
_edata = . ;

. = ALIGN(PAGE_SIZE); /* init_task and stack, must be aligned. */
- .data.init_task : { *(.data.init_task) }
+ .data..init_task : { *(.data..init_task) }

. = ALIGN(PAGE_SIZE); /* Init code and data. */
__init_begin = .;
diff --git a/arch/frv/kernel/break.S b/arch/frv/kernel/break.S
index bd0bdf9..cbb6958 100644
--- a/arch/frv/kernel/break.S
+++ b/arch/frv/kernel/break.S
@@ -21,7 +21,7 @@
#
# the break handler has its own stack
#
- .section .bss.stack
+ .section .bss..stack
.globl __break_user_context
.balign THREAD_SIZE
__break_stack:
@@ -63,7 +63,7 @@ __break_trace_through_exceptions:
# entry point for Break Exceptions/Interrupts
#
###############################################################################
- .section .text.break
+ .section .text..break
.balign 4
.globl __entry_break
__entry_break:
diff --git a/arch/frv/kernel/entry.S b/arch/frv/kernel/entry.S
index 99060ab..fe5479b 100644
--- a/arch/frv/kernel/entry.S
+++ b/arch/frv/kernel/entry.S
@@ -38,7 +38,7 @@

#define nr_syscalls ((syscall_table_size)/4)

- .section .text.entry
+ .section .text..entry
.balign 4

.macro LEDS val
diff --git a/arch/frv/kernel/head-mmu-fr451.S b/arch/frv/kernel/head-mmu-fr451.S
index c8f210d..5900967 100644
--- a/arch/frv/kernel/head-mmu-fr451.S
+++ b/arch/frv/kernel/head-mmu-fr451.S
@@ -31,7 +31,7 @@
#define __400_LCR 0xfe000100
#define __400_LSBR 0xfe000c00

- .section .text.init,"ax"
+ .section .text..init,"ax"
.balign 4

###############################################################################
diff --git a/arch/frv/kernel/head-uc-fr401.S b/arch/frv/kernel/head-uc-fr401.S
index ee282be..ecba176 100644
--- a/arch/frv/kernel/head-uc-fr401.S
+++ b/arch/frv/kernel/head-uc-fr401.S
@@ -30,7 +30,7 @@
#define __400_LCR 0xfe000100
#define __400_LSBR 0xfe000c00

- .section .text.init,"ax"
+ .section .text..init,"ax"
.balign 4

###############################################################################
diff --git a/arch/frv/kernel/head-uc-fr451.S b/arch/frv/kernel/head-uc-fr451.S
index b10d9c8..ed9dd17 100644
--- a/arch/frv/kernel/head-uc-fr451.S
+++ b/arch/frv/kernel/head-uc-fr451.S
@@ -30,7 +30,7 @@
#define __400_LCR 0xfe000100
#define __400_LSBR 0xfe000c00

- .section .text.init,"ax"
+ .section .text..init,"ax"
.balign 4

###############################################################################
diff --git a/arch/frv/kernel/head-uc-fr555.S b/arch/frv/kernel/head-uc-fr555.S
index 39937c1..ee46b0b 100644
--- a/arch/frv/kernel/head-uc-fr555.S
+++ b/arch/frv/kernel/head-uc-fr555.S
@@ -29,7 +29,7 @@
#define __551_LCR 0xfeff1100
#define __551_LSBR 0xfeff1c00

- .section .text.init,"ax"
+ .section .text..init,"ax"
.balign 4

###############################################################################
diff --git a/arch/frv/kernel/head.S b/arch/frv/kernel/head.S
index fecf751..35e6391 100644
--- a/arch/frv/kernel/head.S
+++ b/arch/frv/kernel/head.S
@@ -27,7 +27,7 @@
# command line string
#
###############################################################################
- .section .text.head,"ax"
+ .section .text..head,"ax"
.balign 4

.globl _boot, __head_reference
@@ -541,7 +541,7 @@ __head_end:
.size _boot, .-_boot

# provide a point for GDB to place a break
- .section .text.start,"ax"
+ .section .text..start,"ax"
.globl _start
.balign 4
_start:
diff --git a/arch/frv/kernel/init_task.c b/arch/frv/kernel/init_task.c
index 29429a8..f57ec19 100644
--- a/arch/frv/kernel/init_task.c
+++ b/arch/frv/kernel/init_task.c
@@ -24,7 +24,7 @@ EXPORT_SYMBOL(init_mm);
* "init_task" linker map entry..
*/
union thread_union init_thread_union
- __attribute__((__section__(".data.init_task"))) =
+ __attribute__((__section__(".data..init_task"))) =
{ INIT_THREAD_INFO(init_task) };

/*
diff --git a/arch/frv/kernel/vmlinux.lds.S b/arch/frv/kernel/vmlinux.lds.S
index b95c4ea..a7bcf9b 100644
--- a/arch/frv/kernel/vmlinux.lds.S
+++ b/arch/frv/kernel/vmlinux.lds.S
@@ -26,7 +26,7 @@ SECTIONS

_sinittext = .;
.init.text : {
- *(.text.head)
+ *(.text..head)
#ifndef CONFIG_DEBUG_INFO
INIT_TEXT
EXIT_TEXT
@@ -71,13 +71,13 @@ SECTIONS

/* put sections together that have massive alignment issues */
. = ALIGN(THREAD_SIZE);
- .data.init_task : {
+ .data..init_task : {
/* init task record & stack */
- *(.data.init_task)
+ *(.data..init_task)
}

. = ALIGN(L1_CACHE_BYTES);
- .data.cacheline_aligned : { *(.data.cacheline_aligned) }
+ .data..cacheline_aligned : { *(.data..cacheline_aligned) }

.trap : {
/* trap table management - read entry-table.S before modifying */
@@ -94,10 +94,10 @@ SECTIONS
_text = .;
_stext = .;
.text : {
- *(.text.start)
- *(.text.entry)
- *(.text.break)
- *(.text.tlbmiss)
+ *(.text..start)
+ *(.text..entry)
+ *(.text..break)
+ *(.text..tlbmiss)
TEXT_TEXT
SCHED_TEXT
LOCK_TEXT
@@ -152,7 +152,7 @@ SECTIONS

.sbss : { *(.sbss .sbss.*) }
.bss : { *(.bss .bss.*) }
- .bss.stack : { *(.bss) }
+ .bss..stack : { *(.bss) }

__bss_stop = .;
_end = . ;
diff --git a/arch/frv/mm/tlb-miss.S b/arch/frv/mm/tlb-miss.S
index 0764348..0f41b41 100644
--- a/arch/frv/mm/tlb-miss.S
+++ b/arch/frv/mm/tlb-miss.S
@@ -16,7 +16,7 @@
#include <asm/highmem.h>
#include <asm/spr-regs.h>

- .section .text.tlbmiss
+ .section .text..tlbmiss
.balign 4

.globl __entry_insn_mmu_miss
diff --git a/arch/h8300/boot/compressed/head.S b/arch/h8300/boot/compressed/head.S
index 985a81a..10e9a2d 100644
--- a/arch/h8300/boot/compressed/head.S
+++ b/arch/h8300/boot/compressed/head.S
@@ -9,7 +9,7 @@

#define SRAM_START 0xff4000

- .section .text.startup
+ .section .text..startup
.global startup
startup:
mov.l #SRAM_START+0x8000, sp
diff --git a/arch/h8300/boot/compressed/vmlinux.lds b/arch/h8300/boot/compressed/vmlinux.lds
index 65e2a0d..a0a3a0e 100644
--- a/arch/h8300/boot/compressed/vmlinux.lds
+++ b/arch/h8300/boot/compressed/vmlinux.lds
@@ -4,7 +4,7 @@ SECTIONS
{
__stext = . ;
__text = .;
- *(.text.startup)
+ *(.text..startup)
*(.text)
__etext = . ;
}
diff --git a/arch/h8300/kernel/init_task.c b/arch/h8300/kernel/init_task.c
index cb5dc55..fb473b1 100644
--- a/arch/h8300/kernel/init_task.c
+++ b/arch/h8300/kernel/init_task.c
@@ -36,6 +36,6 @@ EXPORT_SYMBOL(init_task);
* "init_task" linker map entry..
*/
union thread_union init_thread_union
- __attribute__((__section__(".data.init_task"))) =
+ __attribute__((__section__(".data..init_task"))) =
{ INIT_THREAD_INFO(init_task) };

diff --git a/arch/h8300/kernel/vmlinux.lds.S b/arch/h8300/kernel/vmlinux.lds.S
index 43a87b9..c0e3635 100644
--- a/arch/h8300/kernel/vmlinux.lds.S
+++ b/arch/h8300/kernel/vmlinux.lds.S
@@ -101,7 +101,7 @@ SECTIONS
___data_start = . ;

. = ALIGN(0x2000) ;
- *(.data.init_task)
+ *(.data..init_task)
. = ALIGN(0x4) ;
DATA_DATA
. = ALIGN(0x4) ;
diff --git a/arch/ia64/include/asm/asmmacro.h b/arch/ia64/include/asm/asmmacro.h
index c1642fd..3ab6d75 100644
--- a/arch/ia64/include/asm/asmmacro.h
+++ b/arch/ia64/include/asm/asmmacro.h
@@ -70,12 +70,12 @@ name:
* path (ivt.S - TLB miss processing) or in places where it might not be
* safe to use a "tpa" instruction (mca_asm.S - error recovery).
*/
- .section ".data.patch.vtop", "a" // declare section & section attributes
+ .section ".data..patch.vtop", "a" // declare section & section attributes
.previous

#define LOAD_PHYSICAL(pr, reg, obj) \
[1:](pr)movl reg = obj; \
- .xdata4 ".data.patch.vtop", 1b-.
+ .xdata4 ".data..patch.vtop", 1b-.

/*
* For now, we always put in the McKinley E9 workaround. On CPUs that don't need it,
@@ -84,11 +84,11 @@ name:
#define DO_MCKINLEY_E9_WORKAROUND

#ifdef DO_MCKINLEY_E9_WORKAROUND
- .section ".data.patch.mckinley_e9", "a"
+ .section ".data..patch.mckinley_e9", "a"
.previous
/* workaround for Itanium 2 Errata 9: */
# define FSYS_RETURN \
- .xdata4 ".data.patch.mckinley_e9", 1f-.; \
+ .xdata4 ".data..patch.mckinley_e9", 1f-.; \
1:{ .mib; \
nop.m 0; \
mov r16=ar.pfs; \
@@ -107,11 +107,11 @@ name:
* If physical stack register size is different from DEF_NUM_STACK_REG,
* dynamically patch the kernel for correct size.
*/
- .section ".data.patch.phys_stack_reg", "a"
+ .section ".data..patch.phys_stack_reg", "a"
.previous
#define LOAD_PHYS_STACK_REG_SIZE(reg) \
[1:] adds reg=IA64_NUM_PHYS_STACK_REG*8+8,r0; \
- .xdata4 ".data.patch.phys_stack_reg", 1b-.
+ .xdata4 ".data..patch.phys_stack_reg", 1b-.

/*
* Up until early 2004, use of .align within a function caused bad unwind info.
diff --git a/arch/ia64/include/asm/cache.h b/arch/ia64/include/asm/cache.h
index e7482bd..988254a 100644
--- a/arch/ia64/include/asm/cache.h
+++ b/arch/ia64/include/asm/cache.h
@@ -24,6 +24,6 @@
# define SMP_CACHE_BYTES (1 << 3)
#endif

-#define __read_mostly __attribute__((__section__(".data.read_mostly")))
+#define __read_mostly __attribute__((__section__(".data..read_mostly")))

#endif /* _ASM_IA64_CACHE_H */
diff --git a/arch/ia64/include/asm/percpu.h b/arch/ia64/include/asm/percpu.h
index 77f30b6..d9ec30b 100644
--- a/arch/ia64/include/asm/percpu.h
+++ b/arch/ia64/include/asm/percpu.h
@@ -27,7 +27,7 @@ extern void *per_cpu_init(void);

#else /* ! SMP */

-#define PER_CPU_ATTRIBUTES __attribute__((__section__(".data.percpu")))
+#define PER_CPU_ATTRIBUTES __attribute__((__section__(".data..percpu")))

#define per_cpu_init() (__phys_per_cpu_start)

diff --git a/arch/ia64/kernel/Makefile b/arch/ia64/kernel/Makefile
index c381ea9..61900c7 100644
--- a/arch/ia64/kernel/Makefile
+++ b/arch/ia64/kernel/Makefile
@@ -72,7 +72,7 @@ GATECFLAGS_gate-syms.o = -r
$(obj)/gate-syms.o: $(obj)/gate.lds $(obj)/gate.o FORCE
$(call if_changed,gate)

-# gate-data.o contains the gate DSO image as data in section .data.gate.
+# gate-data.o contains the gate DSO image as data in section .data..gate.
# We must build gate.so before we can assemble it.
# Note: kbuild does not track this dependency due to usage of .incbin
$(obj)/gate-data.o: $(obj)/gate.so
diff --git a/arch/ia64/kernel/gate-data.S b/arch/ia64/kernel/gate-data.S
index 258c0a3..b3ef1c7 100644
--- a/arch/ia64/kernel/gate-data.S
+++ b/arch/ia64/kernel/gate-data.S
@@ -1,3 +1,3 @@
- .section .data.gate, "aw"
+ .section .data..gate, "aw"

.incbin "arch/ia64/kernel/gate.so"
diff --git a/arch/ia64/kernel/gate.S b/arch/ia64/kernel/gate.S
index 74b1ccc..8ecaf37 100644
--- a/arch/ia64/kernel/gate.S
+++ b/arch/ia64/kernel/gate.S
@@ -20,18 +20,18 @@
* to targets outside the shared object) and to avoid multi-phase kernel builds, we
* simply create minimalistic "patch lists" in special ELF sections.
*/
- .section ".data.patch.fsyscall_table", "a"
+ .section ".data..patch.fsyscall_table", "a"
.previous
#define LOAD_FSYSCALL_TABLE(reg) \
[1:] movl reg=0; \
- .xdata4 ".data.patch.fsyscall_table", 1b-.
+ .xdata4 ".data..patch.fsyscall_table", 1b-.

- .section ".data.patch.brl_fsys_bubble_down", "a"
+ .section ".data..patch.brl_fsys_bubble_down", "a"
.previous
#define BRL_COND_FSYS_BUBBLE_DOWN(pr) \
[1:](pr)brl.cond.sptk 0; \
;; \
- .xdata4 ".data.patch.brl_fsys_bubble_down", 1b-.
+ .xdata4 ".data..patch.brl_fsys_bubble_down", 1b-.

GLOBAL_ENTRY(__kernel_syscall_via_break)
.prologue
diff --git a/arch/ia64/kernel/gate.lds.S b/arch/ia64/kernel/gate.lds.S
index 3cb1abc..9ef2444 100644
--- a/arch/ia64/kernel/gate.lds.S
+++ b/arch/ia64/kernel/gate.lds.S
@@ -32,21 +32,21 @@ SECTIONS
*/
. = GATE_ADDR + 0x600;

- .data.patch : {
+ .data..patch : {
__start_gate_mckinley_e9_patchlist = .;
- *(.data.patch.mckinley_e9)
+ *(.data..patch.mckinley_e9)
__end_gate_mckinley_e9_patchlist = .;

__start_gate_vtop_patchlist = .;
- *(.data.patch.vtop)
+ *(.data..patch.vtop)
__end_gate_vtop_patchlist = .;

__start_gate_fsyscall_patchlist = .;
- *(.data.patch.fsyscall_table)
+ *(.data..patch.fsyscall_table)
__end_gate_fsyscall_patchlist = .;

__start_gate_brl_fsys_bubble_down_patchlist = .;
- *(.data.patch.brl_fsys_bubble_down)
+ *(.data..patch.brl_fsys_bubble_down)
__end_gate_brl_fsys_bubble_down_patchlist = .;
} :readable

diff --git a/arch/ia64/kernel/head.S b/arch/ia64/kernel/head.S
index 59301c4..3617aaf 100644
--- a/arch/ia64/kernel/head.S
+++ b/arch/ia64/kernel/head.S
@@ -181,7 +181,7 @@ swapper_pg_dir:
halt_msg:
stringz "Halting kernel\n"

- .section .text.head,"ax"
+ .section .text..head,"ax"

.global start_ap

diff --git a/arch/ia64/kernel/init_task.c b/arch/ia64/kernel/init_task.c
index 5b0e830..8a5028c 100644
--- a/arch/ia64/kernel/init_task.c
+++ b/arch/ia64/kernel/init_task.c
@@ -27,7 +27,7 @@ EXPORT_SYMBOL(init_mm);
* Initial task structure.
*
* We need to make sure that this is properly aligned due to the way process stacks are
- * handled. This is done by having a special ".data.init_task" section...
+ * handled. This is done by having a special ".data..init_task" section...
*/
#define init_thread_info init_task_mem.s.thread_info

@@ -37,7 +37,7 @@ union {
struct thread_info thread_info;
} s;
unsigned long stack[KERNEL_STACK_SIZE/sizeof (unsigned long)];
-} init_task_mem asm ("init_task") __attribute__((section(".data.init_task"))) = {{
+} init_task_mem asm ("init_task") __attribute__((section(".data..init_task"))) = {{
.task = INIT_TASK(init_task_mem.s.task),
.thread_info = INIT_THREAD_INFO(init_task_mem.s.task)
}};
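
To illustrate the collision these renames avoid (a sketch, not part of the
patch): with -fdata-sections, gcc emits each object into its own section
named after the symbol, so an ordinary C identifier can reproduce one of
the old magic section names exactly. The extra dot keeps the two
namespaces disjoint, since '.' cannot appear in an identifier:

    /* hypothetical file compiled with -fdata-sections */
    static int init_task;    /* gcc emits this into ".data.init_task",
                              * which was the magic init-task section */
    static int special __attribute__((__section__(".data..init_task")));
                             /* the renamed magic section cannot collide */
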
diff --git a/arch/ia64/kernel/ivt.S b/arch/ia64/kernel/ivt.S
index f675d8e..7214046 100644
--- a/arch/ia64/kernel/ivt.S
+++ b/arch/ia64/kernel/ivt.S
@@ -83,7 +83,7 @@
mov r19=n;; /* prepare to save predicates */ \
br.sptk.many dispatch_to_fault_handler

- .section .text.ivt,"ax"
+ .section .text..ivt,"ax"

.align 32768 // align on 32KB boundary
.global ia64_ivt
diff --git a/arch/ia64/kernel/minstate.h b/arch/ia64/kernel/minstate.h
index 292e214..d56753a 100644
--- a/arch/ia64/kernel/minstate.h
+++ b/arch/ia64/kernel/minstate.h
@@ -16,7 +16,7 @@
#define ACCOUNT_SYS_ENTER
#endif

-.section ".data.patch.rse", "a"
+.section ".data..patch.rse", "a"
.previous

/*
@@ -215,7 +215,7 @@
(pUStk) extr.u r17=r18,3,6; \
(pUStk) sub r16=r18,r22; \
[1:](pKStk) br.cond.sptk.many 1f; \
- .xdata4 ".data.patch.rse",1b-. \
+ .xdata4 ".data..patch.rse",1b-. \
;; \
cmp.ge p6,p7 = 33,r17; \
;; \
diff --git a/arch/ia64/kernel/paravirtentry.S b/arch/ia64/kernel/paravirtentry.S
index 2f42fcb..d778e8b 100644
--- a/arch/ia64/kernel/paravirtentry.S
+++ b/arch/ia64/kernel/paravirtentry.S
@@ -25,7 +25,7 @@
#include "entry.h"

#define DATA8(sym, init_value) \
- .pushsection .data.read_mostly ; \
+ .pushsection .data..read_mostly ; \
.align 8 ; \
.global sym ; \
sym: ; \
diff --git a/arch/ia64/kernel/vmlinux.lds.S b/arch/ia64/kernel/vmlinux.lds.S
index 10a7d47..1596906 100644
--- a/arch/ia64/kernel/vmlinux.lds.S
+++ b/arch/ia64/kernel/vmlinux.lds.S
@@ -8,7 +8,7 @@

#define IVT_TEXT \
VMLINUX_SYMBOL(__start_ivt_text) = .; \
- *(.text.ivt) \
+ *(.text..ivt) \
VMLINUX_SYMBOL(__end_ivt_text) = .;

OUTPUT_FORMAT("elf64-ia64-little")
@@ -51,13 +51,13 @@ SECTIONS
KPROBES_TEXT
*(.gnu.linkonce.t*)
}
- .text.head : AT(ADDR(.text.head) - LOAD_OFFSET)
- { *(.text.head) }
+ .text..head : AT(ADDR(.text..head) - LOAD_OFFSET)
+ { *(.text..head) }
.text2 : AT(ADDR(.text2) - LOAD_OFFSET)
{ *(.text2) }
#ifdef CONFIG_SMP
- .text.lock : AT(ADDR(.text.lock) - LOAD_OFFSET)
- { *(.text.lock) }
+ .text..lock : AT(ADDR(.text..lock) - LOAD_OFFSET)
+ { *(.text..lock) }
#endif
_etext = .;

@@ -84,10 +84,10 @@ SECTIONS
__stop___mca_table = .;
}

- .data.patch.phys_stack_reg : AT(ADDR(.data.patch.phys_stack_reg) - LOAD_OFFSET)
+ .data..patch.phys_stack_reg : AT(ADDR(.data..patch.phys_stack_reg) - LOAD_OFFSET)
{
__start___phys_stack_reg_patchlist = .;
- *(.data.patch.phys_stack_reg)
+ *(.data..patch.phys_stack_reg)
__end___phys_stack_reg_patchlist = .;
}

@@ -148,24 +148,24 @@ SECTIONS
__initcall_end = .;
}

- .data.patch.vtop : AT(ADDR(.data.patch.vtop) - LOAD_OFFSET)
+ .data..patch.vtop : AT(ADDR(.data..patch.vtop) - LOAD_OFFSET)
{
__start___vtop_patchlist = .;
- *(.data.patch.vtop)
+ *(.data..patch.vtop)
__end___vtop_patchlist = .;
}

- .data.patch.rse : AT(ADDR(.data.patch.rse) - LOAD_OFFSET)
+ .data..patch.rse : AT(ADDR(.data..patch.rse) - LOAD_OFFSET)
{
__start___rse_patchlist = .;
- *(.data.patch.rse)
+ *(.data..patch.rse)
__end___rse_patchlist = .;
}

- .data.patch.mckinley_e9 : AT(ADDR(.data.patch.mckinley_e9) - LOAD_OFFSET)
+ .data..patch.mckinley_e9 : AT(ADDR(.data..patch.mckinley_e9) - LOAD_OFFSET)
{
__start___mckinley_e9_bundles = .;
- *(.data.patch.mckinley_e9)
+ *(.data..patch.mckinley_e9)
__end___mckinley_e9_bundles = .;
}

@@ -193,34 +193,34 @@ SECTIONS
__init_end = .;

/* The initial task and kernel stack */
- .data.init_task : AT(ADDR(.data.init_task) - LOAD_OFFSET)
- { *(.data.init_task) }
+ .data..init_task : AT(ADDR(.data..init_task) - LOAD_OFFSET)
+ { *(.data..init_task) }

- .data.page_aligned : AT(ADDR(.data.page_aligned) - LOAD_OFFSET)
+ .data..page_aligned : AT(ADDR(.data..page_aligned) - LOAD_OFFSET)
{ *(__special_page_section)
__start_gate_section = .;
- *(.data.gate)
+ *(.data..gate)
__stop_gate_section = .;
}
. = ALIGN(PAGE_SIZE); /* make sure the gate page doesn't expose
* kernel data
*/

- .data.read_mostly : AT(ADDR(.data.read_mostly) - LOAD_OFFSET)
- { *(.data.read_mostly) }
+ .data..read_mostly : AT(ADDR(.data..read_mostly) - LOAD_OFFSET)
+ { *(.data..read_mostly) }

- .data.cacheline_aligned : AT(ADDR(.data.cacheline_aligned) - LOAD_OFFSET)
- { *(.data.cacheline_aligned) }
+ .data..cacheline_aligned : AT(ADDR(.data..cacheline_aligned) - LOAD_OFFSET)
+ { *(.data..cacheline_aligned) }

/* Per-cpu data: */
percpu : { } :percpu
. = ALIGN(PERCPU_PAGE_SIZE);
__phys_per_cpu_start = .;
- .data.percpu PERCPU_ADDR : AT(__phys_per_cpu_start - LOAD_OFFSET)
+ .data..percpu PERCPU_ADDR : AT(__phys_per_cpu_start - LOAD_OFFSET)
{
__per_cpu_start = .;
- *(.data.percpu)
- *(.data.percpu.shared_aligned)
+ *(.data..percpu)
+ *(.data..percpu.shared_aligned)
__per_cpu_end = .;
}
. = __phys_per_cpu_start + PERCPU_PAGE_SIZE; /* ensure percpu data fits
diff --git a/arch/ia64/kvm/vmm_ivt.S b/arch/ia64/kvm/vmm_ivt.S
index 3ef1a01..c5ab23b 100644
--- a/arch/ia64/kvm/vmm_ivt.S
+++ b/arch/ia64/kvm/vmm_ivt.S
@@ -104,7 +104,7 @@ GLOBAL_ENTRY(kvm_vmm_panic)
br.call.sptk.many b6=vmm_panic_handler;
END(kvm_vmm_panic)

- .section .text.ivt,"ax"
+ .section .text..ivt,"ax"

.align 32768 // align on 32KB boundary
.global kvm_ia64_ivt
diff --git a/arch/ia64/xen/xensetup.S b/arch/ia64/xen/xensetup.S
index 28fed1f..28def53 100644
--- a/arch/ia64/xen/xensetup.S
+++ b/arch/ia64/xen/xensetup.S
@@ -14,7 +14,7 @@
#include <linux/init.h>
#include <xen/interface/elfnote.h>

- .section .data.read_mostly
+ .section .data..read_mostly
.align 8
.global xen_domain_type
xen_domain_type:
diff --git a/arch/m32r/kernel/head.S b/arch/m32r/kernel/head.S
index 9091606..2bac669 100644
--- a/arch/m32r/kernel/head.S
+++ b/arch/m32r/kernel/head.S
@@ -23,7 +23,7 @@ __INITDATA
/*
* References to members of the boot_cpu_data structure.
*/
-.section .text.head, "ax"
+.section .text..head, "ax"
.global start_kernel
.global __bss_start
.global _end
diff --git a/arch/m32r/kernel/init_task.c b/arch/m32r/kernel/init_task.c
index 016885c..7c24cec 100644
--- a/arch/m32r/kernel/init_task.c
+++ b/arch/m32r/kernel/init_task.c
@@ -25,7 +25,7 @@ EXPORT_SYMBOL(init_mm);
* "init_task" linker map entry..
*/
union thread_union init_thread_union
- __attribute__((__section__(".data.init_task"))) =
+ __attribute__((__section__(".data..init_task"))) =
{ INIT_THREAD_INFO(init_task) };

/*
diff --git a/arch/m32r/kernel/vmlinux.lds.S b/arch/m32r/kernel/vmlinux.lds.S
index 9db05df..2bcc37e 100644
--- a/arch/m32r/kernel/vmlinux.lds.S
+++ b/arch/m32r/kernel/vmlinux.lds.S
@@ -27,7 +27,7 @@ SECTIONS
_text = .; /* Text and read-only data */
.boot : { *(.boot) } = 0
.text : {
- *(.text.head)
+ *(.text..head)
TEXT_TEXT
SCHED_TEXT
LOCK_TEXT
@@ -57,17 +57,17 @@ SECTIONS

. = ALIGN(4096);
__nosave_begin = .;
- .data_nosave : { *(.data.nosave) }
+ .data_nosave : { *(.data..nosave) }
. = ALIGN(4096);
__nosave_end = .;

. = ALIGN(32);
- .data.cacheline_aligned : { *(.data.cacheline_aligned) }
+ .data..cacheline_aligned : { *(.data..cacheline_aligned) }

_edata = .; /* End of data section */

. = ALIGN(8192); /* init_task */
- .data.init_task : { *(.data.init_task) }
+ .data..init_task : { *(.data..init_task) }

/* will be freed after init */
. = ALIGN(4096); /* Init code and data */
diff --git a/arch/m68k/kernel/head.S b/arch/m68k/kernel/head.S
index f513f53..bdb247e 100644
--- a/arch/m68k/kernel/head.S
+++ b/arch/m68k/kernel/head.S
@@ -577,7 +577,7 @@ func_define putn,1
#endif
.endm

-.section ".text.head","ax"
+.section ".text..head","ax"
ENTRY(_stext)
/*
* Version numbers of the bootinfo interface
diff --git a/arch/m68k/kernel/process.c b/arch/m68k/kernel/process.c
index 632ce01..b3c78d9 100644
--- a/arch/m68k/kernel/process.c
+++ b/arch/m68k/kernel/process.c
@@ -47,7 +47,7 @@ struct mm_struct init_mm = INIT_MM(init_mm);
EXPORT_SYMBOL(init_mm);

union thread_union init_thread_union
-__attribute__((section(".data.init_task"), aligned(THREAD_SIZE)))
+__attribute__((section(".data..init_task"), aligned(THREAD_SIZE)))
= { INIT_THREAD_INFO(init_task) };

/* initial task structure */
diff --git a/arch/m68k/kernel/sun3-head.S b/arch/m68k/kernel/sun3-head.S
index aad0159..5a6714b 100644
--- a/arch/m68k/kernel/sun3-head.S
+++ b/arch/m68k/kernel/sun3-head.S
@@ -29,7 +29,7 @@ kernel_pmd_table: .skip 0x2000
.globl kernel_pg_dir
.equ kernel_pg_dir,kernel_pmd_table

- .section .text.head
+ .section .text..head
ENTRY(_stext)
ENTRY(_start)

diff --git a/arch/m68k/kernel/vmlinux-std.lds b/arch/m68k/kernel/vmlinux-std.lds
index f846d4e..35b6da8 100644
--- a/arch/m68k/kernel/vmlinux-std.lds
+++ b/arch/m68k/kernel/vmlinux-std.lds
@@ -12,7 +12,7 @@ SECTIONS
. = 0x1000;
_text = .; /* Text and read-only data */
.text : {
- *(.text.head)
+ *(.text..head)
TEXT_TEXT
SCHED_TEXT
LOCK_TEXT
@@ -35,7 +35,7 @@ SECTIONS
}

. = ALIGN(16);
- .data.cacheline_aligned : { *(.data.cacheline_aligned) }
+ .data..cacheline_aligned : { *(.data..cacheline_aligned) }

.bss : { *(.bss) } /* BSS */

@@ -78,7 +78,7 @@ SECTIONS
. = ALIGN(8192);
__init_end = .;

- .data.init_task : { *(.data.init_task) } /* The initial task and kernel stack */
+ .data..init_task : { *(.data..init_task) } /* The initial task and kernel stack */

_end = . ;

diff --git a/arch/m68k/kernel/vmlinux-sun3.lds b/arch/m68k/kernel/vmlinux-sun3.lds
index d9368c0..e6ce561 100644
--- a/arch/m68k/kernel/vmlinux-sun3.lds
+++ b/arch/m68k/kernel/vmlinux-sun3.lds
@@ -12,7 +12,7 @@ SECTIONS
. = 0xE002000;
_text = .; /* Text and read-only data */
.text : {
- *(.text.head)
+ *(.text..head)
TEXT_TEXT
SCHED_TEXT
LOCK_TEXT
@@ -70,7 +70,7 @@ __init_begin = .;
#endif
. = ALIGN(PAGE_SIZE);
__init_end = .;
- .data.init.task : { *(.data.init_task) }
+ .data..init.task : { *(.data..init_task) }


.bss : { *(.bss) } /* BSS */
diff --git a/arch/m68knommu/kernel/init_task.c b/arch/m68knommu/kernel/init_task.c
index fe282de..1ea36d6 100644
--- a/arch/m68knommu/kernel/init_task.c
+++ b/arch/m68knommu/kernel/init_task.c
@@ -36,6 +36,6 @@ EXPORT_SYMBOL(init_task);
* "init_task" linker map entry..
*/
union thread_union init_thread_union
- __attribute__((__section__(".data.init_task"))) =
+ __attribute__((__section__(".data..init_task"))) =
{ INIT_THREAD_INFO(init_task) };

diff --git a/arch/m68knommu/kernel/vmlinux.lds.S b/arch/m68knommu/kernel/vmlinux.lds.S
index 69ba9b1..bdf9b1c 100644
--- a/arch/m68knommu/kernel/vmlinux.lds.S
+++ b/arch/m68knommu/kernel/vmlinux.lds.S
@@ -55,7 +55,7 @@ SECTIONS {
.romvec : {
__rom_start = . ;
_romvec = .;
- *(.data.initvect)
+ *(.data..initvect)
} > romvec
#endif

@@ -66,7 +66,7 @@ SECTIONS {
TEXT_TEXT
SCHED_TEXT
LOCK_TEXT
- *(.text.lock)
+ *(.text..lock)

. = ALIGN(16); /* Exception table */
__start___ex_table = .;
@@ -148,7 +148,7 @@ SECTIONS {
_sdata = . ;
DATA_DATA
. = ALIGN(8192) ;
- *(.data.init_task)
+ *(.data..init_task)
_edata = . ;
} > DATA

diff --git a/arch/m68knommu/platform/68360/head-ram.S b/arch/m68knommu/platform/68360/head-ram.S
index 2ef0624..8eb94fb 100644
--- a/arch/m68knommu/platform/68360/head-ram.S
+++ b/arch/m68knommu/platform/68360/head-ram.S
@@ -280,7 +280,7 @@ _dprbase:
* and then overwritten as needed.
*/

-.section ".data.initvect","awx"
+.section ".data..initvect","awx"
.long RAMEND /* Reset: Initial Stack Pointer - 0. */
.long _start /* Reset: Initial Program Counter - 1. */
.long buserr /* Bus Error - 2. */
diff --git a/arch/m68knommu/platform/68360/head-rom.S b/arch/m68knommu/platform/68360/head-rom.S
index 62ecf41..97510e5 100644
--- a/arch/m68knommu/platform/68360/head-rom.S
+++ b/arch/m68knommu/platform/68360/head-rom.S
@@ -291,7 +291,7 @@ _dprbase:
* and then overwritten as needed.
*/

-.section ".data.initvect","awx"
+.section ".data..initvect","awx"
.long RAMEND /* Reset: Initial Stack Pointer - 0. */
.long _start /* Reset: Initial Program Counter - 1. */
.long buserr /* Bus Error - 2. */
diff --git a/arch/mips/kernel/init_task.c b/arch/mips/kernel/init_task.c
index 149cd91..acb71af 100644
--- a/arch/mips/kernel/init_task.c
+++ b/arch/mips/kernel/init_task.c
@@ -26,7 +26,7 @@ EXPORT_SYMBOL(init_mm);
* The things we do for performance..
*/
union thread_union init_thread_union
- __attribute__((__section__(".data.init_task"),
+ __attribute__((__section__(".data..init_task"),
__aligned__(THREAD_SIZE))) =
{ INIT_THREAD_INFO(init_task) };

diff --git a/arch/mips/kernel/vmlinux.lds.S b/arch/mips/kernel/vmlinux.lds.S
index 58738c8..6b302c7 100644
--- a/arch/mips/kernel/vmlinux.lds.S
+++ b/arch/mips/kernel/vmlinux.lds.S
@@ -77,7 +77,7 @@ SECTIONS
* object file alignment. Using 32768
*/
. = ALIGN(_PAGE_SIZE);
- *(.data.init_task)
+ *(.data..init_task)

DATA_DATA
CONSTRUCTORS
@@ -99,14 +99,14 @@ SECTIONS
. = ALIGN(_PAGE_SIZE);
.data_nosave : {
__nosave_begin = .;
- *(.data.nosave)
+ *(.data..nosave)
}
. = ALIGN(_PAGE_SIZE);
__nosave_end = .;

. = ALIGN(1 << CONFIG_MIPS_L1_CACHE_SHIFT);
- .data.cacheline_aligned : {
- *(.data.cacheline_aligned)
+ .data..cacheline_aligned : {
+ *(.data..cacheline_aligned)
}
_edata = .; /* End of data section */

diff --git a/arch/mips/lasat/image/head.S b/arch/mips/lasat/image/head.S
index efb95f2..e0ecda9 100644
--- a/arch/mips/lasat/image/head.S
+++ b/arch/mips/lasat/image/head.S
@@ -1,7 +1,7 @@
#include <asm/lasat/head.h>

.text
- .section .text.start, "ax"
+ .section .text..start, "ax"
.set noreorder
.set mips3

diff --git a/arch/mips/lasat/image/romscript.normal b/arch/mips/lasat/image/romscript.normal
index 988f8ad..0864c96 100644
--- a/arch/mips/lasat/image/romscript.normal
+++ b/arch/mips/lasat/image/romscript.normal
@@ -4,7 +4,7 @@ SECTIONS
{
.text :
{
- *(.text.start)
+ *(.text..start)
}

/* Data in ROM */
diff --git a/arch/mn10300/kernel/head.S b/arch/mn10300/kernel/head.S
index 606bd8c..dd0db5f 100644
--- a/arch/mn10300/kernel/head.S
+++ b/arch/mn10300/kernel/head.S
@@ -19,7 +19,7 @@
#include <asm/param.h>
#include <asm/unit/serial.h>

- .section .text.head,"ax"
+ .section .text..head,"ax"

###############################################################################
#
diff --git a/arch/mn10300/kernel/init_task.c b/arch/mn10300/kernel/init_task.c
index 5ac3566..599840f 100644
--- a/arch/mn10300/kernel/init_task.c
+++ b/arch/mn10300/kernel/init_task.c
@@ -31,7 +31,7 @@ EXPORT_SYMBOL(init_mm);
* "init_task" linker map entry..
*/
union thread_union init_thread_union
- __attribute__((__section__(".data.init_task"))) =
+ __attribute__((__section__(".data..init_task"))) =
{ INIT_THREAD_INFO(init_task) };

/*
diff --git a/arch/mn10300/kernel/vmlinux.lds.S b/arch/mn10300/kernel/vmlinux.lds.S
index b825966..938792a 100644
--- a/arch/mn10300/kernel/vmlinux.lds.S
+++ b/arch/mn10300/kernel/vmlinux.lds.S
@@ -28,7 +28,7 @@ SECTIONS
_text = .; /* Text and read-only data */
.text : {
*(
- .text.head
+ .text..head
.text
)
TEXT_TEXT
@@ -58,25 +58,25 @@ SECTIONS

. = ALIGN(PAGE_SIZE);
__nosave_begin = .;
- .data_nosave : { *(.data.nosave) }
+ .data_nosave : { *(.data..nosave) }
. = ALIGN(PAGE_SIZE);
__nosave_end = .;

. = ALIGN(PAGE_SIZE);
- .data.page_aligned : { *(.data.idt) }
+ .data..page_aligned : { *(.data..idt) }

. = ALIGN(32);
- .data.cacheline_aligned : { *(.data.cacheline_aligned) }
+ .data..cacheline_aligned : { *(.data..cacheline_aligned) }

/* rarely changed data like cpu maps */
. = ALIGN(32);
- .data.read_mostly : AT(ADDR(.data.read_mostly)) {
- *(.data.read_mostly)
+ .data..read_mostly : AT(ADDR(.data..read_mostly)) {
+ *(.data..read_mostly)
_edata = .; /* End of data section */
}

. = ALIGN(THREAD_SIZE); /* init_task */
- .data.init_task : { *(.data.init_task) }
+ .data..init_task : { *(.data..init_task) }

/* might get freed after init */
. = ALIGN(PAGE_SIZE);
@@ -134,7 +134,7 @@ SECTIONS

__bss_start = .; /* BSS */
.bss : {
- *(.bss.page_aligned)
+ *(.bss..page_aligned)
*(.bss)
}
. = ALIGN(4);
diff --git a/arch/parisc/include/asm/cache.h b/arch/parisc/include/asm/cache.h
index 32c2cca..45effe6 100644
--- a/arch/parisc/include/asm/cache.h
+++ b/arch/parisc/include/asm/cache.h
@@ -28,7 +28,7 @@

#define SMP_CACHE_BYTES L1_CACHE_BYTES

-#define __read_mostly __attribute__((__section__(".data.read_mostly")))
+#define __read_mostly __attribute__((__section__(".data..read_mostly")))

void parisc_cache_init(void); /* initializes cache-flushing */
void disable_sr_hashing_asm(int); /* low level support for above */
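
As context for the rename, a minimal sketch of __read_mostly in use
(variable name hypothetical); it groups rarely-written, frequently-read
variables into .data..read_mostly so they share cache lines away from
write-heavy data:

    #include <linux/cache.h>

    /* hypothetical tunable: set once at boot, read in hot paths */
    static int my_threshold __read_mostly = 128;
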
diff --git a/arch/parisc/include/asm/system.h b/arch/parisc/include/asm/system.h
index ee80c92..7ccaf29 100644
--- a/arch/parisc/include/asm/system.h
+++ b/arch/parisc/include/asm/system.h
@@ -174,7 +174,7 @@ static inline void set_eiem(unsigned long val)
})

#ifdef CONFIG_SMP
-# define __lock_aligned __attribute__((__section__(".data.lock_aligned")))
+# define __lock_aligned __attribute__((__section__(".data..lock_aligned")))
#endif

#define arch_align_stack(x) (x)
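
A sketch of __lock_aligned in use (names hypothetical): PA-RISC's
ldcw-based locks require 16-byte-aligned lock words, and the
.data..lock_aligned output section in the linker script below supplies
the matching ALIGN(16):

    #ifdef CONFIG_SMP
    /* hypothetical: each entry is four words wide so consecutive array
     * elements stay 16-byte aligned, mirroring PA-RISC's spinlock type */
    static struct { volatile unsigned int lock[4]; } hash_locks[8] __lock_aligned;
    #endif
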
diff --git a/arch/parisc/kernel/head.S b/arch/parisc/kernel/head.S
index 0e3d9f9..4dbdf0e 100644
--- a/arch/parisc/kernel/head.S
+++ b/arch/parisc/kernel/head.S
@@ -345,7 +345,7 @@ smp_slave_stext:
ENDPROC(stext)

#ifndef CONFIG_64BIT
- .section .data.read_mostly
+ .section .data..read_mostly

.align 4
.export $global$,data
diff --git a/arch/parisc/kernel/init_task.c b/arch/parisc/kernel/init_task.c
index 1e25a45..14394c0 100644
--- a/arch/parisc/kernel/init_task.c
+++ b/arch/parisc/kernel/init_task.c
@@ -48,7 +48,7 @@ EXPORT_SYMBOL(init_mm);
* "init_task" linker map entry..
*/
union thread_union init_thread_union
- __attribute__((aligned(128))) __attribute__((__section__(".data.init_task"))) =
+ __attribute__((aligned(128))) __attribute__((__section__(".data..init_task"))) =
{ INIT_THREAD_INFO(init_task) };

#if PT_NLEVELS == 3
@@ -57,11 +57,11 @@ union thread_union init_thread_union
* guarantee that global objects will be laid out in memory in the same order
* as the order of declaration, so put these in different sections and use
* the linker script to order them. */
-pmd_t pmd0[PTRS_PER_PMD] __attribute__ ((__section__ (".data.vm0.pmd"), aligned(PAGE_SIZE)));
+pmd_t pmd0[PTRS_PER_PMD] __attribute__ ((__section__ (".data..vm0.pmd"), aligned(PAGE_SIZE)));
#endif

-pgd_t swapper_pg_dir[PTRS_PER_PGD] __attribute__ ((__section__ (".data.vm0.pgd"), aligned(PAGE_SIZE)));
-pte_t pg0[PT_INITIAL * PTRS_PER_PTE] __attribute__ ((__section__ (".data.vm0.pte"), aligned(PAGE_SIZE)));
+pgd_t swapper_pg_dir[PTRS_PER_PGD] __attribute__ ((__section__ (".data..vm0.pgd"), aligned(PAGE_SIZE)));
+pte_t pg0[PT_INITIAL * PTRS_PER_PTE] __attribute__ ((__section__ (".data..vm0.pte"), aligned(PAGE_SIZE)));

/*
* Initial task structure.
diff --git a/arch/parisc/kernel/vmlinux.lds.S b/arch/parisc/kernel/vmlinux.lds.S
index 1a3b6cc..b26eb05 100644
--- a/arch/parisc/kernel/vmlinux.lds.S
+++ b/arch/parisc/kernel/vmlinux.lds.S
@@ -94,8 +94,8 @@ SECTIONS

/* rarely changed data like cpu maps */
. = ALIGN(16);
- .data.read_mostly : {
- *(.data.read_mostly)
+ .data..read_mostly : {
+ *(.data..read_mostly)
}

. = ALIGN(L1_CACHE_BYTES);
@@ -106,14 +106,14 @@ SECTIONS
}

. = ALIGN(L1_CACHE_BYTES);
- .data.cacheline_aligned : {
- *(.data.cacheline_aligned)
+ .data..cacheline_aligned : {
+ *(.data..cacheline_aligned)
}

/* PA-RISC locks requires 16-byte alignment */
. = ALIGN(16);
- .data.lock_aligned : {
- *(.data.lock_aligned)
+ .data..lock_aligned : {
+ *(.data..lock_aligned)
}

/* nosave data is really only used for software suspend...it's here
@@ -122,7 +122,7 @@ SECTIONS
. = ALIGN(PAGE_SIZE);
__nosave_begin = .;
.data_nosave : {
- *(.data.nosave)
+ *(.data..nosave)
}
. = ALIGN(PAGE_SIZE);
__nosave_end = .;
@@ -134,10 +134,10 @@ SECTIONS
__bss_start = .;
/* page table entries need to be PAGE_SIZE aligned */
. = ALIGN(PAGE_SIZE);
- .data.vmpages : {
- *(.data.vm0.pmd)
- *(.data.vm0.pgd)
- *(.data.vm0.pte)
+ .data..vmpages : {
+ *(.data..vm0.pmd)
+ *(.data..vm0.pgd)
+ *(.data..vm0.pte)
}
.bss : {
*(.bss)
@@ -149,8 +149,8 @@ SECTIONS
/* assembler code expects init_task to be 16k aligned */
. = ALIGN(16384);
/* init_task */
- .data.init_task : {
- *(.data.init_task)
+ .data..init_task : {
+ *(.data..init_task)
}

#ifdef CONFIG_64BIT
diff --git a/arch/powerpc/include/asm/cache.h b/arch/powerpc/include/asm/cache.h
index 81de6eb..3f41ab9 100644
--- a/arch/powerpc/include/asm/cache.h
+++ b/arch/powerpc/include/asm/cache.h
@@ -38,7 +38,7 @@ extern struct ppc64_caches ppc64_caches;
#endif /* __powerpc64__ && ! __ASSEMBLY__ */

#if !defined(__ASSEMBLY__)
-#define __read_mostly __attribute__((__section__(".data.read_mostly")))
+#define __read_mostly __attribute__((__section__(".data..read_mostly")))
#endif

#endif /* __KERNEL__ */
diff --git a/arch/powerpc/include/asm/page_64.h b/arch/powerpc/include/asm/page_64.h
index 043bfdf..2307910 100644
--- a/arch/powerpc/include/asm/page_64.h
+++ b/arch/powerpc/include/asm/page_64.h
@@ -157,7 +157,7 @@ do { \
#else
#define __page_aligned \
__attribute__((__aligned__(PAGE_SIZE), \
- __section__(".data.page_aligned")))
+ __section__(".data..page_aligned")))
#endif

#define VM_DATA_DEFAULT_FLAGS \
diff --git a/arch/powerpc/include/asm/ppc_asm.h b/arch/powerpc/include/asm/ppc_asm.h
index 1a0d628..c4759df 100644
--- a/arch/powerpc/include/asm/ppc_asm.h
+++ b/arch/powerpc/include/asm/ppc_asm.h
@@ -193,7 +193,7 @@ name: \
GLUE(.,name):

#define _INIT_GLOBAL(name) \
- .section ".text.init.refok"; \
+ .section ".text..init.refok"; \
.align 2 ; \
.globl name; \
.globl GLUE(.,name); \
@@ -233,7 +233,7 @@ name: \
GLUE(.,name):

#define _INIT_STATIC(name) \
- .section ".text.init.refok"; \
+ .section ".text..init.refok"; \
.align 2 ; \
.section ".opd","aw"; \
name: \
diff --git a/arch/powerpc/kernel/head_32.S b/arch/powerpc/kernel/head_32.S
index d794a63..cead668 100644
--- a/arch/powerpc/kernel/head_32.S
+++ b/arch/powerpc/kernel/head_32.S
@@ -50,7 +50,7 @@
mtspr SPRN_DBAT##n##L,RB; \
1:

- .section .text.head, "ax"
+ .section .text..head, "ax"
.stabs "arch/powerpc/kernel/",N_SO,0,0,0f
.stabs "head_32.S",N_SO,0,0,0f
0:
diff --git a/arch/powerpc/kernel/head_40x.S b/arch/powerpc/kernel/head_40x.S
index 56d8e5d..de69ecd 100644
--- a/arch/powerpc/kernel/head_40x.S
+++ b/arch/powerpc/kernel/head_40x.S
@@ -52,7 +52,7 @@
*
* This is all going to change RSN when we add bi_recs....... -- Dan
*/
- .section .text.head, "ax"
+ .section .text..head, "ax"
_ENTRY(_stext);
_ENTRY(_start);

diff --git a/arch/powerpc/kernel/head_44x.S b/arch/powerpc/kernel/head_44x.S
index b56fecc..ab0b339 100644
--- a/arch/powerpc/kernel/head_44x.S
+++ b/arch/powerpc/kernel/head_44x.S
@@ -50,7 +50,7 @@
* r7 - End of kernel command line string
*
*/
- .section .text.head, "ax"
+ .section .text..head, "ax"
_ENTRY(_stext);
_ENTRY(_start);
/*
diff --git a/arch/powerpc/kernel/head_8xx.S b/arch/powerpc/kernel/head_8xx.S
index 3c9452d..f6d1e67 100644
--- a/arch/powerpc/kernel/head_8xx.S
+++ b/arch/powerpc/kernel/head_8xx.S
@@ -38,7 +38,7 @@
#else
#define DO_8xx_CPU6(val, reg)
#endif
- .section .text.head, "ax"
+ .section .text..head, "ax"
_ENTRY(_stext);
_ENTRY(_start);

diff --git a/arch/powerpc/kernel/head_fsl_booke.S b/arch/powerpc/kernel/head_fsl_booke.S
index 36ffb35..69a93e2 100644
--- a/arch/powerpc/kernel/head_fsl_booke.S
+++ b/arch/powerpc/kernel/head_fsl_booke.S
@@ -53,7 +53,7 @@
* r7 - End of kernel command line string
*
*/
- .section .text.head, "ax"
+ .section .text..head, "ax"
_ENTRY(_stext);
_ENTRY(_start);
/*
diff --git a/arch/powerpc/kernel/init_task.c b/arch/powerpc/kernel/init_task.c
index 688b329..1162c3c 100644
--- a/arch/powerpc/kernel/init_task.c
+++ b/arch/powerpc/kernel/init_task.c
@@ -21,7 +21,7 @@ EXPORT_SYMBOL(init_mm);
* "init_task" linker map entry..
*/
union thread_union init_thread_union
- __attribute__((__section__(".data.init_task"))) =
+ __attribute__((__section__(".data..init_task"))) =
{ INIT_THREAD_INFO(init_task) };

/*
diff --git a/arch/powerpc/kernel/machine_kexec_64.c b/arch/powerpc/kernel/machine_kexec_64.c
index 49e705f..815a13c 100644
--- a/arch/powerpc/kernel/machine_kexec_64.c
+++ b/arch/powerpc/kernel/machine_kexec_64.c
@@ -250,7 +250,7 @@ static void kexec_prepare_cpus(void)
* current, but that audit has not been performed.
*/
static union thread_union kexec_stack
- __attribute__((__section__(".data.init_task"))) = { };
+ __attribute__((__section__(".data..init_task"))) = { };

/* Our assembly helper, in kexec_stub.S */
extern NORET_TYPE void kexec_sequence(void *newstack, unsigned long start,
diff --git a/arch/powerpc/kernel/vdso.c b/arch/powerpc/kernel/vdso.c
index ad06d5c..6685af8 100644
--- a/arch/powerpc/kernel/vdso.c
+++ b/arch/powerpc/kernel/vdso.c
@@ -74,7 +74,7 @@ static int vdso_ready;
static union {
struct vdso_data data;
u8 page[PAGE_SIZE];
-} vdso_data_store __attribute__((__section__(".data.page_aligned")));
+} vdso_data_store __attribute__((__section__(".data..page_aligned")));
struct vdso_data *vdso_data = &vdso_data_store.data;

/* Format of the patch table */
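
The union above is a common idiom worth spelling out; a sketch with
hypothetical names (the alignment itself comes from the ALIGN(PAGE_SIZE)
that the linker scripts place before .data..page_aligned):

    #include <linux/types.h>
    #include <asm/page.h>

    struct my_shared { long seq; };          /* hypothetical payload */

    /* pad the payload to exactly one page so it can be mapped into user
     * space without exposing adjacent kernel data */
    static union {
            struct my_shared data;
            u8 page[PAGE_SIZE];
    } my_store __attribute__((__section__(".data..page_aligned")));
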
diff --git a/arch/powerpc/kernel/vdso32/vdso32_wrapper.S b/arch/powerpc/kernel/vdso32/vdso32_wrapper.S
index 556f0ca..10f61ac 100644
--- a/arch/powerpc/kernel/vdso32/vdso32_wrapper.S
+++ b/arch/powerpc/kernel/vdso32/vdso32_wrapper.S
@@ -1,7 +1,7 @@
#include <linux/init.h>
#include <asm/page.h>

- .section ".data.page_aligned"
+ .section ".data..page_aligned"

.globl vdso32_start, vdso32_end
.balign PAGE_SIZE
diff --git a/arch/powerpc/kernel/vdso64/vdso64_wrapper.S b/arch/powerpc/kernel/vdso64/vdso64_wrapper.S
index 0529cb9..3c1cc59 100644
--- a/arch/powerpc/kernel/vdso64/vdso64_wrapper.S
+++ b/arch/powerpc/kernel/vdso64/vdso64_wrapper.S
@@ -1,7 +1,7 @@
#include <linux/init.h>
#include <asm/page.h>

- .section ".data.page_aligned"
+ .section ".data..page_aligned"

.globl vdso64_start, vdso64_end
.balign PAGE_SIZE
diff --git a/arch/powerpc/kernel/vmlinux.lds.S b/arch/powerpc/kernel/vmlinux.lds.S
index 161b9b9..f8fa660 100644
--- a/arch/powerpc/kernel/vmlinux.lds.S
+++ b/arch/powerpc/kernel/vmlinux.lds.S
@@ -52,9 +52,9 @@ SECTIONS
/* Text and gots */
.text : AT(ADDR(.text) - LOAD_OFFSET) {
ALIGN_FUNCTION();
- *(.text.head)
+ *(.text..head)
_text = .;
- *(.text .fixup .text.init.refok .exit.text.refok __ftr_alt_*)
+ *(.text .fixup .text..init.refok .text..exit.refok __ftr_alt_*)
SCHED_TEXT
LOCK_TEXT
KPROBES_TEXT
@@ -182,10 +182,10 @@ SECTIONS
}
#endif
. = ALIGN(PAGE_SIZE);
- .data.percpu : AT(ADDR(.data.percpu) - LOAD_OFFSET) {
+ .data..percpu : AT(ADDR(.data..percpu) - LOAD_OFFSET) {
__per_cpu_start = .;
- *(.data.percpu)
- *(.data.percpu.shared_aligned)
+ *(.data..percpu)
+ *(.data..percpu.shared_aligned)
__per_cpu_end = .;
}

@@ -259,28 +259,28 @@ SECTIONS
#else
. = ALIGN(16384);
#endif
- .data.init_task : AT(ADDR(.data.init_task) - LOAD_OFFSET) {
- *(.data.init_task)
+ .data..init_task : AT(ADDR(.data..init_task) - LOAD_OFFSET) {
+ *(.data..init_task)
}

. = ALIGN(PAGE_SIZE);
- .data.page_aligned : AT(ADDR(.data.page_aligned) - LOAD_OFFSET) {
- *(.data.page_aligned)
+ .data..page_aligned : AT(ADDR(.data..page_aligned) - LOAD_OFFSET) {
+ *(.data..page_aligned)
}

- .data.cacheline_aligned : AT(ADDR(.data.cacheline_aligned) - LOAD_OFFSET) {
- *(.data.cacheline_aligned)
+ .data..cacheline_aligned : AT(ADDR(.data..cacheline_aligned) - LOAD_OFFSET) {
+ *(.data..cacheline_aligned)
}

. = ALIGN(L1_CACHE_BYTES);
- .data.read_mostly : AT(ADDR(.data.read_mostly) - LOAD_OFFSET) {
- *(.data.read_mostly)
+ .data..read_mostly : AT(ADDR(.data..read_mostly) - LOAD_OFFSET) {
+ *(.data..read_mostly)
}

. = ALIGN(PAGE_SIZE);
.data_nosave : AT(ADDR(.data_nosave) - LOAD_OFFSET) {
__nosave_begin = .;
- *(.data.nosave)
+ *(.data..nosave)
. = ALIGN(PAGE_SIZE);
__nosave_end = .;
}
diff --git a/arch/s390/include/asm/cache.h b/arch/s390/include/asm/cache.h
index 9b86681..24aafa6 100644
--- a/arch/s390/include/asm/cache.h
+++ b/arch/s390/include/asm/cache.h
@@ -14,6 +14,6 @@
#define L1_CACHE_BYTES 256
#define L1_CACHE_SHIFT 8

-#define __read_mostly __attribute__((__section__(".data.read_mostly")))
+#define __read_mostly __attribute__((__section__(".data..read_mostly")))

#endif
diff --git a/arch/s390/kernel/head.S b/arch/s390/kernel/head.S
index ec7e35f..322b3b2 100644
--- a/arch/s390/kernel/head.S
+++ b/arch/s390/kernel/head.S
@@ -35,7 +35,7 @@
#define ARCH_OFFSET 0
#endif

-.section ".text.head","ax"
+.section ".text..head","ax"
#ifndef CONFIG_IPL
.org 0
.long 0x00080000,0x80000000+startup # Just a restart PSW
diff --git a/arch/s390/kernel/init_task.c b/arch/s390/kernel/init_task.c
index 7db95c0..7cfd9af 100644
--- a/arch/s390/kernel/init_task.c
+++ b/arch/s390/kernel/init_task.c
@@ -30,7 +30,7 @@ EXPORT_SYMBOL(init_mm);
* "init_task" linker map entry..
*/
union thread_union init_thread_union
- __attribute__((__section__(".data.init_task"))) =
+ __attribute__((__section__(".data..init_task"))) =
{ INIT_THREAD_INFO(init_task) };

/*
diff --git a/arch/s390/kernel/vdso.c b/arch/s390/kernel/vdso.c
index 690e178..2384c7e 100644
--- a/arch/s390/kernel/vdso.c
+++ b/arch/s390/kernel/vdso.c
@@ -64,7 +64,7 @@ __setup("vdso=", vdso_setup);
static union {
struct vdso_data data;
u8 page[PAGE_SIZE];
-} vdso_data_store __attribute__((__section__(".data.page_aligned")));
+} vdso_data_store __attribute__((__section__(".data..page_aligned")));
struct vdso_data *vdso_data = &vdso_data_store.data;

/*
diff --git a/arch/s390/kernel/vdso32/vdso32_wrapper.S b/arch/s390/kernel/vdso32/vdso32_wrapper.S
index 61639a8..025cb77 100644
--- a/arch/s390/kernel/vdso32/vdso32_wrapper.S
+++ b/arch/s390/kernel/vdso32/vdso32_wrapper.S
@@ -1,7 +1,7 @@
#include <linux/init.h>
#include <asm/page.h>

- .section ".data.page_aligned"
+ .section ".data..page_aligned"

.globl vdso32_start, vdso32_end
.balign PAGE_SIZE
diff --git a/arch/s390/kernel/vdso64/vdso64_wrapper.S b/arch/s390/kernel/vdso64/vdso64_wrapper.S
index d8e2ac1..d26126d 100644
--- a/arch/s390/kernel/vdso64/vdso64_wrapper.S
+++ b/arch/s390/kernel/vdso64/vdso64_wrapper.S
@@ -1,7 +1,7 @@
#include <linux/init.h>
#include <asm/page.h>

- .section ".data.page_aligned"
+ .section ".data..page_aligned"

.globl vdso64_start, vdso64_end
.balign PAGE_SIZE
diff --git a/arch/s390/kernel/vmlinux.lds.S b/arch/s390/kernel/vmlinux.lds.S
index d796d05..41f1d6a 100644
--- a/arch/s390/kernel/vmlinux.lds.S
+++ b/arch/s390/kernel/vmlinux.lds.S
@@ -29,7 +29,7 @@ SECTIONS
. = 0x00000000;
.text : {
_text = .; /* Text and read-only data */
- *(.text.head)
+ *(.text..head)
TEXT_TEXT
SCHED_TEXT
LOCK_TEXT
@@ -66,30 +66,30 @@ SECTIONS
. = ALIGN(PAGE_SIZE);
.data_nosave : {
__nosave_begin = .;
- *(.data.nosave)
+ *(.data..nosave)
}
. = ALIGN(PAGE_SIZE);
__nosave_end = .;

. = ALIGN(PAGE_SIZE);
- .data.page_aligned : {
- *(.data.idt)
+ .data..page_aligned : {
+ *(.data..idt)
}

. = ALIGN(0x100);
- .data.cacheline_aligned : {
- *(.data.cacheline_aligned)
+ .data..cacheline_aligned : {
+ *(.data..cacheline_aligned)
}

. = ALIGN(0x100);
- .data.read_mostly : {
- *(.data.read_mostly)
+ .data..read_mostly : {
+ *(.data..read_mostly)
}
_edata = .; /* End of data section */

. = ALIGN(THREAD_SIZE); /* init_task */
- .data.init_task : {
- *(.data.init_task)
+ .data..init_task : {
+ *(.data..init_task)
}

/* will be freed after init */
diff --git a/arch/sh/include/asm/cache.h b/arch/sh/include/asm/cache.h
index 02df18e..455a9a9 100644
--- a/arch/sh/include/asm/cache.h
+++ b/arch/sh/include/asm/cache.h
@@ -14,7 +14,7 @@

#define L1_CACHE_BYTES (1 << L1_CACHE_SHIFT)

-#define __read_mostly __attribute__((__section__(".data.read_mostly")))
+#define __read_mostly __attribute__((__section__(".data..read_mostly")))

#ifndef __ASSEMBLY__
struct cache_info {
diff --git a/arch/sh/kernel/cpu/sh5/entry.S b/arch/sh/kernel/cpu/sh5/entry.S
index e640c63..433de32 100644
--- a/arch/sh/kernel/cpu/sh5/entry.S
+++ b/arch/sh/kernel/cpu/sh5/entry.S
@@ -2058,10 +2058,10 @@ asm_uaccess_end:


/*
- * --- .text.init Section
+ * --- .text..init Section
*/

- .section .text.init, "ax"
+ .section .text..init, "ax"

/*
* void trap_init (void)
diff --git a/arch/sh/kernel/head_32.S b/arch/sh/kernel/head_32.S
index 788605f..4065946 100644
--- a/arch/sh/kernel/head_32.S
+++ b/arch/sh/kernel/head_32.S
@@ -40,7 +40,7 @@ ENTRY(empty_zero_page)
1:
.skip PAGE_SIZE - empty_zero_page - 1b

- .section .text.head, "ax"
+ .section .text..head, "ax"

/*
* Condition at the entry of _stext:
diff --git a/arch/sh/kernel/head_64.S b/arch/sh/kernel/head_64.S
index 7ccfb99..c3543d1 100644
--- a/arch/sh/kernel/head_64.S
+++ b/arch/sh/kernel/head_64.S
@@ -110,7 +110,7 @@ empty_bad_pte_table:
fpu_in_use: .quad 0


- .section .text.head, "ax"
+ .section .text..head, "ax"
.balign L1_CACHE_BYTES
/*
* Condition at the entry of __stext:
diff --git a/arch/sh/kernel/init_task.c b/arch/sh/kernel/init_task.c
index 80c35ff..af29479 100644
--- a/arch/sh/kernel/init_task.c
+++ b/arch/sh/kernel/init_task.c
@@ -21,7 +21,7 @@ EXPORT_SYMBOL(init_mm);
* "init_task" linker map entry..
*/
union thread_union init_thread_union
- __attribute__((__section__(".data.init_task"))) =
+ __attribute__((__section__(".data..init_task"))) =
{ INIT_THREAD_INFO(init_task) };

/*
diff --git a/arch/sh/kernel/irq.c b/arch/sh/kernel/irq.c
index 64b7690..acf65fb 100644
--- a/arch/sh/kernel/irq.c
+++ b/arch/sh/kernel/irq.c
@@ -158,10 +158,10 @@ asmlinkage int do_IRQ(unsigned int irq, struct pt_regs *regs)

#ifdef CONFIG_IRQSTACKS
static char softirq_stack[NR_CPUS * THREAD_SIZE]
- __attribute__((__section__(".bss.page_aligned")));
+ __attribute__((__section__(".bss..page_aligned")));

static char hardirq_stack[NR_CPUS * THREAD_SIZE]
- __attribute__((__section__(".bss.page_aligned")));
+ __attribute__((__section__(".bss..page_aligned")));

/*
* allocate per-cpu stacks for hardirq and for softirq processing
diff --git a/arch/sh/kernel/vmlinux_32.lds.S b/arch/sh/kernel/vmlinux_32.lds.S
index 7b4b82b..5add714 100644
--- a/arch/sh/kernel/vmlinux_32.lds.S
+++ b/arch/sh/kernel/vmlinux_32.lds.S
@@ -28,7 +28,7 @@ SECTIONS
} = 0

.text : {
- *(.text.head)
+ *(.text..head)
TEXT_TEXT
SCHED_TEXT
LOCK_TEXT
@@ -58,19 +58,19 @@ SECTIONS

. = ALIGN(THREAD_SIZE);
.data : { /* Data */
- *(.data.init_task)
+ *(.data..init_task)

. = ALIGN(L1_CACHE_BYTES);
- *(.data.cacheline_aligned)
+ *(.data..cacheline_aligned)

. = ALIGN(L1_CACHE_BYTES);
- *(.data.read_mostly)
+ *(.data..read_mostly)

. = ALIGN(PAGE_SIZE);
- *(.data.page_aligned)
+ *(.data..page_aligned)

__nosave_begin = .;
- *(.data.nosave)
+ *(.data..nosave)
. = ALIGN(PAGE_SIZE);
__nosave_end = .;

@@ -128,7 +128,7 @@ SECTIONS
.bss : {
__init_end = .;
__bss_start = .; /* BSS */
- *(.bss.page_aligned)
+ *(.bss..page_aligned)
*(.bss)
*(COMMON)
. = ALIGN(4);
diff --git a/arch/sh/kernel/vmlinux_64.lds.S b/arch/sh/kernel/vmlinux_64.lds.S
index 33fa464..c7d5f5b 100644
--- a/arch/sh/kernel/vmlinux_64.lds.S
+++ b/arch/sh/kernel/vmlinux_64.lds.S
@@ -42,7 +42,7 @@ SECTIONS
} = 0

.text : C_PHYS(.text) {
- *(.text.head)
+ *(.text..head)
TEXT_TEXT
*(.text64)
*(.text..SHmedia32)
@@ -70,19 +70,19 @@ SECTIONS

. = ALIGN(THREAD_SIZE);
.data : C_PHYS(.data) { /* Data */
- *(.data.init_task)
+ *(.data..init_task)

. = ALIGN(L1_CACHE_BYTES);
- *(.data.cacheline_aligned)
+ *(.data..cacheline_aligned)

. = ALIGN(L1_CACHE_BYTES);
- *(.data.read_mostly)
+ *(.data..read_mostly)

. = ALIGN(PAGE_SIZE);
- *(.data.page_aligned)
+ *(.data..page_aligned)

__nosave_begin = .;
- *(.data.nosave)
+ *(.data..nosave)
. = ALIGN(PAGE_SIZE);
__nosave_end = .;

@@ -140,7 +140,7 @@ SECTIONS
.bss : C_PHYS(.bss) {
__init_end = .;
__bss_start = .; /* BSS */
- *(.bss.page_aligned)
+ *(.bss..page_aligned)
*(.bss)
*(COMMON)
. = ALIGN(4);
diff --git a/arch/sparc/boot/btfixupprep.c b/arch/sparc/boot/btfixupprep.c
index 52a4208..e899f2a 100644
--- a/arch/sparc/boot/btfixupprep.c
+++ b/arch/sparc/boot/btfixupprep.c
@@ -171,7 +171,7 @@ main1:
}
} else if (buffer[nbase+4] != '_')
continue;
- if (!strcmp (sect, ".text.exit"))
+ if (!strcmp (sect, ".text..exit"))
continue;
if (strcmp (sect, ".text") &&
strcmp (sect, ".init.text") &&
@@ -325,7 +325,7 @@ main1:
(*rr)->next = NULL;
}
printf("! Generated by btfixupprep. Do not edit.\n\n");
- printf("\t.section\t\".data.init\",#alloc,#write\n\t.align\t4\n\n");
+ printf("\t.section\t\".data..init\",#alloc,#write\n\t.align\t4\n\n");
printf("\t.global\t___btfixup_start\n___btfixup_start:\n\n");
for (i = 0; i < last; i++) {
f = array + i;
diff --git a/arch/sparc/include/asm/cache.h b/arch/sparc/include/asm/cache.h
index 41f85ae..2909f0a 100644
--- a/arch/sparc/include/asm/cache.h
+++ b/arch/sparc/include/asm/cache.h
@@ -19,7 +19,7 @@

#define SMP_CACHE_BYTES (1 << SMP_CACHE_BYTES_SHIFT)

-#define __read_mostly __attribute__((__section__(".data.read_mostly")))
+#define __read_mostly __attribute__((__section__(".data..read_mostly")))

#ifdef CONFIG_SPARC32
#include <asm/asi.h>
diff --git a/arch/sparc/kernel/head_32.S b/arch/sparc/kernel/head_32.S
index f0b4b51..0ca3dc5 100644
--- a/arch/sparc/kernel/head_32.S
+++ b/arch/sparc/kernel/head_32.S
@@ -72,7 +72,7 @@ sun4e_notsup:
.align 4

/* The Sparc trap table, bootloader gives us control at _start. */
- .section .text.head,"ax"
+ .section .text..head,"ax"
.globl start, _stext, _start, __stext
.globl trapbase
_start: /* danger danger */
@@ -735,7 +735,7 @@ go_to_highmem:
nop

/* The code above should be at beginning and we have to take care about
- * short jumps, as branching to .text.init section from .text is usually
+ * short jumps, as branching to .text..init section from .text is usually
* impossible */
__INIT
/* Acquire boot time privileged register values, this will help debugging.
diff --git a/arch/sparc/kernel/head_64.S b/arch/sparc/kernel/head_64.S
index a46c3a2..ad713f6 100644
--- a/arch/sparc/kernel/head_64.S
+++ b/arch/sparc/kernel/head_64.S
@@ -467,7 +467,7 @@ jump_to_sun4u_init:
jmpl %g2 + %g0, %g0
nop

- .section .text.init.refok
+ .section .text..init.refok
sun4u_init:
BRANCH_IF_SUN4V(g1, sun4v_init)

diff --git a/arch/sparc/kernel/init_task.c b/arch/sparc/kernel/init_task.c
index f28cb82..7f7a468 100644
--- a/arch/sparc/kernel/init_task.c
+++ b/arch/sparc/kernel/init_task.c
@@ -22,5 +22,5 @@ EXPORT_SYMBOL(init_task);
* in etrap.S which assumes it.
*/
union thread_union init_thread_union
- __attribute__((section (".data.init_task")))
+ __attribute__((section (".data..init_task")))
= { INIT_THREAD_INFO(init_task) };
diff --git a/arch/sparc/kernel/vmlinux.lds.S b/arch/sparc/kernel/vmlinux.lds.S
index 7626708..66eddbe 100644
--- a/arch/sparc/kernel/vmlinux.lds.S
+++ b/arch/sparc/kernel/vmlinux.lds.S
@@ -41,7 +41,7 @@ SECTIONS
.text TEXTSTART :
{
_text = .;
- *(.text.head)
+ *(.text..head)
TEXT_TEXT
SCHED_TEXT
LOCK_TEXT
@@ -59,20 +59,20 @@ SECTIONS
*(.data1)
}
. = ALIGN(SMP_CACHE_BYTES);
- .data.cacheline_aligned : {
- *(.data.cacheline_aligned)
+ .data..cacheline_aligned : {
+ *(.data..cacheline_aligned)
}
. = ALIGN(SMP_CACHE_BYTES);
- .data.read_mostly : {
- *(.data.read_mostly)
+ .data..read_mostly : {
+ *(.data..read_mostly)
}
/* End of data section */
_edata = .;

/* init_task */
. = ALIGN(THREAD_SIZE);
- .data.init_task : {
- *(.data.init_task)
+ .data..init_task : {
+ *(.data..init_task)
}
.fixup : {
__start___fixup = .;
diff --git a/arch/um/include/asm/common.lds.S b/arch/um/include/asm/common.lds.S
index cb02486..b623606 100644
--- a/arch/um/include/asm/common.lds.S
+++ b/arch/um/include/asm/common.lds.S
@@ -49,9 +49,9 @@
}

. = ALIGN(32);
- .data.percpu : {
+ .data..percpu : {
__per_cpu_start = . ;
- *(.data.percpu)
+ *(.data..percpu)
__per_cpu_end = . ;
}

diff --git a/arch/um/kernel/dyn.lds.S b/arch/um/kernel/dyn.lds.S
index 9975e1a..a415658 100644
--- a/arch/um/kernel/dyn.lds.S
+++ b/arch/um/kernel/dyn.lds.S
@@ -97,9 +97,9 @@ SECTIONS
.fini_array : { *(.fini_array) }
.data : {
. = ALIGN(KERNEL_STACK_SIZE); /* init_task */
- *(.data.init_task)
+ *(.data..init_task)
. = ALIGN(KERNEL_STACK_SIZE);
- *(.data.init_irqstack)
+ *(.data..init_irqstack)
DATA_DATA
*(.data.* .gnu.linkonce.d.*)
SORT(CONSTRUCTORS)
diff --git a/arch/um/kernel/init_task.c b/arch/um/kernel/init_task.c
index 806d381..eed073e 100644
--- a/arch/um/kernel/init_task.c
+++ b/arch/um/kernel/init_task.c
@@ -34,9 +34,9 @@ EXPORT_SYMBOL(init_task);
*/

union thread_union init_thread_union
- __attribute__((__section__(".data.init_task"))) =
+ __attribute__((__section__(".data..init_task"))) =
{ INIT_THREAD_INFO(init_task) };

union thread_union cpu0_irqstack
- __attribute__((__section__(".data.init_irqstack"))) =
+ __attribute__((__section__(".data..init_irqstack"))) =
{ INIT_THREAD_INFO(init_task) };
diff --git a/arch/um/kernel/uml.lds.S b/arch/um/kernel/uml.lds.S
index 11b8352..7b03348 100644
--- a/arch/um/kernel/uml.lds.S
+++ b/arch/um/kernel/uml.lds.S
@@ -53,9 +53,9 @@ SECTIONS
.data :
{
. = ALIGN(KERNEL_STACK_SIZE); /* init_task */
- *(.data.init_task)
+ *(.data..init_task)
. = ALIGN(KERNEL_STACK_SIZE);
- *(.data.init_irqstack)
+ *(.data..init_irqstack)
DATA_DATA
*(.gnu.linkonce.d*)
CONSTRUCTORS
diff --git a/arch/x86/boot/compressed/head_32.S b/arch/x86/boot/compressed/head_32.S
index 29c5fbf..b085d1f 100644
--- a/arch/x86/boot/compressed/head_32.S
+++ b/arch/x86/boot/compressed/head_32.S
@@ -29,7 +29,7 @@
#include <asm/boot.h>
#include <asm/asm-offsets.h>

-.section ".text.head","ax",@progbits
+.section ".text..head","ax",@progbits
.globl startup_32

startup_32:
diff --git a/arch/x86/boot/compressed/head_64.S b/arch/x86/boot/compressed/head_64.S
index 1d5dff4..e3b4559 100644
--- a/arch/x86/boot/compressed/head_64.S
+++ b/arch/x86/boot/compressed/head_64.S
@@ -33,7 +33,7 @@
#include <asm/processor-flags.h>
#include <asm/asm-offsets.h>

-.section ".text.head"
+.section ".text..head"
.code32
.globl startup_32

diff --git a/arch/x86/boot/compressed/relocs.c b/arch/x86/boot/compressed/relocs.c
index 857e492..af8b9c5 100644
--- a/arch/x86/boot/compressed/relocs.c
+++ b/arch/x86/boot/compressed/relocs.c
@@ -559,7 +559,7 @@ static void emit_relocs(int as_text)
/* Print the relocations in a form suitable that
* gas will like.
*/
- printf(".section \".data.reloc\",\"a\"\n");
+ printf(".section \".data..reloc\",\"a\"\n");
printf(".balign 4\n");
for (i = 0; i < reloc_count; i++) {
printf("\t .long 0x%08lx\n", relocs[i]);
diff --git a/arch/x86/boot/compressed/vmlinux.scr b/arch/x86/boot/compressed/vmlinux.scr
index f02382a..862d748 100644
--- a/arch/x86/boot/compressed/vmlinux.scr
+++ b/arch/x86/boot/compressed/vmlinux.scr
@@ -1,6 +1,6 @@
SECTIONS
{
- .rodata.compressed : {
+ .rodata..compressed : {
input_len = .;
LONG(input_data_end - input_data) input_data = .;
*(.data)
diff --git a/arch/x86/boot/compressed/vmlinux_32.lds b/arch/x86/boot/compressed/vmlinux_32.lds
index bb3c483..d70318a 100644
--- a/arch/x86/boot/compressed/vmlinux_32.lds
+++ b/arch/x86/boot/compressed/vmlinux_32.lds
@@ -7,13 +7,13 @@ SECTIONS
* address 0.
*/
. = 0;
- .text.head : {
+ .text..head : {
_head = . ;
- *(.text.head)
+ *(.text..head)
_ehead = . ;
}
- .rodata.compressed : {
- *(.rodata.compressed)
+ .rodata..compressed : {
+ *(.rodata..compressed)
}
.text : {
_text = .; /* Text */
@@ -21,6 +21,10 @@ SECTIONS
*(.text.*)
_etext = . ;
}
+ .got : {
+ *(.got)
+ *(.got.*)
+ }
.rodata : {
_rodata = . ;
*(.rodata) /* read-only data */
@@ -40,4 +44,6 @@ SECTIONS
*(COMMON)
_end = . ;
}
+ /* Be bold, and discard everything not explicitly mentioned */
+ /DISCARD/ : { *(*) }
}
diff --git a/arch/x86/boot/compressed/vmlinux_64.lds b/arch/x86/boot/compressed/vmlinux_64.lds
index bef1ac8..d3d1468 100644
--- a/arch/x86/boot/compressed/vmlinux_64.lds
+++ b/arch/x86/boot/compressed/vmlinux_64.lds
@@ -7,13 +7,13 @@ SECTIONS
* address 0.
*/
. = 0;
- .text.head : {
+ .text..head : {
_head = . ;
- *(.text.head)
+ *(.text..head)
_ehead = . ;
}
- .rodata.compressed : {
- *(.rodata.compressed)
+ .rodata..compressed : {
+ *(.rodata..compressed)
}
.text : {
_text = .; /* Text */
@@ -45,4 +45,6 @@ SECTIONS
. = . + 4096 * 6;
_ebss = .;
}
+ /* Be bold, and discard everything not explicitly mentioned */
+ /DISCARD/ : { *(*) }
}
diff --git a/arch/x86/include/asm/cache.h b/arch/x86/include/asm/cache.h
index 5d367ca..6b94d49 100644
--- a/arch/x86/include/asm/cache.h
+++ b/arch/x86/include/asm/cache.h
@@ -5,7 +5,7 @@
#define L1_CACHE_SHIFT (CONFIG_X86_L1_CACHE_SHIFT)
#define L1_CACHE_BYTES (1 << L1_CACHE_SHIFT)

-#define __read_mostly __attribute__((__section__(".data.read_mostly")))
+#define __read_mostly __attribute__((__section__(".data..read_mostly")))

#ifdef CONFIG_X86_VSMP
/* vSMP Internode cacheline shift */
@@ -13,7 +13,7 @@
#ifdef CONFIG_SMP
#define __cacheline_aligned_in_smp \
__attribute__((__aligned__(1 << (INTERNODE_CACHE_SHIFT)))) \
- __attribute__((__section__(".data.page_aligned")))
+ __attribute__((__section__(".data..page_aligned")))
#endif
#endif

diff --git a/arch/x86/kernel/acpi/wakeup_32.S b/arch/x86/kernel/acpi/wakeup_32.S
index a12e6a9..b3f6fbf 100644
--- a/arch/x86/kernel/acpi/wakeup_32.S
+++ b/arch/x86/kernel/acpi/wakeup_32.S
@@ -1,4 +1,4 @@
- .section .text.page_aligned
+ .section .text..page_aligned
#include <linux/linkage.h>
#include <asm/segment.h>
#include <asm/page.h>
diff --git a/arch/x86/kernel/head_32.S b/arch/x86/kernel/head_32.S
index e835b4e..65b23f8 100644
--- a/arch/x86/kernel/head_32.S
+++ b/arch/x86/kernel/head_32.S
@@ -81,7 +81,7 @@ INIT_MAP_BEYOND_END = BOOTBITMAP_SIZE + (PAGE_TABLE_SIZE + ALLOCATOR_SLOP)*PAGE_
* any particular GDT layout, because we load our own as soon as we
* can.
*/
-.section .text.head,"ax",@progbits
+.section .text..head,"ax",@progbits
ENTRY(startup_32)
/* test KEEP_SEGMENTS flag to see if the bootloader is asking
us to not reload segments */
@@ -609,7 +609,7 @@ ENTRY(_stext)
/*
* BSS section
*/
-.section ".bss.page_aligned","wa"
+.section ".bss..page_aligned","wa"
.align PAGE_SIZE_asm
#ifdef CONFIG_X86_PAE
swapper_pg_pmd:
@@ -626,7 +626,7 @@ ENTRY(empty_zero_page)
* This starts the data section.
*/
#ifdef CONFIG_X86_PAE
-.section ".data.page_aligned","wa"
+.section ".data..page_aligned","wa"
/* Page-aligned for the benefit of paravirt? */
.align PAGE_SIZE_asm
ENTRY(swapper_pg_dir)
diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S
index 0e275d4..643d5d3 100644
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -40,7 +40,7 @@ L4_START_KERNEL = pgd_index(__START_KERNEL_map)
L3_START_KERNEL = pud_index(__START_KERNEL_map)

.text
- .section .text.head
+ .section .text..head
.code64
.globl startup_64
startup_64:
@@ -414,7 +414,7 @@ ENTRY(phys_base)
ENTRY(idt_table)
.skip 256 * 16

- .section .bss.page_aligned, "aw", @nobits
+ .section .bss..page_aligned, "aw", @nobits
.align PAGE_SIZE
ENTRY(empty_zero_page)
.skip PAGE_SIZE
diff --git a/arch/x86/kernel/init_task.c b/arch/x86/kernel/init_task.c
index df3bf26..103b1c5 100644
--- a/arch/x86/kernel/init_task.c
+++ b/arch/x86/kernel/init_task.c
@@ -22,7 +22,7 @@ struct mm_struct init_mm = INIT_MM(init_mm);
* "init_task" linker map entry..
*/
union thread_union init_thread_union
- __attribute__((__section__(".data.init_task"))) =
+ __attribute__((__section__(".data..init_task"))) =
{ INIT_THREAD_INFO(init_task) };

/*
@@ -36,7 +36,7 @@ EXPORT_SYMBOL(init_task);
/*
* per-CPU TSS segments. Threads are completely 'soft' on Linux,
* no more per-task TSS's. The TSS size is kept cacheline-aligned
- * so they are allowed to end up in the .data.cacheline_aligned
+ * so they are allowed to end up in the .data..cacheline_aligned
* section. Since TSS's are completely CPU-local, we want them
* on exact cacheline boundaries, to eliminate cacheline ping-pong.
*/
diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
index a9e7548..75102d8 100644
--- a/arch/x86/kernel/traps.c
+++ b/arch/x86/kernel/traps.c
@@ -78,7 +78,7 @@ char ignore_fpu_irq;
* for this.
*/
gate_desc idt_table[256]
- __attribute__((__section__(".data.idt"))) = { { { { 0, 0 } } }, };
+ __attribute__((__section__(".data..idt"))) = { { { { 0, 0 } } }, };
#endif

DECLARE_BITMAP(used_vectors, NR_VECTORS);
diff --git a/arch/x86/kernel/vmlinux_32.lds.S b/arch/x86/kernel/vmlinux_32.lds.S
index 82c6755..31c131a 100644
--- a/arch/x86/kernel/vmlinux_32.lds.S
+++ b/arch/x86/kernel/vmlinux_32.lds.S
@@ -31,15 +31,15 @@ SECTIONS
. = LOAD_OFFSET + LOAD_PHYSICAL_ADDR;
phys_startup_32 = startup_32 - LOAD_OFFSET;

- .text.head : AT(ADDR(.text.head) - LOAD_OFFSET) {
+ .text..head : AT(ADDR(.text..head) - LOAD_OFFSET) {
_text = .; /* Text and read-only data */
- *(.text.head)
+ *(.text..head)
} :text = 0x9090

/* read-only */
.text : AT(ADDR(.text) - LOAD_OFFSET) {
. = ALIGN(PAGE_SIZE); /* not really needed, already page aligned */
- *(.text.page_aligned)
+ *(.text..page_aligned)
TEXT_TEXT
SCHED_TEXT
LOCK_TEXT
@@ -71,32 +71,32 @@ SECTIONS
. = ALIGN(PAGE_SIZE);
.data_nosave : AT(ADDR(.data_nosave) - LOAD_OFFSET) {
__nosave_begin = .;
- *(.data.nosave)
+ *(.data..nosave)
. = ALIGN(PAGE_SIZE);
__nosave_end = .;
}

. = ALIGN(PAGE_SIZE);
- .data.page_aligned : AT(ADDR(.data.page_aligned) - LOAD_OFFSET) {
- *(.data.page_aligned)
- *(.data.idt)
+ .data..page_aligned : AT(ADDR(.data..page_aligned) - LOAD_OFFSET) {
+ *(.data..page_aligned)
+ *(.data..idt)
}

. = ALIGN(32);
- .data.cacheline_aligned : AT(ADDR(.data.cacheline_aligned) - LOAD_OFFSET) {
- *(.data.cacheline_aligned)
+ .data..cacheline_aligned : AT(ADDR(.data..cacheline_aligned) - LOAD_OFFSET) {
+ *(.data..cacheline_aligned)
}

/* rarely changed data like cpu maps */
. = ALIGN(32);
- .data.read_mostly : AT(ADDR(.data.read_mostly) - LOAD_OFFSET) {
- *(.data.read_mostly)
+ .data..read_mostly : AT(ADDR(.data..read_mostly) - LOAD_OFFSET) {
+ *(.data..read_mostly)
_edata = .; /* End of data section */
}

. = ALIGN(THREAD_SIZE); /* init_task */
- .data.init_task : AT(ADDR(.data.init_task) - LOAD_OFFSET) {
- *(.data.init_task)
+ .data..init_task : AT(ADDR(.data..init_task) - LOAD_OFFSET) {
+ *(.data..init_task)
}

/* might get freed after init */
@@ -179,11 +179,11 @@ SECTIONS
}
#endif
. = ALIGN(PAGE_SIZE);
- .data.percpu : AT(ADDR(.data.percpu) - LOAD_OFFSET) {
+ .data..percpu : AT(ADDR(.data..percpu) - LOAD_OFFSET) {
__per_cpu_start = .;
- *(.data.percpu.page_aligned)
- *(.data.percpu)
- *(.data.percpu.shared_aligned)
+ *(.data..percpu.page_aligned)
+ *(.data..percpu)
+ *(.data..percpu.shared_aligned)
__per_cpu_end = .;
}
. = ALIGN(PAGE_SIZE);
@@ -192,7 +192,7 @@ SECTIONS
.bss : AT(ADDR(.bss) - LOAD_OFFSET) {
__init_end = .;
__bss_start = .; /* BSS */
- *(.bss.page_aligned)
+ *(.bss..page_aligned)
*(.bss)
. = ALIGN(4);
__bss_stop = .;
diff --git a/arch/x86/kernel/vmlinux_64.lds.S b/arch/x86/kernel/vmlinux_64.lds.S
index 1a614c0..56ed592 100644
--- a/arch/x86/kernel/vmlinux_64.lds.S
+++ b/arch/x86/kernel/vmlinux_64.lds.S
@@ -28,7 +28,7 @@ SECTIONS
_text = .; /* Text and read-only data */
.text : AT(ADDR(.text) - LOAD_OFFSET) {
/* First the code that has to be first for bootstrapping */
- *(.text.head)
+ *(.text..head)
_stext = .;
/* Then the rest */
TEXT_TEXT
@@ -63,17 +63,17 @@ SECTIONS

. = ALIGN(PAGE_SIZE);
. = ALIGN(CONFIG_X86_L1_CACHE_BYTES);
- .data.cacheline_aligned : AT(ADDR(.data.cacheline_aligned) - LOAD_OFFSET) {
- *(.data.cacheline_aligned)
+ .data..cacheline_aligned : AT(ADDR(.data..cacheline_aligned) - LOAD_OFFSET) {
+ *(.data..cacheline_aligned)
}
. = ALIGN(CONFIG_X86_INTERNODE_CACHE_BYTES);
- .data.read_mostly : AT(ADDR(.data.read_mostly) - LOAD_OFFSET) {
- *(.data.read_mostly)
+ .data..read_mostly : AT(ADDR(.data..read_mostly) - LOAD_OFFSET) {
+ *(.data..read_mostly)
}

#define VSYSCALL_ADDR (-10*1024*1024)
-#define VSYSCALL_PHYS_ADDR ((LOADADDR(.data.read_mostly) + SIZEOF(.data.read_mostly) + 4095) & ~(4095))
-#define VSYSCALL_VIRT_ADDR ((ADDR(.data.read_mostly) + SIZEOF(.data.read_mostly) + 4095) & ~(4095))
+#define VSYSCALL_PHYS_ADDR ((LOADADDR(.data..read_mostly) + SIZEOF(.data..read_mostly) + 4095) & ~(4095))
+#define VSYSCALL_VIRT_ADDR ((ADDR(.data..read_mostly) + SIZEOF(.data..read_mostly) + 4095) & ~(4095))

#define VLOAD_OFFSET (VSYSCALL_ADDR - VSYSCALL_PHYS_ADDR)
#define VLOAD(x) (ADDR(x) - VLOAD_OFFSET)
@@ -122,13 +122,13 @@ SECTIONS
#undef VVIRT

. = ALIGN(THREAD_SIZE); /* init_task */
- .data.init_task : AT(ADDR(.data.init_task) - LOAD_OFFSET) {
- *(.data.init_task)
+ .data..init_task : AT(ADDR(.data..init_task) - LOAD_OFFSET) {
+ *(.data..init_task)
}:data.init

. = ALIGN(PAGE_SIZE);
- .data.page_aligned : AT(ADDR(.data.page_aligned) - LOAD_OFFSET) {
- *(.data.page_aligned)
+ .data..page_aligned : AT(ADDR(.data..page_aligned) - LOAD_OFFSET) {
+ *(.data..page_aligned)
}

/* might get freed after init */
@@ -215,13 +215,13 @@ SECTIONS

. = ALIGN(PAGE_SIZE);
__nosave_begin = .;
- .data_nosave : AT(ADDR(.data_nosave) - LOAD_OFFSET) { *(.data.nosave) }
+ .data_nosave : AT(ADDR(.data_nosave) - LOAD_OFFSET) { *(.data..nosave) }
. = ALIGN(PAGE_SIZE);
__nosave_end = .;

__bss_start = .; /* BSS */
.bss : AT(ADDR(.bss) - LOAD_OFFSET) {
- *(.bss.page_aligned)
+ *(.bss..page_aligned)
*(.bss)
}
__bss_stop = .;
diff --git a/arch/xtensa/kernel/head.S b/arch/xtensa/kernel/head.S
index 67e6913..b515e04 100644
--- a/arch/xtensa/kernel/head.S
+++ b/arch/xtensa/kernel/head.S
@@ -234,7 +234,7 @@ should_never_return:
* BSS section
*/

-.section ".bss.page_aligned", "w"
+.section ".bss..page_aligned", "w"
ENTRY(swapper_pg_dir)
.fill PAGE_SIZE, 1, 0
ENTRY(empty_zero_page)
diff --git a/arch/xtensa/kernel/init_task.c b/arch/xtensa/kernel/init_task.c
index e07f5c9..d07db9b 100644
--- a/arch/xtensa/kernel/init_task.c
+++ b/arch/xtensa/kernel/init_task.c
@@ -28,7 +28,7 @@ struct mm_struct init_mm = INIT_MM(init_mm);
EXPORT_SYMBOL(init_mm);

union thread_union init_thread_union
- __attribute__((__section__(".data.init_task"))) =
+ __attribute__((__section__(".data..init_task"))) =
{ INIT_THREAD_INFO(init_task) };

struct task_struct init_task = INIT_TASK(init_task);
diff --git a/arch/xtensa/kernel/vmlinux.lds.S b/arch/xtensa/kernel/vmlinux.lds.S
index d506774..06aa28b 100644
--- a/arch/xtensa/kernel/vmlinux.lds.S
+++ b/arch/xtensa/kernel/vmlinux.lds.S
@@ -121,14 +121,14 @@ SECTIONS
DATA_DATA
CONSTRUCTORS
. = ALIGN(XCHAL_ICACHE_LINESIZE);
- *(.data.cacheline_aligned)
+ *(.data..cacheline_aligned)
}

_edata = .;

/* The initial task */
. = ALIGN(8192);
- .data.init_task : { *(.data.init_task) }
+ .data..init_task : { *(.data..init_task) }

/* Initialization code and data: */

@@ -259,7 +259,7 @@ SECTIONS

/* BSS section */
_bss_start = .;
- .bss : { *(.bss.page_aligned) *(.bss) }
+ .bss : { *(.bss..page_aligned) *(.bss) }
_bss_end = .;

_end = .;
diff --git a/include/asm-frv/init.h b/include/asm-frv/init.h
index 8b15838..4d21473 100644
--- a/include/asm-frv/init.h
+++ b/include/asm-frv/init.h
@@ -1,12 +1,12 @@
#ifndef _ASM_INIT_H
#define _ASM_INIT_H

-#define __init __attribute__ ((__section__ (".text.init")))
-#define __initdata __attribute__ ((__section__ (".data.init")))
+#define __init __attribute__ ((__section__ (".text..init")))
+#define __initdata __attribute__ ((__section__ (".data..init")))
/* For assembly routines */
-#define __INIT .section ".text.init",#alloc,#execinstr
+#define __INIT .section ".text..init",#alloc,#execinstr
#define __FINIT .previous
-#define __INITDATA .section ".data.init",#alloc,#write
+#define __INITDATA .section ".data..init",#alloc,#write

#endif

diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h
index c61fab1..0485f0b 100644
--- a/include/asm-generic/vmlinux.lds.h
+++ b/include/asm-generic/vmlinux.lds.h
@@ -64,7 +64,7 @@
/* .data section */
#define DATA_DATA \
*(.data) \
- *(.data.init.refok) \
+ *(.data..init.refok) \
*(.ref.data) \
DEV_KEEP(init.data) \
DEV_KEEP(exit.data) \
@@ -255,8 +255,8 @@
*(.text.hot) \
*(.text) \
*(.ref.text) \
- *(.text.init.refok) \
- *(.exit.text.refok) \
+ *(.text..init.refok) \
+ *(.text..exit.refok) \
DEV_KEEP(init.text) \
DEV_KEEP(exit.text) \
CPU_KEEP(init.text) \
@@ -433,9 +433,9 @@
#define PERCPU(align) \
. = ALIGN(align); \
VMLINUX_SYMBOL(__per_cpu_start) = .; \
- .data.percpu : AT(ADDR(.data.percpu) - LOAD_OFFSET) { \
- *(.data.percpu.page_aligned) \
- *(.data.percpu) \
- *(.data.percpu.shared_aligned) \
+ .data..percpu : AT(ADDR(.data..percpu) - LOAD_OFFSET) { \
+ *(.data..percpu.page_aligned) \
+ *(.data..percpu) \
+ *(.data..percpu.shared_aligned) \
} \
VMLINUX_SYMBOL(__per_cpu_end) = .;
diff --git a/include/linux/cache.h b/include/linux/cache.h
index 97e2488..4c57065 100644
--- a/include/linux/cache.h
+++ b/include/linux/cache.h
@@ -31,7 +31,7 @@
#ifndef __cacheline_aligned
#define __cacheline_aligned \
__attribute__((__aligned__(SMP_CACHE_BYTES), \
- __section__(".data.cacheline_aligned")))
+ __section__(".data..cacheline_aligned")))
#endif /* __cacheline_aligned */

#ifndef __cacheline_aligned_in_smp
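
A sketch of the generic helper in use (struct name hypothetical):
__cacheline_aligned both aligns the object to SMP_CACHE_BYTES and moves
it into .data..cacheline_aligned, so the group as a whole cannot straddle
lines written by unrelated data:

    struct my_stats {
            unsigned long hits;
            unsigned long misses;
    } hot_stats __cacheline_aligned;        /* -> .data..cacheline_aligned */
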
diff --git a/include/linux/init.h b/include/linux/init.h
index 68cb026..f87423d 100644
--- a/include/linux/init.h
+++ b/include/linux/init.h
@@ -62,9 +62,9 @@

/* backward compatibility note
* A few places hardcode the old section names:
- * .text.init.refok
- * .data.init.refok
- * .exit.text.refok
+ * .text..init.refok
+ * .data..init.refok
+ * .text..exit.refok
* They should be converted to use the defines from this file
*/

@@ -301,7 +301,7 @@ void __init parse_early_param(void);
#endif

/* Data marked not to be saved by software suspend */
-#define __nosavedata __section(.data.nosave)
+#define __nosavedata __section(.data..nosave)

/* This means "can be init if no module support, otherwise module load
may call it." */
diff --git a/include/linux/linkage.h b/include/linux/linkage.h
index fee9e59..bddaee6 100644
--- a/include/linux/linkage.h
+++ b/include/linux/linkage.h
@@ -18,8 +18,8 @@
# define asmregparm
#endif

-#define __page_aligned_data __section(.data.page_aligned) __aligned(PAGE_SIZE)
-#define __page_aligned_bss __section(.bss.page_aligned) __aligned(PAGE_SIZE)
+#define __page_aligned_data __section(.data..page_aligned) __aligned(PAGE_SIZE)
+#define __page_aligned_bss __section(.bss..page_aligned) __aligned(PAGE_SIZE)

/*
* This is used by architectures to keep arguments on the stack
diff --git a/include/linux/percpu.h b/include/linux/percpu.h
index 9f2a375..c89a214 100644
--- a/include/linux/percpu.h
+++ b/include/linux/percpu.h
@@ -10,13 +10,13 @@

#ifdef CONFIG_SMP
#define DEFINE_PER_CPU(type, name) \
- __attribute__((__section__(".data.percpu"))) \
+ __attribute__((__section__(".data..percpu"))) \
PER_CPU_ATTRIBUTES __typeof__(type) per_cpu__##name

#ifdef MODULE
-#define SHARED_ALIGNED_SECTION ".data.percpu"
+#define SHARED_ALIGNED_SECTION ".data..percpu"
#else
-#define SHARED_ALIGNED_SECTION ".data.percpu.shared_aligned"
+#define SHARED_ALIGNED_SECTION ".data..percpu.shared_aligned"
#endif

#define DEFINE_PER_CPU_SHARED_ALIGNED(type, name) \
@@ -24,8 +24,8 @@
PER_CPU_ATTRIBUTES __typeof__(type) per_cpu__##name \
____cacheline_aligned_in_smp

-#define DEFINE_PER_CPU_PAGE_ALIGNED(type, name) \
- __attribute__((__section__(".data.percpu.page_aligned"))) \
+#define DEFINE_PER_CPU_PAGE_ALIGNED(type, name) \
+ __attribute__((__section__(".data..percpu.page_aligned"))) \
PER_CPU_ATTRIBUTES __typeof__(type) per_cpu__##name
#else
#define DEFINE_PER_CPU(type, name) \
diff --git a/include/linux/spinlock.h b/include/linux/spinlock.h
index a0c66a2..710c825 100644
--- a/include/linux/spinlock.h
+++ b/include/linux/spinlock.h
@@ -60,7 +60,7 @@
/*
* Must define these before including other files, inline functions need them
*/
-#define LOCK_SECTION_NAME ".text.lock."KBUILD_BASENAME
+#define LOCK_SECTION_NAME ".text..lock."KBUILD_BASENAME

#define LOCK_SECTION_START(extra) \
".subsection 1\n\t" \
diff --git a/kernel/module.c b/kernel/module.c
index 922d177..e7b39ac 100644
--- a/kernel/module.c
+++ b/kernel/module.c
@@ -475,7 +475,7 @@ static unsigned int find_pcpusec(Elf_Ehdr *hdr,
Elf_Shdr *sechdrs,
const char *secstrings)
{
- return find_sec(hdr, sechdrs, secstrings, ".data.percpu");
+ return find_sec(hdr, sechdrs, secstrings, ".data..percpu");
}

static void percpu_modcopy(void *pcpudest, const void *from, unsigned long size)
diff --git a/scripts/mod/modpost.c b/scripts/mod/modpost.c
index 8892161..a7e282e 100644
--- a/scripts/mod/modpost.c
+++ b/scripts/mod/modpost.c
@@ -794,9 +794,9 @@ static const char *data_sections[] = { DATA_SECTIONS, NULL };
/* sections that may refer to an init/exit section with no warning */
static const char *initref_sections[] =
{
- ".text.init.refok*",
- ".exit.text.refok*",
- ".data.init.refok*",
+ ".text..init.refok*",
+ ".text..exit.refok*",
+ ".data..init.refok*",
NULL
};

@@ -915,7 +915,7 @@ static int section_mismatch(const char *fromsec, const char *tosec)
* Pattern 0:
* Do not warn if funtion/data are marked with __init_refok/__initdata_refok.
* The pattern is identified by:
- * fromsec = .text.init.refok* | .data.init.refok*
+ * fromsec = .text..init.refok* | .data..init.refok*
*
* Pattern 1:
* If a module parameter is declared __initdata and permissions=0
@@ -939,8 +939,8 @@ static int section_mismatch(const char *fromsec, const char *tosec)
* *probe_one, *_console, *_timer
*
* Pattern 3:
- * Whitelist all refereces from .text.head to .init.data
- * Whitelist all refereces from .text.head to .init.text
+ * Whitelist all references from .text..head to .init.data
+ * Whitelist all references from .text..head to .init.text
*
* Pattern 4:
* Some symbols belong to init section but still it is ok to reference
diff --git a/scripts/recordmcount.pl b/scripts/recordmcount.pl
index fe83141..da33ef9 100755
--- a/scripts/recordmcount.pl
+++ b/scripts/recordmcount.pl
@@ -26,7 +26,7 @@
# which will also be the location of that section after final link.
# e.g.
#
-# .section ".text.sched"
+# .section ".text..sched"
# .globl my_func
# my_func:
# [...]
@@ -39,7 +39,7 @@
# [...]
#
# Both relocation offsets for the mcounts in the above example will be
-# offset from .text.sched. If we make another file called tmp.s with:
+# offset from .text..sched. If we make another file called tmp.s with:
#
# .section __mcount_loc
# .quad my_func + 0x5
@@ -51,7 +51,7 @@
# But this gets hard if my_func is not globl (a static function).
# In such a case we have:
#
-# .section ".text.sched"
+# .section ".text..sched"
# my_func:
# [...]
# call mcount (offset: 0x5)
--
1.6.2

2009-03-24 06:07:29

by Stephen Rothwell

[permalink] [raw]
Subject: Re: [PATCH 0/4] Add support for compiling with -ffunction-sections -fdata-sections

Hi Tim,

On Tue, 24 Mar 2009 01:28:41 -0400 Tim Abbott <[email protected]> wrote:
>
> I'd like to see this patch series merged for 2.6.30. It applies on
> top of your current master (aka v2.6.29). Patch 1/4 would benefit
> from special treatment during the merge window, since it makes many
> small changes in lots of files, and thus is likely to conflict with
> other changes; the other patches are fairly small.

Indeed. I just did a merge of your changes with next-20090323 and it
conflicted with changes in 10 files across 4 architectures. There has
been work on several areas that you are changing here (some of which will
make your life easier - like the percpu changes). I suspect that at
least some of patch 1/4 could have been split out and sent to the
appropriate maintainers.

As it is, it looks like this will need to be rebased on top of (say)
2.6.30-rc1 (at least) in order not to inflict too much pain on others.

--
Cheers,
Stephen Rothwell [email protected]
http://www.canb.auug.org.au/~sfr/



2009-03-24 07:26:46

by Tim Abbott

[permalink] [raw]
Subject: Re: [PATCH 0/4] Add support for compiling with -ffunction-sections -fdata-sections

On Tue, 24 Mar 2009, Stephen Rothwell wrote:

> On Tue, 24 Mar 2009 01:28:41 -0400 Tim Abbott <[email protected]> wrote:
> >
> > I'd like to see this patch series merged for 2.6.30. It applies on
> > top of your current master (aka v2.6.29). Patch 1/4 would benefit
> > from special treatment during the merge window, since it makes many
> > small changes in lots of files, and thus is likely to conflict with
> > other changes; the other patches are fairly small.
>
> Indeed. I just did a merge of your changes with next-20090323 and it
> conflicted with changes in 10 files across 4 architectures. There has
> been work on several areas that you are changing here (some of which will
> make your life easier - like the percpu changes).

Yeah, unfortunately, because people are always modifying the kernel's
"magic" sections, a version of the patch that applies to master will
basically always conflict with something in linux-next (at least, this has
been my experience updating it during the 2.6.29 release cycle). As Rusty
Russell said last month about this patch, there's no good time for this
kind of change.

I'm happy to rebase this patch again to merge it late in the merge window
if that makes life easier for others. Merge conflicts with this patch are
hard to avoid but fairly easy to resolve -- you just replace .data.foo
with .data..foo (etc.).
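
For conflicts of that shape, the fixup can even be scripted. Something
like the following one-liner (an illustrative sketch, not part of the
series; the file argument is just an example) doubles the first dot in
the kernel's own .data.*/.text.*/.bss.* names while leaving
already-converted names alone:

  perl -pi -e 's/\.(data|text|bss)\.(?!\.)/.$1../g' arch/arm/kernel/vmlinux.lds.S

The result still wants a manual pass, though: gcc-generated names such
as .text.hot keep their single dot (the pattern would wrongly double
it), and .exit.text.refok becomes .text..exit.refok, which a
substitution of this shape cannot produce.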

> I suspect that at least some of patch 1/4 could have been split out and
> sent to the appropriate maintainers.

Because the change is actually renaming sections, it needs to be made on
all the architectures at the same time as changes are made to the
architecture-independent code. So, changes in, say, include/linux/init.h:

-#define __nosavedata __section(.data.nosave)
+#define __nosavedata __section(.data..nosave)

must be synchronized with, say, arch/arm/kernel/vmlinux.lds.S:

. = ALIGN(4096);
__nosave_begin = .;
- *(.data.nosave)
+ *(.data..nosave)

So, while it might have been useful to split out and send individual
per-architecture patches to arch maintainers for review, that would not
avoid the need to merge it all as a single giant patch: if the two
halves ever diverge, the attribute puts data in a section the linker
script no longer collects, so markers like __nosave_begin/__nosave_end
silently bracket nothing.

-Tim Abbott

2009-03-24 14:47:56

by H. Peter Anvin

[permalink] [raw]
Subject: Re: [PATCH 0/4] Add support for compiling with -ffunction-sections -fdata-sections

Tim Abbott wrote:
>
> Yeah, unfortunately, because people are always modifying the kernel's
> "magic" sections, a version of the patch that applies to master will
> basically always conflict with something in linux-next (at least, this has
> been my experience updating it during the 2.6.29 release cycle). As Rusty
> Russell said last month about this patch, there's no good time for this
> kind of change.
>

"Just before -rc1" is pretty much the only time to do this kind of
change. It's not the first time we have had this issue.

-hpa

2009-03-27 01:44:20

by Tim Abbott

[permalink] [raw]
Subject: Re: [PATCH 0/4] Add support for compiling with -ffunction-sections -fdata-sections

On Tue, 24 Mar 2009, H. Peter Anvin wrote:

> "Just before -rc1" is pretty much the only time to do this kind of changes.
> It's not the first time we have had this issue.

OK. Merging this patch just before -rc1 would be fine with me; I'll keep
the patch series updated as the merge window proceeds.

How should I help this patch be merged "just before -rc1"? In particular,
how will I know when it's the right time to send the updated patch?

-Tim Abbott