2009-05-01 00:14:30

by Tim Abbott

Subject: [PATCH 00/11] section name cleanup for x86

This patch series cleans up the section names on the x86
architecture. It requires the architecture-independent macro
definitions from this patch series:

<http://www.spinics.net/lists/mips/msg33499.html>

The long-term goal here is to add support for building the kernel with
-ffunction-sections -fdata-sections. This requires renaming all the
magic section names in the kernel of the form .text.foo, .data.foo,
.bss.foo, and .rodata.foo so that they do not collide with the sections
the compiler generates for code like:

static int nosave = 0; /* -fdata-sections places in .data.nosave */
static void head(); /* -ffunction-sections places in .text.head */
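
The prerequisite series replaces open-coded section names with helper
macros on both the C side and the linker-script side. A rough sketch of
the idea, using the page-aligned data pair as an example (the exact
definitions live in the series linked above; the following is an
assumption about their shape, not a quote):

/* C side: tag an object instead of open-coding the section name */
#define __page_aligned_data \
	__attribute__((__section__(".data.page_aligned"))) __aligned(PAGE_SIZE)

/* linker-script side: one macro pulls those objects into the .data
 * output section, so a later rename only has to touch the macro
 * definitions rather than every architecture's linker script */
#define PAGE_ALIGNED_DATA \
	. = ALIGN(PAGE_SIZE); \
	*(.data.page_aligned)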

These patches are on top of the x86/kbuild branch of linux-tip.

-Tim Abbott


Anders Kaseorg (1):
x86: fix fragile computation of vsyscall address

Tim Abbott (10):
x86: Use macros for .bss.page_aligned section.
x86: Use section .data.page_aligned for the idt_table.
x86: Use macros for .data.page_aligned.
x86: convert compressed loader to use __HEAD and HEAD_TEXT macros.
x86: convert to use __HEAD and HEAD_TEXT macros.
x86: use NOSAVE_DATA macro for .data.nosave section.
x86: use new macro for .data.cacheline_aligned section.
x86: use new macros for .data.init_task.
x86: use new macro for .data.read_mostly section.
x86: convert to new generic read_mostly support.

arch/x86/Kconfig | 3 +
arch/x86/boot/compressed/head_32.S | 3 +-
arch/x86/boot/compressed/head_64.S | 3 +-
arch/x86/boot/compressed/vmlinux.lds.S | 6 +-
arch/x86/include/asm/cache.h | 2 -
arch/x86/kernel/head_32.S | 6 +-
arch/x86/kernel/head_64.S | 4 +-
arch/x86/kernel/init_task.c | 3 +-
arch/x86/kernel/traps.c | 6 +-
arch/x86/kernel/vmlinux.lds.S | 119 ++++++++------------------------
10 files changed, 49 insertions(+), 106 deletions(-)


2009-05-01 00:11:45

by Tim Abbott

Subject: [PATCH 07/11] x86: use new macro for .data.cacheline_aligned section.

.data.cacheline_aligned should not need a separate output section;
this change moves it into the .data section.

This removes the extra ALIGN(PAGE_SIZE) before the cache-aligned data
on x86_64, since I don't see a purpose for it.
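
CACHELINE_ALIGNED_DATA() comes from the architecture-independent macro
series this patch set depends on; presumably it expands to something
along these lines, which is why the explicit ALIGN() lines can simply be
dropped from the x86 script (a sketch under that assumption, not the
authoritative definition):

#define CACHELINE_ALIGNED_DATA(align)		\
	. = ALIGN(align);			\
	*(.data.cacheline_aligned)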

Signed-off-by: Tim Abbott <[email protected]>
Cc: Ingo Molnar <[email protected]>
---
arch/x86/kernel/vmlinux.lds.S | 16 +++++-----------
1 files changed, 5 insertions(+), 11 deletions(-)

diff --git a/arch/x86/kernel/vmlinux.lds.S b/arch/x86/kernel/vmlinux.lds.S
index 88b059b..c7d54bf 100644
--- a/arch/x86/kernel/vmlinux.lds.S
+++ b/arch/x86/kernel/vmlinux.lds.S
@@ -104,6 +104,11 @@ SECTIONS
.data : AT(ADDR(.data) - LOAD_OFFSET) {
PAGE_ALIGNED_DATA
NOSAVE_DATA
+#ifdef CONFIG_X86_32
+ CACHELINE_ALIGNED_DATA(32)
+#else
+ CACHELINE_ALIGNED_DATA(CONFIG_X86_L1_CACHE_BYTES)
+#endif
DATA_DATA
CONSTRUCTORS

@@ -113,17 +118,6 @@ SECTIONS
#endif
} :data

-#ifdef CONFIG_X86_32
- . = ALIGN(32);
-#else
- . = ALIGN(PAGE_SIZE);
- . = ALIGN(CONFIG_X86_L1_CACHE_BYTES);
-#endif
- .data.cacheline_aligned :
- AT(ADDR(.data.cacheline_aligned) - LOAD_OFFSET) {
- *(.data.cacheline_aligned)
- }
-
/* rarely changed data like cpu maps */
#ifdef CONFIG_X86_32
. = ALIGN(32);
--
1.6.2.1

2009-05-01 00:12:14

by Tim Abbott

Subject: [PATCH 08/11] x86: use new macros for .data.init_task.

.data.init_task should not need a separate output section; this change
moves it into the .data section. This has the consequence of moving
the init_task data inside _edata, which I think should be OK.
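
INIT_TASK_DATA() is assumed to preserve the THREAD_SIZE alignment that
the old .data.init_task output section provided, roughly:

#define INIT_TASK_DATA(align)			\
	. = ALIGN(align);			\
	*(.data.init_task)

/* matching C-side annotation used in init_task.c below (assumed) */
#define __init_task_data \
	__attribute__((__section__(".data.init_task")))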

Signed-off-by: Tim Abbott <[email protected]>
Cc: Ingo Molnar <[email protected]>
---
arch/x86/kernel/init_task.c | 3 +--
arch/x86/kernel/vmlinux.lds.S | 13 ++++---------
2 files changed, 5 insertions(+), 11 deletions(-)

diff --git a/arch/x86/kernel/init_task.c b/arch/x86/kernel/init_task.c
index df3bf26..e0383af 100644
--- a/arch/x86/kernel/init_task.c
+++ b/arch/x86/kernel/init_task.c
@@ -21,8 +21,7 @@ struct mm_struct init_mm = INIT_MM(init_mm);
* way process stacks are handled. This is done by having a special
* "init_task" linker map entry..
*/
-union thread_union init_thread_union
- __attribute__((__section__(".data.init_task"))) =
+union thread_union init_thread_union __init_task_data =
{ INIT_THREAD_INFO(init_task) };

/*
diff --git a/arch/x86/kernel/vmlinux.lds.S b/arch/x86/kernel/vmlinux.lds.S
index c7d54bf..6b8a633 100644
--- a/arch/x86/kernel/vmlinux.lds.S
+++ b/arch/x86/kernel/vmlinux.lds.S
@@ -102,6 +102,7 @@ SECTIONS
/* Data */
. = ALIGN(PAGE_SIZE);
.data : AT(ADDR(.data) - LOAD_OFFSET) {
+ INIT_TASK_DATA(THREAD_SIZE)
PAGE_ALIGNED_DATA
NOSAVE_DATA
#ifdef CONFIG_X86_32
@@ -205,15 +206,6 @@ SECTIONS

#endif /* CONFIG_X86_64 */

- /* init_task */
- . = ALIGN(THREAD_SIZE);
- .data.init_task : AT(ADDR(.data.init_task) - LOAD_OFFSET) {
- *(.data.init_task)
- }
-#ifdef CONFIG_X86_64
- :data.init
-#endif
-
/*
* smp_locks might be freed after init
* start/end must be page aligned
@@ -225,6 +217,9 @@ SECTIONS
__smp_locks_end = .;
. = ALIGN(PAGE_SIZE);
}
+#ifdef CONFIG_X86_64
+ :data.init
+#endif

/* Init code and data - will be freed after init */
. = ALIGN(PAGE_SIZE);
--
1.6.2.1

2009-05-01 00:12:37

by Tim Abbott

Subject: [PATCH 10/11] x86: use new macro for .data.read_mostly section.

.data.read_mostly should not need a separate output section; this
change moves it into the .data section.

After the .data.read_mostly unification, _edata ends up in the same
place on 32-bit and 64-bit, so the two definitions can be combined.
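
READ_MOSTLY_DATA() presumably follows the same pattern as
CACHELINE_ALIGNED_DATA() earlier in the series, with the alignment
argument carrying the vSMP internode cacheline size on 64-bit (sketch
only, assuming the definition in the prerequisite series):

#define READ_MOSTLY_DATA(align)			\
	. = ALIGN(align);			\
	*(.data.read_mostly)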

Signed-off-by: Tim Abbott <[email protected]>
Cc: Ingo Molnar <[email protected]>
---
arch/x86/kernel/vmlinux.lds.S | 19 ++-----------------
1 files changed, 2 insertions(+), 17 deletions(-)

diff --git a/arch/x86/kernel/vmlinux.lds.S b/arch/x86/kernel/vmlinux.lds.S
index 296b49c..7235ee5 100644
--- a/arch/x86/kernel/vmlinux.lds.S
+++ b/arch/x86/kernel/vmlinux.lds.S
@@ -107,33 +107,18 @@ SECTIONS
NOSAVE_DATA
#ifdef CONFIG_X86_32
CACHELINE_ALIGNED_DATA(32)
+ READ_MOSTLY_DATA(32)
#else
CACHELINE_ALIGNED_DATA(CONFIG_X86_L1_CACHE_BYTES)
+ READ_MOSTLY_DATA(CONFIG_X86_INTERNODE_CACHE_BYTES)
#endif
DATA_DATA
CONSTRUCTORS

-#ifdef CONFIG_X86_64
/* End of data section */
_edata = .;
-#endif
} :data

- /* rarely changed data like cpu maps */
-#ifdef CONFIG_X86_32
- . = ALIGN(32);
-#else
- . = ALIGN(CONFIG_X86_INTERNODE_CACHE_BYTES);
-#endif
- .data.read_mostly : AT(ADDR(.data.read_mostly) - LOAD_OFFSET) {
- *(.data.read_mostly)
-
-#ifdef CONFIG_X86_32
- /* End of data section */
- _edata = .;
-#endif
- }
-
#ifdef CONFIG_X86_64

#define VSYSCALL_ADDR (-10*1024*1024)
--
1.6.2.1

2009-05-01 00:12:58

by Tim Abbott

Subject: [PATCH 02/11] x86: Use section .data.page_aligned for the idt_table.

The .data.idt input section is just squashed into the .data.page_aligned
output section by the linker script anyway, so the idt_table might as
well be placed in .data.page_aligned directly.

This eliminates all references to .data.idt on x86.

Signed-off-by: Tim Abbott <[email protected]>
Cc: Ingo Molnar <[email protected]>
---
arch/x86/kernel/traps.c | 6 ++----
arch/x86/kernel/vmlinux.lds.S | 1 -
2 files changed, 2 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
index a1d2883..d7affb7 100644
--- a/arch/x86/kernel/traps.c
+++ b/arch/x86/kernel/traps.c
@@ -73,11 +73,9 @@ char ignore_fpu_irq;

/*
* The IDT has to be page-aligned to simplify the Pentium
- * F0 0F bug workaround.. We have a special link segment
- * for this.
+ * F0 0F bug workaround.
*/
-gate_desc idt_table[256]
- __attribute__((__section__(".data.idt"))) = { { { { 0, 0 } } }, };
+gate_desc idt_table[256] __page_aligned_data = { { { { 0, 0 } } }, };
#endif

DECLARE_BITMAP(used_vectors, NR_VECTORS);
diff --git a/arch/x86/kernel/vmlinux.lds.S b/arch/x86/kernel/vmlinux.lds.S
index 10edb97..037df73 100644
--- a/arch/x86/kernel/vmlinux.lds.S
+++ b/arch/x86/kernel/vmlinux.lds.S
@@ -131,7 +131,6 @@ SECTIONS
. = ALIGN(PAGE_SIZE);
.data.page_aligned : AT(ADDR(.data.page_aligned) - LOAD_OFFSET) {
*(.data.page_aligned)
- *(.data.idt)
}

#ifdef CONFIG_X86_32
--
1.6.2.1

2009-05-01 00:13:33

by Tim Abbott

Subject: [PATCH 01/11] x86: Use macros for .bss.page_aligned section.

Signed-off-by: Tim Abbott <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: H. Peter Anvin <[email protected]>
---
arch/x86/kernel/head_32.S | 2 +-
arch/x86/kernel/head_64.S | 2 +-
arch/x86/kernel/vmlinux.lds.S | 4 ++--
3 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kernel/head_32.S b/arch/x86/kernel/head_32.S
index dc5ed4b..a420002 100644
--- a/arch/x86/kernel/head_32.S
+++ b/arch/x86/kernel/head_32.S
@@ -611,7 +611,7 @@ ENTRY(initial_code)
/*
* BSS section
*/
-.section ".bss.page_aligned","wa"
+__PAGE_ALIGNED_BSS
.align PAGE_SIZE_asm
#ifdef CONFIG_X86_PAE
swapper_pg_pmd:
diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S
index 54b29bb..ae9a453 100644
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -419,7 +419,7 @@ ENTRY(phys_base)
ENTRY(idt_table)
.skip IDT_ENTRIES * 16

- .section .bss.page_aligned, "aw", @nobits
+ __PAGE_ALIGNED_BSS
.align PAGE_SIZE
ENTRY(empty_zero_page)
.skip PAGE_SIZE
diff --git a/arch/x86/kernel/vmlinux.lds.S b/arch/x86/kernel/vmlinux.lds.S
index 4c85b2e..10edb97 100644
--- a/arch/x86/kernel/vmlinux.lds.S
+++ b/arch/x86/kernel/vmlinux.lds.S
@@ -366,8 +366,8 @@ SECTIONS
/* BSS */
. = ALIGN(PAGE_SIZE);
.bss : AT(ADDR(.bss) - LOAD_OFFSET) {
- __bss_start = .;
- *(.bss.page_aligned)
+ __bss_start = .; /* BSS */
+ PAGE_ALIGNED_BSS
*(.bss)
. = ALIGN(4);
__bss_stop = .;
--
1.6.2.1

2009-05-01 00:13:57

by Tim Abbott

Subject: [PATCH 06/11] x86: use NOSAVE_DATA macro for .data.nosave section.

.data.nosave should not need a separate output section; this change
moves it into the .data section.

On x86_64, this has the consequence of moving .data.nosave inside
_edata, which I think should be harmless.
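
NOSAVE_DATA is expected to supply the page alignment and the
__nosave_begin/__nosave_end markers that the hibernation code needs, so
those symbols no longer have to be spelled out in the x86 script.
Roughly, a sketch matching the lines removed below:

#define NOSAVE_DATA				\
	. = ALIGN(PAGE_SIZE);			\
	__nosave_begin = .;			\
	*(.data.nosave)				\
	. = ALIGN(PAGE_SIZE);			\
	__nosave_end = .;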

Signed-off-by: Tim Abbott <[email protected]>
Cc: Ingo Molnar <[email protected]>
---
arch/x86/kernel/vmlinux.lds.S | 29 ++++++-----------------------
1 files changed, 6 insertions(+), 23 deletions(-)

diff --git a/arch/x86/kernel/vmlinux.lds.S b/arch/x86/kernel/vmlinux.lds.S
index 791afa7..88b059b 100644
--- a/arch/x86/kernel/vmlinux.lds.S
+++ b/arch/x86/kernel/vmlinux.lds.S
@@ -103,6 +103,7 @@ SECTIONS
. = ALIGN(PAGE_SIZE);
.data : AT(ADDR(.data) - LOAD_OFFSET) {
PAGE_ALIGNED_DATA
+ NOSAVE_DATA
DATA_DATA
CONSTRUCTORS

@@ -113,17 +114,6 @@ SECTIONS
} :data

#ifdef CONFIG_X86_32
- /* 32 bit has nosave before _edata */
- . = ALIGN(PAGE_SIZE);
- .data_nosave : AT(ADDR(.data_nosave) - LOAD_OFFSET) {
- __nosave_begin = .;
- *(.data.nosave)
- . = ALIGN(PAGE_SIZE);
- __nosave_end = .;
- }
-#endif
-
-#ifdef CONFIG_X86_32
. = ALIGN(32);
#else
. = ALIGN(PAGE_SIZE);
@@ -323,7 +313,7 @@ SECTIONS
#if defined(CONFIG_X86_64) && defined(CONFIG_SMP)
/*
* percpu offsets are zero-based on SMP. PERCPU_VADDR() changes the
- * output PHDR, so the next output section - __data_nosave - should
+ * output PHDR, so the next output section - .bss - should
* start another section data.init2. Also, pda should be at the head of
* percpu area. Preallocate it and define the percpu offset symbol
* so that it can be accessed as a percpu variable.
@@ -341,17 +331,6 @@ SECTIONS
__init_end = .;
}

-#ifdef CONFIG_X86_64
- .data_nosave : AT(ADDR(.data_nosave) - LOAD_OFFSET) {
- . = ALIGN(PAGE_SIZE);
- __nosave_begin = .;
- *(.data.nosave)
- . = ALIGN(PAGE_SIZE);
- __nosave_end = .;
- } :data.init2
- /* use another section data.init2, see PERCPU_VADDR() above */
-#endif
-
/* BSS */
. = ALIGN(PAGE_SIZE);
.bss : AT(ADDR(.bss) - LOAD_OFFSET) {
@@ -361,6 +340,10 @@ SECTIONS
. = ALIGN(4);
__bss_stop = .;
}
+#ifdef CONFIG_X86_64
+ :data.init2
+#endif
+ /* use another section data.init2, see PERCPU_VADDR() above */

. = ALIGN(PAGE_SIZE);
.brk : AT(ADDR(.brk) - LOAD_OFFSET) {
--
1.6.2.1

2009-05-01 00:14:51

by Tim Abbott

Subject: [PATCH 03/11] x86: Use macros for .data.page_aligned.

.data.page_aligned should not need a separate output section, so as
part of this cleanup I moved it into the .data output section in the
linker script, eliminating unnecessary references to the section name.

Signed-off-by: Tim Abbott <[email protected]>
Cc: Ingo Molnar <[email protected]>
---
arch/x86/kernel/head_32.S | 2 +-
arch/x86/kernel/vmlinux.lds.S | 6 +-----
2 files changed, 2 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kernel/head_32.S b/arch/x86/kernel/head_32.S
index a420002..d251f71 100644
--- a/arch/x86/kernel/head_32.S
+++ b/arch/x86/kernel/head_32.S
@@ -629,7 +629,7 @@ ENTRY(empty_zero_page)
* This starts the data section.
*/
#ifdef CONFIG_X86_PAE
-.section ".data.page_aligned","wa"
+__PAGE_ALIGNED_DATA
/* Page-aligned for the benefit of paravirt? */
.align PAGE_SIZE_asm
ENTRY(swapper_pg_dir)
diff --git a/arch/x86/kernel/vmlinux.lds.S b/arch/x86/kernel/vmlinux.lds.S
index 037df73..6f73355 100644
--- a/arch/x86/kernel/vmlinux.lds.S
+++ b/arch/x86/kernel/vmlinux.lds.S
@@ -108,6 +108,7 @@ SECTIONS
/* Data */
. = ALIGN(PAGE_SIZE);
.data : AT(ADDR(.data) - LOAD_OFFSET) {
+ PAGE_ALIGNED_DATA
DATA_DATA
CONSTRUCTORS

@@ -128,11 +129,6 @@ SECTIONS
}
#endif

- . = ALIGN(PAGE_SIZE);
- .data.page_aligned : AT(ADDR(.data.page_aligned) - LOAD_OFFSET) {
- *(.data.page_aligned)
- }
-
#ifdef CONFIG_X86_32
. = ALIGN(32);
#else
--
1.6.2.1

2009-05-01 00:15:53

by Tim Abbott

Subject: [PATCH 11/11] x86: convert to new generic read_mostly support.

Signed-off-by: Tim Abbott <[email protected]>
Cc: Ingo Molnar <[email protected]>
---
arch/x86/Kconfig | 3 +++
arch/x86/include/asm/cache.h | 2 --
2 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index bfff845..f12d0db 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -52,6 +52,9 @@ config OUTPUT_FORMAT
default "elf32-i386" if X86_32
default "elf64-x86-64" if X86_64

+config HAVE_READ_MOSTLY_DATA
+ def_bool y
+
config ARCH_DEFCONFIG
string
default "arch/x86/configs/i386_defconfig" if X86_32
diff --git a/arch/x86/include/asm/cache.h b/arch/x86/include/asm/cache.h
index 5d367ca..16c1e65 100644
--- a/arch/x86/include/asm/cache.h
+++ b/arch/x86/include/asm/cache.h
@@ -5,8 +5,6 @@
#define L1_CACHE_SHIFT (CONFIG_X86_L1_CACHE_SHIFT)
#define L1_CACHE_BYTES (1 << L1_CACHE_SHIFT)

-#define __read_mostly __attribute__((__section__(".data.read_mostly")))
-
#ifdef CONFIG_X86_VSMP
/* vSMP Internode cacheline shift */
#define INTERNODE_CACHE_SHIFT (12)
--
1.6.2.1

2009-05-01 00:16:49

by Tim Abbott

Subject: [PATCH 05/11] x86: convert to use __HEAD and HEAD_TEXT macros.

This has the consequence of changing the section name used for head
code from ".text.head" to ".head.text". It also eliminates the
".text.head" output section (instead placing head code at the start of
the .text output section), which should be harmless.

This patch only changes the sections in the actual kernel, not those
in the compressed boot loader.
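
__HEAD and HEAD_TEXT also come from the prerequisite series; they are
assumed to be simple wrappers around the new section name, roughly:

/* used in .S files to start the head section */
#define __HEAD		.section	".head.text","ax"

/* used in linker scripts to collect it */
#define HEAD_TEXT	*(.head.text)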

Signed-off-by: Tim Abbott <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Sam Ravnborg <[email protected]>
---
arch/x86/kernel/head_32.S | 2 +-
arch/x86/kernel/head_64.S | 2 +-
arch/x86/kernel/vmlinux.lds.S | 12 +++---------
3 files changed, 5 insertions(+), 11 deletions(-)

diff --git a/arch/x86/kernel/head_32.S b/arch/x86/kernel/head_32.S
index d251f71..9b7920d 100644
--- a/arch/x86/kernel/head_32.S
+++ b/arch/x86/kernel/head_32.S
@@ -80,7 +80,7 @@ RESERVE_BRK(pagetables, INIT_MAP_SIZE)
* any particular GDT layout, because we load our own as soon as we
* can.
*/
-.section .text.head,"ax",@progbits
+__HEAD
ENTRY(startup_32)
/* test KEEP_SEGMENTS flag to see if the bootloader is asking
us to not reload segments */
diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S
index ae9a453..7cd4956 100644
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -41,7 +41,7 @@ L4_START_KERNEL = pgd_index(__START_KERNEL_map)
L3_START_KERNEL = pud_index(__START_KERNEL_map)

.text
- .section .text.head
+ __HEAD
.code64
.globl startup_64
startup_64:
diff --git a/arch/x86/kernel/vmlinux.lds.S b/arch/x86/kernel/vmlinux.lds.S
index 6f73355..791afa7 100644
--- a/arch/x86/kernel/vmlinux.lds.S
+++ b/arch/x86/kernel/vmlinux.lds.S
@@ -66,17 +66,11 @@ SECTIONS
#endif

/* Text and read-only data */
-
- /* bootstrapping code */
- .text.head : AT(ADDR(.text.head) - LOAD_OFFSET) {
- _text = .;
- *(.text.head)
- } :text = 0x9090
-
- /* The rest of the text */
.text : AT(ADDR(.text) - LOAD_OFFSET) {
+ _text = .;
+ /* bootstrapping code */
+ HEAD_TEXT
#ifdef CONFIG_X86_32
- /* not really needed, already page aligned */
. = ALIGN(PAGE_SIZE);
*(.text.page_aligned)
#endif
--
1.6.2.1

2009-05-01 00:16:27

by Tim Abbott

Subject: [PATCH 04/11] x86: convert compressed loader to use __HEAD and HEAD_TEXT macros.

This has the consequence of changing the section name used for head
code from ".text.head" to ".head.text".

Linus suggested that we merge the ".text.head" section with ".text"
(presumably while preserving the fact that the head code starts at 0).
When I tried this, the kernel failed to boot.

Signed-off-by: Tim Abbott <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Sam Ravnborg <[email protected]>
Cc: Linus Torvalds <[email protected]>
---
arch/x86/boot/compressed/head_32.S | 3 ++-
arch/x86/boot/compressed/head_64.S | 3 ++-
arch/x86/boot/compressed/vmlinux.lds.S | 6 ++++--
3 files changed, 8 insertions(+), 4 deletions(-)

diff --git a/arch/x86/boot/compressed/head_32.S b/arch/x86/boot/compressed/head_32.S
index 85bd328..83df56f 100644
--- a/arch/x86/boot/compressed/head_32.S
+++ b/arch/x86/boot/compressed/head_32.S
@@ -23,13 +23,14 @@
*/
.text

+#include <linux/init.h>
#include <linux/linkage.h>
#include <asm/segment.h>
#include <asm/page_types.h>
#include <asm/boot.h>
#include <asm/asm-offsets.h>

-.section ".text.head","ax",@progbits
+__HEAD
ENTRY(startup_32)
cld
/* test KEEP_SEGMENTS flag to see if the bootloader is asking
diff --git a/arch/x86/boot/compressed/head_64.S b/arch/x86/boot/compressed/head_64.S
index ed4a829..a788a91 100644
--- a/arch/x86/boot/compressed/head_64.S
+++ b/arch/x86/boot/compressed/head_64.S
@@ -24,6 +24,7 @@
.code32
.text

+#include <linux/init.h>
#include <linux/linkage.h>
#include <asm/segment.h>
#include <asm/pgtable_types.h>
@@ -33,7 +34,7 @@
#include <asm/processor-flags.h>
#include <asm/asm-offsets.h>

-.section ".text.head"
+__HEAD
.code32
ENTRY(startup_32)
cld
diff --git a/arch/x86/boot/compressed/vmlinux.lds.S b/arch/x86/boot/compressed/vmlinux.lds.S
index ffcb191..093fb4f 100644
--- a/arch/x86/boot/compressed/vmlinux.lds.S
+++ b/arch/x86/boot/compressed/vmlinux.lds.S
@@ -1,3 +1,5 @@
+#include <asm-generic/vmlinux.lds.h>
+
OUTPUT_FORMAT(CONFIG_OUTPUT_FORMAT, CONFIG_OUTPUT_FORMAT, CONFIG_OUTPUT_FORMAT)

#ifdef CONFIG_X86_64
@@ -14,9 +16,9 @@ SECTIONS
* address 0.
*/
. = 0;
- .text.head : {
+ .head.text : {
_head = . ;
- *(.text.head)
+ HEAD_TEXT
_ehead = . ;
}
.rodata.compressed : {
--
1.6.2.1

2009-05-01 00:32:20

by Tim Abbott

Subject: [PATCH 09/11] x86: fix fragile computation of vsyscall address

From: Anders Kaseorg <[email protected]>

Previously, the address of the vsyscall page (VSYSCALL_PHYS_ADDR,
VSYSCALL_VIRT_ADDR) was computed by arithmetic on the address and size
of the preceding output section. This leads to bugs when new sections
are inserted before it, such as the one fixed by commit
d312ceda567ab91acd756cde95ac5fbc6b40ed40. Let's compute it from the
current address instead.
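
As a quick sanity check of the new scheme (not part of the patch),
substituting the definitions shows that the load address no longer
depends on which output section happens to come last:

/*
 *   VLOAD(x) = ADDR(x) - VLOAD_OFFSET
 *            = ADDR(x) - VSYSCALL_ADDR + __vsyscall_0 - LOAD_OFFSET
 *
 * For the first vsyscall section, ADDR(.vsyscall_0) == VSYSCALL_ADDR, so
 *
 *   VLOAD(.vsyscall_0) == __vsyscall_0 - LOAD_OFFSET
 *
 * i.e. the load address continues right after the page-aligned point
 * where __vsyscall_0 was recorded, whatever output section precedes it.
 */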

Signed-off-by: Anders Kaseorg <[email protected]>
---
arch/x86/kernel/vmlinux.lds.S | 19 +++++++------------
1 files changed, 7 insertions(+), 12 deletions(-)

diff --git a/arch/x86/kernel/vmlinux.lds.S b/arch/x86/kernel/vmlinux.lds.S
index 6b8a633..296b49c 100644
--- a/arch/x86/kernel/vmlinux.lds.S
+++ b/arch/x86/kernel/vmlinux.lds.S
@@ -137,24 +137,21 @@ SECTIONS
#ifdef CONFIG_X86_64

#define VSYSCALL_ADDR (-10*1024*1024)
-#define VSYSCALL_PHYS_ADDR ((LOADADDR(.data.read_mostly) + \
- SIZEOF(.data.read_mostly) + 4095) & ~(4095))
-#define VSYSCALL_VIRT_ADDR ((ADDR(.data.read_mostly) + \
- SIZEOF(.data.read_mostly) + 4095) & ~(4095))

-#define VLOAD_OFFSET (VSYSCALL_ADDR - VSYSCALL_PHYS_ADDR)
+#define VLOAD_OFFSET (VSYSCALL_ADDR - __vsyscall_0 + LOAD_OFFSET)
#define VLOAD(x) (ADDR(x) - VLOAD_OFFSET)

-#define VVIRT_OFFSET (VSYSCALL_ADDR - VSYSCALL_VIRT_ADDR)
+#define VVIRT_OFFSET (VSYSCALL_ADDR - __vsyscall_0)
#define VVIRT(x) (ADDR(x) - VVIRT_OFFSET)

+ . = ALIGN(4096);
+ __vsyscall_0 = .;
+
. = VSYSCALL_ADDR;
- .vsyscall_0 : AT(VSYSCALL_PHYS_ADDR) {
+ .vsyscall_0 : AT(VLOAD(.vsyscall_0)) {
*(.vsyscall_0)
} :user

- __vsyscall_0 = VSYSCALL_VIRT_ADDR;
-
. = ALIGN(CONFIG_X86_L1_CACHE_BYTES);
.vsyscall_fn : AT(VLOAD(.vsyscall_fn)) {
*(.vsyscall_fn)
@@ -194,11 +191,9 @@ SECTIONS
*(.vsyscall_3)
}

- . = VSYSCALL_VIRT_ADDR + PAGE_SIZE;
+ . = __vsyscall_0 + PAGE_SIZE;

#undef VSYSCALL_ADDR
-#undef VSYSCALL_PHYS_ADDR
-#undef VSYSCALL_VIRT_ADDR
#undef VLOAD_OFFSET
#undef VLOAD
#undef VVIRT_OFFSET
--
1.6.2.1