2009-05-01 00:07:58

by Tim Abbott

Subject: [PATCH 0/7] section name cleanup for sh

This patch series cleans up the section names on the sh
architecture. It depends on the architecture-independent macro
definitions from this earlier series:

<http://www.spinics.net/lists/mips/msg33499.html>

The long-term goal here is to add support for building the kernel with
-ffunction-sections -fdata-sections. This requires renaming all the
magic section names in the kernel of the form .text.foo, .data.foo,
.bss.foo, and .rodata.foo so that they do not collide with the
sections the compiler generates for code like:

static int nosave = 0;      /* -fdata-sections places this in .data.nosave */
static void head(void) { }  /* -ffunction-sections places this in .text.head */

Note that these patches have not been boot-tested (aside from testing
the analogous changes on x86), since I don't have access to the
appropriate hardware.

-Tim Abbott


Tim Abbott (7):
sh: Use macros for .bss.page_aligned section.
sh: Use macros for .data.page_aligned section.
sh: use NOSAVE_DATA macro for .data.nosave section.
sh: use new macro for .data.cacheline_aligned section.
sh: use new macros for .data.init_task.
sh: use new macro for .data.read_mostly section.
sh: convert to new generic read_mostly support.

arch/sh/Kconfig | 3 +++
arch/sh/include/asm/cache.h | 2 --
arch/sh/kernel/init_task.c | 3 +--
arch/sh/kernel/irq.c | 6 ++----
arch/sh/kernel/vmlinux_32.lds.S | 24 ++++++------------------
arch/sh/kernel/vmlinux_64.lds.S | 24 ++++++------------------
6 files changed, 18 insertions(+), 44 deletions(-)


2009-05-01 00:17:40

by Tim Abbott

Subject: [PATCH 2/7] sh: Use macros for .data.page_aligned section.

Signed-off-by: Tim Abbott <[email protected]>
Acked-by: Paul Mundt <[email protected]>
---
arch/sh/kernel/vmlinux_32.lds.S | 3 +--
arch/sh/kernel/vmlinux_64.lds.S | 3 +--
2 files changed, 2 insertions(+), 4 deletions(-)

diff --git a/arch/sh/kernel/vmlinux_32.lds.S b/arch/sh/kernel/vmlinux_32.lds.S
index 99a4124..325af0b 100644
--- a/arch/sh/kernel/vmlinux_32.lds.S
+++ b/arch/sh/kernel/vmlinux_32.lds.S
@@ -69,8 +69,7 @@ SECTIONS
. = ALIGN(L1_CACHE_BYTES);
*(.data.read_mostly)

- . = ALIGN(PAGE_SIZE);
- *(.data.page_aligned)
+ PAGE_ALIGNED_DATA

__nosave_begin = .;
*(.data.nosave)
diff --git a/arch/sh/kernel/vmlinux_64.lds.S b/arch/sh/kernel/vmlinux_64.lds.S
index cb46577..b222700 100644
--- a/arch/sh/kernel/vmlinux_64.lds.S
+++ b/arch/sh/kernel/vmlinux_64.lds.S
@@ -78,8 +78,7 @@ SECTIONS
. = ALIGN(L1_CACHE_BYTES);
*(.data.read_mostly)

- . = ALIGN(PAGE_SIZE);
- *(.data.page_aligned)
+ PAGE_ALIGNED_DATA

__nosave_begin = .;
*(.data.nosave)
--
1.6.2.1

2009-05-01 00:18:31

by Tim Abbott

Subject: [PATCH 1/7] sh: Use macros for .bss.page_aligned section.

Signed-off-by: Tim Abbott <[email protected]>
Acked-by: Paul Mundt <[email protected]>
---
arch/sh/kernel/irq.c | 6 ++----
arch/sh/kernel/vmlinux_32.lds.S | 2 +-
arch/sh/kernel/vmlinux_64.lds.S | 2 +-
3 files changed, 4 insertions(+), 6 deletions(-)

diff --git a/arch/sh/kernel/irq.c b/arch/sh/kernel/irq.c
index 3f1372e..9853fde 100644
--- a/arch/sh/kernel/irq.c
+++ b/arch/sh/kernel/irq.c
@@ -157,11 +157,9 @@ asmlinkage int do_IRQ(unsigned int irq, struct pt_regs *regs)
}

#ifdef CONFIG_IRQSTACKS
-static char softirq_stack[NR_CPUS * THREAD_SIZE]
- __attribute__((__section__(".bss.page_aligned")));
+static char softirq_stack[NR_CPUS * THREAD_SIZE] __page_aligned_bss;

-static char hardirq_stack[NR_CPUS * THREAD_SIZE]
- __attribute__((__section__(".bss.page_aligned")));
+static char hardirq_stack[NR_CPUS * THREAD_SIZE] __page_aligned_bss;

/*
* allocate per-cpu stacks for hardirq and for softirq processing
diff --git a/arch/sh/kernel/vmlinux_32.lds.S b/arch/sh/kernel/vmlinux_32.lds.S
index dd9b2ee..99a4124 100644
--- a/arch/sh/kernel/vmlinux_32.lds.S
+++ b/arch/sh/kernel/vmlinux_32.lds.S
@@ -131,7 +131,7 @@ SECTIONS
.bss : {
__init_end = .;
__bss_start = .; /* BSS */
- *(.bss.page_aligned)
+ PAGE_ALIGNED_BSS
*(.bss)
*(COMMON)
. = ALIGN(4);
diff --git a/arch/sh/kernel/vmlinux_64.lds.S b/arch/sh/kernel/vmlinux_64.lds.S
index 6966446..cb46577 100644
--- a/arch/sh/kernel/vmlinux_64.lds.S
+++ b/arch/sh/kernel/vmlinux_64.lds.S
@@ -140,7 +140,7 @@ SECTIONS
.bss : C_PHYS(.bss) {
__init_end = .;
__bss_start = .; /* BSS */
- *(.bss.page_aligned)
+ PAGE_ALIGNED_BSS
*(.bss)
*(COMMON)
. = ALIGN(4);
--
1.6.2.1

2009-05-01 00:21:28

by Tim Abbott

Subject: [PATCH 4/7] sh: use new macro for .data.cacheline_aligned section.

I moved the section after NOSAVE_DATA so that the alignment levels of
the sections placed in the .data output section are strictly
decreasing.

Signed-off-by: Tim Abbott <[email protected]>
Cc: Paul Mundt <[email protected]>
Cc: [email protected]
---
arch/sh/kernel/vmlinux_32.lds.S | 4 +---
arch/sh/kernel/vmlinux_64.lds.S | 4 +---
2 files changed, 2 insertions(+), 6 deletions(-)

diff --git a/arch/sh/kernel/vmlinux_32.lds.S b/arch/sh/kernel/vmlinux_32.lds.S
index 5f5e190..eabb0bb 100644
--- a/arch/sh/kernel/vmlinux_32.lds.S
+++ b/arch/sh/kernel/vmlinux_32.lds.S
@@ -64,13 +64,11 @@ SECTIONS
*(.data.init_task)

. = ALIGN(L1_CACHE_BYTES);
- *(.data.cacheline_aligned)
-
- . = ALIGN(L1_CACHE_BYTES);
*(.data.read_mostly)

PAGE_ALIGNED_DATA
NOSAVE_DATA
+ CACHELINE_ALIGNED_DATA(L1_CACHE_BYTES)
DATA_DATA
CONSTRUCTORS
}
diff --git a/arch/sh/kernel/vmlinux_64.lds.S b/arch/sh/kernel/vmlinux_64.lds.S
index 9cccb3d..e0e2e1b 100644
--- a/arch/sh/kernel/vmlinux_64.lds.S
+++ b/arch/sh/kernel/vmlinux_64.lds.S
@@ -73,13 +73,11 @@ SECTIONS
*(.data.init_task)

. = ALIGN(L1_CACHE_BYTES);
- *(.data.cacheline_aligned)
-
- . = ALIGN(L1_CACHE_BYTES);
*(.data.read_mostly)

PAGE_ALIGNED_DATA
NOSAVE_DATA
+ CACHELINE_ALIGNED_DATA(L1_CACHE_BYTES)
DATA_DATA
CONSTRUCTORS
}
--
1.6.2.1

2009-05-01 00:20:55

by Tim Abbott

Subject: [PATCH 3/7] sh: use NOSAVE_DATA macro for .data.nosave section.

Signed-off-by: Tim Abbott <[email protected]>
Cc: Paul Mundt <[email protected]>
Cc: [email protected]
---
arch/sh/kernel/vmlinux_32.lds.S | 7 +------
arch/sh/kernel/vmlinux_64.lds.S | 7 +------
2 files changed, 2 insertions(+), 12 deletions(-)

diff --git a/arch/sh/kernel/vmlinux_32.lds.S b/arch/sh/kernel/vmlinux_32.lds.S
index 325af0b..5f5e190 100644
--- a/arch/sh/kernel/vmlinux_32.lds.S
+++ b/arch/sh/kernel/vmlinux_32.lds.S
@@ -70,12 +70,7 @@ SECTIONS
*(.data.read_mostly)

PAGE_ALIGNED_DATA
-
- __nosave_begin = .;
- *(.data.nosave)
- . = ALIGN(PAGE_SIZE);
- __nosave_end = .;
-
+ NOSAVE_DATA
DATA_DATA
CONSTRUCTORS
}
diff --git a/arch/sh/kernel/vmlinux_64.lds.S b/arch/sh/kernel/vmlinux_64.lds.S
index b222700..9cccb3d 100644
--- a/arch/sh/kernel/vmlinux_64.lds.S
+++ b/arch/sh/kernel/vmlinux_64.lds.S
@@ -79,12 +79,7 @@ SECTIONS
*(.data.read_mostly)

PAGE_ALIGNED_DATA
-
- __nosave_begin = .;
- *(.data.nosave)
- . = ALIGN(PAGE_SIZE);
- __nosave_end = .;
-
+ NOSAVE_DATA
DATA_DATA
CONSTRUCTORS
}
--
1.6.2.1

2009-05-01 00:21:51

by Tim Abbott

Subject: [PATCH 7/7] sh: convert to new generic read_mostly support.

Signed-off-by: Tim Abbott <[email protected]>
Cc: Paul Mundt <[email protected]>
Cc: [email protected]
---
arch/sh/Kconfig | 3 +++
arch/sh/include/asm/cache.h | 2 --
2 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/arch/sh/Kconfig b/arch/sh/Kconfig
index e7390dd..efafd78 100644
--- a/arch/sh/Kconfig
+++ b/arch/sh/Kconfig
@@ -137,6 +137,9 @@ config ARCH_HAS_DEFAULT_IDLE
config IO_TRAPPED
bool

+config HAVE_READ_MOSTLY_DATA
+ def_bool y
+
source "init/Kconfig"

source "kernel/Kconfig.freezer"
diff --git a/arch/sh/include/asm/cache.h b/arch/sh/include/asm/cache.h
index 02df18e..0d44279 100644
--- a/arch/sh/include/asm/cache.h
+++ b/arch/sh/include/asm/cache.h
@@ -14,8 +14,6 @@

#define L1_CACHE_BYTES (1 << L1_CACHE_SHIFT)

-#define __read_mostly __attribute__((__section__(".data.read_mostly")))
-
#ifndef __ASSEMBLY__
struct cache_info {
unsigned int ways; /* Number of cache ways */
--
1.6.2.1

2009-05-01 00:24:34

by Tim Abbott

Subject: [PATCH 6/7] sh: use new macro for .data.read_mostly section.

Signed-off-by: Tim Abbott <[email protected]>
Cc: Paul Mundt <[email protected]>
Cc: [email protected]
---
arch/sh/kernel/vmlinux_32.lds.S | 5 +----
arch/sh/kernel/vmlinux_64.lds.S | 5 +----
2 files changed, 2 insertions(+), 8 deletions(-)

diff --git a/arch/sh/kernel/vmlinux_32.lds.S b/arch/sh/kernel/vmlinux_32.lds.S
index 7a0f3c4..de80e50 100644
--- a/arch/sh/kernel/vmlinux_32.lds.S
+++ b/arch/sh/kernel/vmlinux_32.lds.S
@@ -61,13 +61,10 @@ SECTIONS

.data : { /* Data */
INIT_TASK_DATA(THREAD_SIZE)
-
- . = ALIGN(L1_CACHE_BYTES);
- *(.data.read_mostly)
-
PAGE_ALIGNED_DATA
NOSAVE_DATA
CACHELINE_ALIGNED_DATA(L1_CACHE_BYTES)
+ READ_MOSTLY_DATA(L1_CACHE_BYTES)
DATA_DATA
CONSTRUCTORS
}
diff --git a/arch/sh/kernel/vmlinux_64.lds.S b/arch/sh/kernel/vmlinux_64.lds.S
index 5cd3e78..72e856b 100644
--- a/arch/sh/kernel/vmlinux_64.lds.S
+++ b/arch/sh/kernel/vmlinux_64.lds.S
@@ -70,13 +70,10 @@ SECTIONS

.data : C_PHYS(.data) { /* Data */
INIT_TASK_DATA(THREAD_SIZE)
-
- . = ALIGN(L1_CACHE_BYTES);
- *(.data.read_mostly)
-
PAGE_ALIGNED_DATA
NOSAVE_DATA
CACHELINE_ALIGNED_DATA(L1_CACHE_BYTES)
+ READ_MOSTLY_DATA(L1_CACHE_BYTES)
DATA_DATA
CONSTRUCTORS
}
--
1.6.2.1

2009-05-01 00:24:56

by Tim Abbott

Subject: [PATCH 5/7] sh: use new macros for .data.init_task.

Signed-off-by: Tim Abbott <[email protected]>
Cc: Paul Mundt <[email protected]>
Cc: [email protected]
---
arch/sh/kernel/init_task.c | 3 +--
arch/sh/kernel/vmlinux_32.lds.S | 3 +--
arch/sh/kernel/vmlinux_64.lds.S | 3 +--
3 files changed, 3 insertions(+), 6 deletions(-)

diff --git a/arch/sh/kernel/init_task.c b/arch/sh/kernel/init_task.c
index 80c35ff..9522d9a 100644
--- a/arch/sh/kernel/init_task.c
+++ b/arch/sh/kernel/init_task.c
@@ -20,8 +20,7 @@ EXPORT_SYMBOL(init_mm);
* way process stacks are handled. This is done by having a special
* "init_task" linker map entry..
*/
-union thread_union init_thread_union
- __attribute__((__section__(".data.init_task"))) =
+union thread_union init_thread_union __init_task_data =
{ INIT_THREAD_INFO(init_task) };

/*
diff --git a/arch/sh/kernel/vmlinux_32.lds.S b/arch/sh/kernel/vmlinux_32.lds.S
index eabb0bb..7a0f3c4 100644
--- a/arch/sh/kernel/vmlinux_32.lds.S
+++ b/arch/sh/kernel/vmlinux_32.lds.S
@@ -59,9 +59,8 @@ SECTIONS
.uncached.data : { *(.uncached.data) }
__uncached_end = .;

- . = ALIGN(THREAD_SIZE);
.data : { /* Data */
- *(.data.init_task)
+ INIT_TASK_DATA(THREAD_SIZE)

. = ALIGN(L1_CACHE_BYTES);
*(.data.read_mostly)
diff --git a/arch/sh/kernel/vmlinux_64.lds.S b/arch/sh/kernel/vmlinux_64.lds.S
index e0e2e1b..5cd3e78 100644
--- a/arch/sh/kernel/vmlinux_64.lds.S
+++ b/arch/sh/kernel/vmlinux_64.lds.S
@@ -68,9 +68,8 @@ SECTIONS
NOTES
RO_DATA(PAGE_SIZE)

- . = ALIGN(THREAD_SIZE);
.data : C_PHYS(.data) { /* Data */
- *(.data.init_task)
+ INIT_TASK_DATA(THREAD_SIZE)

. = ALIGN(L1_CACHE_BYTES);
*(.data.read_mostly)
--
1.6.2.1