Build system: section garbage collection for vmlinux
Newer gcc and binutils can remove dead code and data
at link time. This is achieved with a combination of the
gcc options -ffunction-sections and -fdata-sections and
the ld option --gc-sections.
Theory of operation:
The option -ffunction-sections instructs gcc to place each function
(including static ones) in its own section, named .text.function_name,
instead of placing all functions in one big .text section.
At link time, ld normally coalesces all such sections back into one
output section, .text. This is achieved by having a *(.text.*) spec
alongside the *(.text) spec in the built-in linker scripts.
If ld is invoked with --gc-sections, it tracks references, starting
from the entry point, and marks all input sections reachable
from there. It then discards all input sections that are not marked.
This does not buy much if you have one big .text section per .o module,
because even a single referenced function pulls in the entire section.
You need -ffunction-sections to split .text into per-function
sections and make --gc-sections much more useful.
-fdata-sections is analogous: it places each global or static variable
into .data.variable_name, .rodata.variable_name or .bss.variable_name.
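The mechanism above can be demonstrated outside the kernel with a minimal
sketch (assumes gcc, GNU ld and nm are installed; the file and function
names are made up for illustration):

```shell
# Two functions, only one referenced. With -ffunction-sections each lands
# in its own .text.<name> input section, and --gc-sections drops the
# unreferenced one at link time.
cat > gcdemo.c <<'EOF'
int used(void)   { return 1; }  /* reachable from main() */
int unused(void) { return 2; }  /* never referenced      */
int main(void)   { return used(); }
EOF
gcc -ffunction-sections -fdata-sections -Wl,--gc-sections -o gcdemo gcdemo.c
# used() survives in the output; unused() and its section are discarded.
nm gcdemo | grep ' T used$'
! nm gcdemo | grep -q ' T unused$'
```

Without -ffunction-sections both functions share one .text section in
gcdemo.o, and the reference to used() would keep unused() alive as well.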
How to use it in kernel:
First, we need to adapt existing code for new section names.
Basically, we need to stop using section names of the form
.text.xxxx
.data.xxxx
.rodata.xxxx
.bss.xxxx
in the kernel - otherwise the section placement done by the kernel's
custom linker scripts produces broken vmlinux and vdso images.
Second, kernel linker scripts need to be adapted by adding KEEP(xxx)
directives around sections which are not directly referenced, but are
nevertheless used (initcalls, altinstructions, etc).
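As a sketch of the KEEP() point (linker-script syntax; the section names
are the ones the kernel used at the time): initcall tables are consumed
at run time by walking between boundary symbols, so nothing references
their sections directly, and without KEEP() --gc-sections would discard
them:

```
        __initcall_start = .;
        KEEP(*(.initcall1.init))   /* walked by do_initcalls(); no     */
        KEEP(*(.initcall2.init))   /* direct references, so KEEP() it  */
        __initcall_end = .;
```

Without the KEEP(), the boundary symbols would still exist but the table
between them would be empty.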
These patches fix the section names and add
CONFIG_DISCARD_UNUSED_SECTIONS. It is not enabled
unconditionally because only the newest binutils have an
ld --gc-sections that is stable enough for kernel use.
IOW: this is an experimental feature for now.
The patches are conservative and mark a lot of things with the
KEEP() directive in the linker scripts, inhibiting GC for them.
With CONFIG_MODULES=y, none of the EXPORT_SYMBOLed functions
are discarded.
In this case size savings typically look like this:
   text    data     bss     dec     hex filename
5159478 1005139  406784 6571401  644589 linux-2.6.23-rc4.org/vmlinux
5131822  996090  401439 6529351  63a147 linux-2.6.23-rc4.gc/vmlinux
In this particular case, 402 objects were discarded, saving 42 kb.
With CONFIG_MODULES not set, the size savings are bigger - around 10%
of the vmlinux size.
The linker is unable to discard more because the current infrastructure
is a bit flawed in this regard: it prevents some unused code
from being detected. In particular:
KEEP(__ex_table) -> .fixup -> get_user and friends
KEEP(.smp_locks) -> lock prefixes
I am working on improving this, thanks to suggestions from lkml readers.
Patches were run-tested on x86_64, and likely do not work on any other arch
(need to add KEEP() to arch/*/kernel/vmlinux.lds.S for each arch).
Signed-off-by: Denys Vlasenko <[email protected]>
--
vda
This patch is needed for --gc-sections to work, regardless
of which final form that support will have.
This patch renames .text.xxx and .data.xxx sections
into .xxx.text and .xxx.data, respectively.
.bss.page_aligned (the only .bss.xxx-like section we have)
is renamed to .bss.k.page_aligned. ".page_aligned.bss"
would not work - gcc assigns such a section attributes
which make it unmergeable with .bss. In fact, binutils ld
had a bug here and, instead of complaining, was producing a
broken vmlinux. The bug is now fixed in binutils. Amazingly
fast reaction from the binutils folks to bug reports! Thanks!
.bss.k.page_aligned is more or less OK, since it cannot collide
with gcc-produced sections thanks to the second dot in the name. However,
should we ever want to write this in a linker script:
.bss : { *(.bss) *(.bss.*) *(.bss.k.page_aligned) }
it would not work. But currently we do not need that.
If the patch does not apply to a newer kernel,
you can regenerate it by running linux-2.6.23-rc4.0.fixname.sh
in a kernel tree and re-diffing against an unmodified one.
Please apply.
Signed-off-by: Denys Vlasenko <[email protected]>
--
vda
This patch fixes the x86_64 vdso image so that it builds with --gc-sections.
It also fixes a comment in arch/i386/kernel/vmlinux.lds.S
and adds comments about .bss to the other linker scripts.
Please apply.
Signed-off-by: Denys Vlasenko <[email protected]>
--
vda
This patch makes modpost able to process object files with more than
64k sections. This is needed for huge kernel builds (allyesconfig, for
example) with --gc-sections.
This patch is basically a modpost fix; it is not specific
to section garbage collection.
Please apply.
Signed-off-by: Denys Vlasenko <[email protected]>
--
vda
This is the core patch of the series.
It adds CONFIG_DISCARD_UNUSED_SECTIONS,
adds KEEP() directives to the linker scripts, and
adds a custom module linker script, which is needed
to avoid modules with many small sections.
Modules got a bit smaller too, as a result.
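A sketch of what such a module linker script does (the file name and exact
section list here are assumptions for illustration, not the patch itself):
at the partial (ld -r) link of each module, the many per-function and
per-variable sections are folded back together, so .ko files do not end up
carrying thousands of tiny sections:

```
/* hypothetical module-common.lds, used with ld -r */
SECTIONS {
        .text   : { *(.text .text.*) }
        .rodata : { *(.rodata .rodata.*) }
        .data   : { *(.data .data.*) }
        .bss    : { *(.bss .bss.*) }
}
```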
This patch is slightly riskier than the first three and
probably needs to go into -mm first.
It should be safe with CONFIG_DISCARD_UNUSED_SECTIONS off, though.
Signed-off-by: Denys Vlasenko <[email protected]>
--
vda
Denys Vlasenko <[email protected]> writes:
>
>    text    data     bss     dec     hex filename
> 5159478 1005139  406784 6571401  644589 linux-2.6.23-rc4.org/vmlinux
> 5131822  996090  401439 6529351  63a147 linux-2.6.23-rc4.gc/vmlinux
>
> In this particular case, 402 objects were discarded, saving 42 kb.
I wonder how many of those are 100% unused on all configurations?
That could be a useful janitor task to clean up.
-Andi
On Tue, 2007-09-11 at 21:07 +0100, Denys Vlasenko wrote:
> This patch is needed for --gc-sections to work, regardless
> of which final form that support will have.
>
> This patch renames .text.xxx and .data.xxx sections
> into .xxx.text and .xxx.data, respectively.
I think you'll have better luck with this if you focus on a single
architecture (i386 would be best) ..
Daniel
On Tuesday 11 September 2007 22:47, Daniel Walker wrote:
> On Tue, 2007-09-11 at 21:07 +0100, Denys Vlasenko wrote:
> > This patch is needed for --gc-sections to work, regardless
> > of which final form that support will have.
> >
> > This patch renames .text.xxx and .data.xxx sections
> > into .xxx.text and .xxx.data, respectively.
>
> I think you'll have better luck with this if you focus on a single
> architecture (i386 would be best) ..
I did exactly that. I focused on x86_64.
Of course, the section name fixes cannot be done per-arch, as they
are scattered across the entire tree.
Apart from that, it was x86_64 only.
By now I have patches for i386 in hand too.
--
vda
On Tuesday 11 September 2007 22:03, Andi Kleen wrote:
> Denys Vlasenko <[email protected]> writes:
> >
> >    text    data     bss     dec     hex filename
> > 5159478 1005139  406784 6571401  644589 linux-2.6.23-rc4.org/vmlinux
> > 5131822  996090  401439 6529351  63a147 linux-2.6.23-rc4.gc/vmlinux
> >
> > In this particular case, 402 objects were discarded, saving 42 kb.
>
> I wonder how many of those are 100% unused on all configurations?
> That could be an useful janitor task to clean up
With CONFIG_DISCARD_UNUSED_SECTIONS=y, ld will helpfully flood you
with the list of discarded stuff; I pass --print-gc-sections to it.
--
vda
On Wed, 2007-09-12 at 21:18 +0100, Denys Vlasenko wrote:
> On Tuesday 11 September 2007 22:47, Daniel Walker wrote:
> > On Tue, 2007-09-11 at 21:07 +0100, Denys Vlasenko wrote:
> > > This patch is needed for --gc-sections to work, regardless
> > > of which final form that support will have.
> > >
> > > This patch renames .text.xxx and .data.xxx sections
> > > into .xxx.text and .xxx.data, respectively.
> >
> > I think you'll have better luck with this if you focus on a single
> > architecture (i386 would be best) ..
>
> I did exactly that. I focused on x86_64.
>
> Of course, section name fixes cannot be done per-arch, as they
> are scattered across entire tree.
This is what I'm talking about. Why can't you do this per
architecture?
Daniel
On 9/12/07, Denys Vlasenko <[email protected]> wrote:
> Patches were run-tested on x86_64, and likely do not work on any other arch
> (need to add KEEP() to arch/*/kernel/vmlinux.lds.S for each arch).
This is good stuff. I have been using a ported variant of this
optimization for ARM on quite an old 2.6 kernel for a while now. I
derived that port from:
http://lkml.org/lkml/2006/6/4/169
With some tweaks it worked for me. Could you also have a look at the
link above and see whether it is a superset of what you are trying to
achieve?
--
Abhishek Sagar
Hi Abhishek.
On Thu, Sep 13, 2007 at 11:56:14PM +0530, Abhishek Sagar wrote:
> On 9/12/07, Denys Vlasenko <[email protected]> wrote:
> > Patches were run-tested on x86_64, and likely do not work on any other arch
> > (need to add KEEP() to arch/*/kernel/vmlinux.lds.S for each arch).
>
> This is good stuff. I had been using a ported variant of this
> optimization for ARM on quite an older 2.6 kernel for a while now. I
> derived that port from:
> http://lkml.org/lkml/2006/6/4/169
>
> With some tweaks it worked for me.
Could you post your tweaked version - against an older kernel is OK.
Sam
On 9/14/07, Sam Ravnborg <[email protected]> wrote:
> > With some tweaks it worked for me.
> Could you post your tweaked version - against an older kernel is OK.
The inlined patch should apply cleanly on top of the patch posted at
the link I mentioned before. The *.S files are the ones I chose to
bring under the purview of -ffunction-sections. My observation
remains that if fine-grained function/data/exported-symbol level
garbage collection can be incorporated into the build environment,
it will be something really useful.
--
Abhishek Sagar
---
diff -upNr linux_orig-2.6.12/arch/arm/kernel/armksyms.c
linux-2.6.12/arch/arm/kernel/armksyms.c
--- linux_orig-2.6.12/arch/arm/kernel/armksyms.c 2005-06-18
01:18:29.000000000 +0530
+++ linux-2.6.12/arch/arm/kernel/armksyms.c 2007-09-14 09:00:03.000000000 +0530
@@ -44,10 +44,17 @@ extern void fp_enter(void);
* This has a special calling convention; it doesn't
* modify any of the usual registers, except for LR.
*/
+#ifndef CONFIG_GCSECTIONS
#define EXPORT_SYMBOL_ALIAS(sym,orig) \
const struct kernel_symbol __ksymtab_##sym \
__attribute__((section("__ksymtab"))) = \
{ (unsigned long)&orig, #sym };
+#else
+#define EXPORT_SYMBOL_ALIAS(sym,orig) \
+ const struct kernel_symbol __ksymtab_##sym \
+ __attribute__((section("___ksymtab_" #sym))) = \
+ { (unsigned long)&orig, #sym };
+#endif /* CONFIG_GCSECTIONS */
/*
* floating point math emulator support.
diff -upNr linux_orig-2.6.12/arch/arm/kernel/iwmmxt.S
linux-2.6.12/arch/arm/kernel/iwmmxt.S
--- linux_orig-2.6.12/arch/arm/kernel/iwmmxt.S 2005-06-18
01:18:29.000000000 +0530
+++ linux-2.6.12/arch/arm/kernel/iwmmxt.S 2007-09-14 09:21:01.000000000 +0530
@@ -55,7 +55,11 @@
*
* called from prefetch exception handler with interrupts disabled
*/
-
+#ifdef CONFIG_GCSECTIONS
+ .section ".text.iwmmxt_task_enable"
+#else
+ .text
+#endif
ENTRY(iwmmxt_task_enable)
mrc p15, 0, r2, c15, c1, 0
diff -upNr linux_orig-2.6.12/arch/arm/kernel/vmlinux.lds.S
linux-2.6.12/arch/arm/kernel/vmlinux.lds.S
--- linux_orig-2.6.12/arch/arm/kernel/vmlinux.lds.S 2005-06-18
01:18:29.000000000 +0530
+++ linux-2.6.12/arch/arm/kernel/vmlinux.lds.S 2007-09-14
08:58:30.000000000 +0530
@@ -20,50 +20,50 @@ SECTIONS
.init : { /* Init code and data */
_stext = .;
_sinittext = .;
- *(.init.text)
+ KEEP(*(.init.text))
_einittext = .;
__proc_info_begin = .;
- *(.proc.info)
+ KEEP(*(.proc.info))
__proc_info_end = .;
__arch_info_begin = .;
- *(.arch.info)
+ KEEP(*(.arch.info))
__arch_info_end = .;
__tagtable_begin = .;
- *(.taglist)
+ KEEP(*(.taglist))
__tagtable_end = .;
. = ALIGN(16);
__setup_start = .;
- *(.init.setup)
+ KEEP(*(.init.setup))
__setup_end = .;
__early_begin = .;
- *(__early_param)
+ KEEP(*(__early_param))
__early_end = .;
__initcall_start = .;
- *(.initcall1.init)
- *(.initcall2.init)
- *(.initcall3.init)
- *(.initcall4.init)
- *(.initcall5.init)
- *(.initcall6.init)
- *(.initcall7.init)
+ KEEP(*(.initcall1.init))
+ KEEP(*(.initcall2.init))
+ KEEP(*(.initcall3.init))
+ KEEP(*(.initcall4.init))
+ KEEP(*(.initcall5.init))
+ KEEP(*(.initcall6.init))
+ KEEP(*(.initcall7.init))
__initcall_end = .;
__con_initcall_start = .;
- *(.con_initcall.init)
+ KEEP(*(.con_initcall.init))
__con_initcall_end = .;
__security_initcall_start = .;
- *(.security_initcall.init)
+ KEEP(*(.security_initcall.init))
__security_initcall_end = .;
. = ALIGN(32);
__initramfs_start = .;
- usr/built-in.o(.init.ramfs)
+ KEEP(usr/built-in.o(.init.ramfs))
__initramfs_end = .;
. = ALIGN(64);
__per_cpu_start = .;
- *(.data.percpu)
+ KEEP(*(.data.percpu))
__per_cpu_end = .;
#ifndef CONFIG_XIP_KERNEL
__init_begin = _stext;
- *(.init.data)
+ KEEP(*(.init.data))
. = ALIGN(4096);
__init_end = .;
#endif
@@ -78,6 +78,8 @@ SECTIONS
.text : { /* Real text segment */
_text = .; /* Text and read-only data */
*(.text)
+ *(.text.*)
+ #include "vmlinux.ldskeep.h"
SCHED_TEXT
LOCK_TEXT
*(.fixup)
@@ -92,12 +94,42 @@ SECTIONS
. = ALIGN(16);
__ex_table : { /* Exception table */
__start___ex_table = .;
- *(__ex_table)
+ KEEP(*(__ex_table))
__stop___ex_table = .;
}
RODATA
+#ifdef CONFIG_GCSECTIONS
+ __ksymtab : AT(ADDR(__ksymtab) - LOAD_OFFSET) {
+ VMLINUX_SYMBOL(__start___ksymtab) = .;
+ #include "keep.ksymtab.txt"
+ VMLINUX_SYMBOL(__stop___ksymtab) = .;
+ }
+
+ __ksymtab_gpl : AT(ADDR(__ksymtab_gpl) - LOAD_OFFSET) {
+ VMLINUX_SYMBOL(__start___ksymtab_gpl) = .;
+ #include "keep.ksymtabgpl.txt"
+ VMLINUX_SYMBOL(__stop___ksymtab_gpl) = .;
+ }
+
+ __kcrctab : AT(ADDR(__kcrctab) - LOAD_OFFSET) {
+ VMLINUX_SYMBOL(__start___kcrctab) = .;
+ KEEP(*(__kcrctab))
+ VMLINUX_SYMBOL(__stop___kcrctab) = .;
+ }
+
+ __kcrctab_gpl : AT(ADDR(__kcrctab_gpl) - LOAD_OFFSET) {
+ VMLINUX_SYMBOL(__start___kcrctab_gpl) = .;
+ KEEP(*(__kcrctab_gpl))
+ VMLINUX_SYMBOL(__stop___kcrctab_gpl) = .;
+ }
+
+ __ksymtab_strings : AT(ADDR(__ksymtab_strings) - LOAD_OFFSET) {
+ #include "keep.ksymstrings.txt"
+ }
+#endif /* CONFIG_GCSECTIONS */
+
_etext = .; /* End of text and rodata section */
#ifdef CONFIG_XIP_KERNEL
@@ -120,14 +152,14 @@ SECTIONS
#ifdef CONFIG_XIP_KERNEL
. = ALIGN(4096);
__init_begin = .;
- *(.init.data)
+ KEEP(*(.init.data))
. = ALIGN(4096);
__init_end = .;
#endif
. = ALIGN(4096);
__nosave_begin = .;
- *(.data.nosave)
+ KEEP(*(.data.nosave))
. = ALIGN(4096);
__nosave_end = .;
@@ -135,12 +167,13 @@ SECTIONS
* then the cacheline aligned data
*/
. = ALIGN(32);
- *(.data.cacheline_aligned)
+ KEEP(*(.data.cacheline_aligned))
/*
* and the usual data section
*/
*(.data)
+ *(.data.*)
CONSTRUCTORS
_edata = .;
@@ -149,6 +182,7 @@ SECTIONS
.bss : {
__bss_start = .; /* BSS */
*(.bss)
+ *(.bss.*)
*(COMMON)
_end = .;
}
diff -upNr linux_orig-2.6.12/arch/arm/lib/copy_page.S
linux-2.6.12/arch/arm/lib/copy_page.S
--- linux_orig-2.6.12/arch/arm/lib/copy_page.S 2005-06-18
01:18:29.000000000 +0530
+++ linux-2.6.12/arch/arm/lib/copy_page.S 2007-09-14 09:18:29.000000000 +0530
@@ -15,7 +15,11 @@
#define COPY_COUNT (PAGE_SZ/64 PLD( -1 ))
- .text
+#ifdef CONFIG_GCSECTIONS
+ .section ".text.copy_page"
+#else
+ .text
+#endif
.align 5
/*
* StrongARM optimised copy_page routine
diff -upNr linux_orig-2.6.12/arch/arm/lib/csumipv6.S
linux-2.6.12/arch/arm/lib/csumipv6.S
--- linux_orig-2.6.12/arch/arm/lib/csumipv6.S 2005-06-18
01:18:29.000000000 +0530
+++ linux-2.6.12/arch/arm/lib/csumipv6.S 2007-09-14 09:11:20.000000000 +0530
@@ -10,7 +10,11 @@
#include <linux/linkage.h>
#include <asm/assembler.h>
- .text
+#ifdef CONFIG_GCSECTIONS
+ .section ".text.__csum_ipv6_magic"
+#else
+ .text
+#endif
ENTRY(__csum_ipv6_magic)
str lr, [sp, #-4]!
diff -upNr linux_orig-2.6.12/arch/arm/lib/csumpartialcopyuser.S
linux-2.6.12/arch/arm/lib/csumpartialcopyuser.S
--- linux_orig-2.6.12/arch/arm/lib/csumpartialcopyuser.S 2005-06-18
01:18:29.000000000 +0530
+++ linux-2.6.12/arch/arm/lib/csumpartialcopyuser.S 2007-09-14
09:17:34.000000000 +0530
@@ -15,7 +15,11 @@
#include <asm/errno.h>
#include <asm/constants.h>
- .text
+#ifdef CONFIG_GCSECTIONS
+ .section ".text.csum_partial_copy_from_user"
+#else
+ .text
+#endif
.macro save_regs
stmfd sp!, {r1 - r2, r4 - r8, fp, ip, lr, pc}
diff -upNr linux_orig-2.6.12/arch/arm/lib/csumpartial.S
linux-2.6.12/arch/arm/lib/csumpartial.S
--- linux_orig-2.6.12/arch/arm/lib/csumpartial.S 2005-06-18
01:18:29.000000000 +0530
+++ linux-2.6.12/arch/arm/lib/csumpartial.S 2007-09-14 09:10:34.000000000 +0530
@@ -10,7 +10,11 @@
#include <linux/linkage.h>
#include <asm/assembler.h>
- .text
+#ifdef CONFIG_GCSECTIONS
+ .section ".text.csum_partial"
+#else
+ .text
+#endif
/*
* Function: __u32 csum_partial(const char *src, int len, __u32 sum)
diff -upNr linux_orig-2.6.12/arch/arm/lib/memchr.S
linux-2.6.12/arch/arm/lib/memchr.S
--- linux_orig-2.6.12/arch/arm/lib/memchr.S 2005-06-18 01:18:29.000000000 +0530
+++ linux-2.6.12/arch/arm/lib/memchr.S 2007-09-14 09:11:56.000000000 +0530
@@ -12,7 +12,11 @@
#include <linux/linkage.h>
#include <asm/assembler.h>
- .text
+#ifdef CONFIG_GCSECTIONS
+ .section ".text.memchr"
+#else
+ .text
+#endif
.align 5
ENTRY(memchr)
1: subs r2, r2, #1
diff -upNr linux_orig-2.6.12/arch/arm/lib/memset.S
linux-2.6.12/arch/arm/lib/memset.S
--- linux_orig-2.6.12/arch/arm/lib/memset.S 2005-06-18 01:18:29.000000000 +0530
+++ linux-2.6.12/arch/arm/lib/memset.S 2007-09-14 09:10:16.000000000 +0530
@@ -12,7 +12,11 @@
#include <linux/linkage.h>
#include <asm/assembler.h>
- .text
+#ifdef CONFIG_GCSECTIONS
+ .section ".text.memset"
+#else
+ .text
+#endif
.align 5
.word 0
diff -upNr linux_orig-2.6.12/arch/arm/lib/memzero.S
linux-2.6.12/arch/arm/lib/memzero.S
--- linux_orig-2.6.12/arch/arm/lib/memzero.S 2005-06-18 01:18:29.000000000 +0530
+++ linux-2.6.12/arch/arm/lib/memzero.S 2007-09-14 09:19:58.000000000 +0530
@@ -10,7 +10,11 @@
#include <linux/linkage.h>
#include <asm/assembler.h>
- .text
+#ifdef CONFIG_GCSECTIONS
+ .section ".text.__memzero"
+#else
+ .text
+#endif
.align 5
.word 0
/*
diff -upNr linux_orig-2.6.12/arch/arm/lib/strchr.S
linux-2.6.12/arch/arm/lib/strchr.S
--- linux_orig-2.6.12/arch/arm/lib/strchr.S 2005-06-18 01:18:29.000000000 +0530
+++ linux-2.6.12/arch/arm/lib/strchr.S 2007-09-14 09:09:56.000000000 +0530
@@ -12,7 +12,11 @@
#include <linux/linkage.h>
#include <asm/assembler.h>
- .text
+#ifdef CONFIG_GCSECTIONS
+ .section ".text.strchr"
+#else
+ .text
+#endif
.align 5
ENTRY(strchr)
and r1, r1, #0xff
diff -upNr linux_orig-2.6.12/arch/arm/lib/strncpy_from_user.S
linux-2.6.12/arch/arm/lib/strncpy_from_user.S
--- linux_orig-2.6.12/arch/arm/lib/strncpy_from_user.S 2005-06-18
01:18:29.000000000 +0530
+++ linux-2.6.12/arch/arm/lib/strncpy_from_user.S 2007-09-14
09:19:20.000000000 +0530
@@ -11,7 +11,11 @@
#include <asm/assembler.h>
#include <asm/errno.h>
- .text
+#ifdef CONFIG_GCSECTIONS
+ .section ".text.__arch_strncpy_from_user"
+#else
+ .text
+#endif
.align 5
/*
diff -upNr linux_orig-2.6.12/arch/arm/lib/strnlen_user.S
linux-2.6.12/arch/arm/lib/strnlen_user.S
--- linux_orig-2.6.12/arch/arm/lib/strnlen_user.S 2005-06-18
01:18:29.000000000 +0530
+++ linux-2.6.12/arch/arm/lib/strnlen_user.S 2007-09-14 09:10:42.000000000 +0530
@@ -11,7 +11,11 @@
#include <asm/assembler.h>
#include <asm/errno.h>
- .text
+#ifdef CONFIG_GCSECTIONS
+ .section ".text.__arch_strnlen_user"
+#else
+ .text
+#endif
.align 5
/* Prototype: unsigned long __arch_strnlen_user(const char *str, long n)
diff -upNr linux_orig-2.6.12/arch/arm/lib/strrchr.S
linux-2.6.12/arch/arm/lib/strrchr.S
--- linux_orig-2.6.12/arch/arm/lib/strrchr.S 2005-06-18 01:18:29.000000000 +0530
+++ linux-2.6.12/arch/arm/lib/strrchr.S 2007-09-14 09:10:06.000000000 +0530
@@ -12,7 +12,11 @@
#include <linux/linkage.h>
#include <asm/assembler.h>
- .text
+#ifdef CONFIG_GCSECTIONS
+ .section ".text.strrchr"
+#else
+ .text
+#endif /* CONFIG_GCSECTIONS */
.align 5
ENTRY(strrchr)
mov r3, #0