2009-09-22 13:38:49

by Alan Jenkins

Subject: module: Speed up symbol resolution during module loading

[Apologies for duplicate messages. It's debatable whether I should be trusted
with commands that have names like "guilt-patchbomb".]

The following series applies against v2.6.31. It sorts the tables of builtin
symbols, so the module loader can resolve them using a binary search.

The kbuild changes to achieve this are less scary than I expected. I'm
optimistic that they can be accepted without radical alteration :-).

Quoting from the last patch in this series:

"On my EeePC 701, coldplug is mainly cpu bound and takes 1.5 seconds
during boot. perf showed this change eliminated 20% of cpu cycles during
coldplug, saving 0.3 seconds of real time.

These savings may not be representative since my config is not very well
tuned. The numbers above represent the loading of 35 modules,
referencing a total of 441 symbols. Nevertheless, it shows why I think
this is worth doing."

Alan


2009-09-22 13:39:46

by Alan Jenkins

Subject: [PATCH 1/4] module: extract __EXPORT_SYMBOL from module.h into mod_export.h

The new header mod_export.h allows __EXPORT_SYMBOL to be used without
pulling in any function or variable declarations. It will be used by
the build system to help sort the list of symbols exported by the
kernel.

Signed-off-by: Alan Jenkins <[email protected]>
---
include/linux/mod_export.h | 74 ++++++++++++++++++++++++++++++++++++++++++++
include/linux/module.h | 63 +------------------------------------
2 files changed, 75 insertions(+), 62 deletions(-)
create mode 100644 include/linux/mod_export.h

diff --git a/include/linux/mod_export.h b/include/linux/mod_export.h
new file mode 100644
index 0000000..3c51b9c
--- /dev/null
+++ b/include/linux/mod_export.h
@@ -0,0 +1,74 @@
+#ifndef LINUX_MOD_EXPORT_H
+#define LINUX_MOD_EXPORT_H
+
+#include <linux/compiler.h>
+#include <asm/module.h>
+
+/* some toolchains uses a `_' prefix for all user symbols */
+#ifndef MODULE_SYMBOL_PREFIX
+#define MODULE_SYMBOL_PREFIX ""
+#endif
+
+struct kernel_symbol
+{
+ unsigned long value;
+ const char *name;
+};
+
+#ifdef CONFIG_MODULES
+#ifndef __GENKSYMS__
+#ifdef CONFIG_MODVERSIONS
+/* Mark the CRC weak since genksyms apparently decides not to
+ * generate a checksums for some symbols */
+#define __CRC_SYMBOL(sym, sec) \
+ extern void *__crc_##sym __attribute__((weak)); \
+ static const unsigned long __kcrctab_##sym \
+ __used \
+ __attribute__((section("__kcrctab" sec), unused)) \
+ = (unsigned long) &__crc_##sym;
+#else
+#define __CRC_SYMBOL(sym, sec)
+#endif
+
+/* For every exported symbol, place a struct in the __ksymtab section */
+#define __EXPORT_SYMBOL(sym, sec) \
+ extern typeof(sym) sym; \
+ __CRC_SYMBOL(sym, sec) \
+ static const char __kstrtab_##sym[] \
+ __attribute__((section("__ksymtab_strings"), aligned(1))) \
+ = MODULE_SYMBOL_PREFIX #sym; \
+ static const struct kernel_symbol __ksymtab_##sym \
+ __used \
+ __attribute__((section("__ksymtab" sec), unused)) \
+ = { (unsigned long)&sym, __kstrtab_##sym }
+
+#define EXPORT_SYMBOL(sym) \
+ __EXPORT_SYMBOL(sym, "")
+
+#define EXPORT_SYMBOL_GPL(sym) \
+ __EXPORT_SYMBOL(sym, "_gpl")
+
+#define EXPORT_SYMBOL_GPL_FUTURE(sym) \
+ __EXPORT_SYMBOL(sym, "_gpl_future")
+
+#ifdef CONFIG_UNUSED_SYMBOLS
+#define EXPORT_UNUSED_SYMBOL(sym) __EXPORT_SYMBOL(sym, "_unused")
+#define EXPORT_UNUSED_SYMBOL_GPL(sym) __EXPORT_SYMBOL(sym, "_unused_gpl")
+#else
+#define EXPORT_UNUSED_SYMBOL(sym)
+#define EXPORT_UNUSED_SYMBOL_GPL(sym)
+#endif
+
+#endif /* __GENKSYMS__ */
+
+#else /* !CONFIG_MODULES */
+
+#define EXPORT_SYMBOL(sym)
+#define EXPORT_SYMBOL_GPL(sym)
+#define EXPORT_SYMBOL_GPL_FUTURE(sym)
+#define EXPORT_UNUSED_SYMBOL(sym)
+#define EXPORT_UNUSED_SYMBOL_GPL(sym)
+
+#endif /* CONFIG_MODULES */
+
+#endif /* LINUX_MOD_EXPORT_H */
diff --git a/include/linux/module.h b/include/linux/module.h
index 098bdb7..65b62e9 100644
--- a/include/linux/module.h
+++ b/include/linux/module.h
@@ -15,6 +15,7 @@
#include <linux/stringify.h>
#include <linux/kobject.h>
#include <linux/moduleparam.h>
+#include <linux/mod_export.h>
#include <linux/marker.h>
#include <linux/tracepoint.h>
#include <asm/local.h>
@@ -24,19 +25,8 @@
/* Not Yet Implemented */
#define MODULE_SUPPORTED_DEVICE(name)

-/* some toolchains uses a `_' prefix for all user symbols */
-#ifndef MODULE_SYMBOL_PREFIX
-#define MODULE_SYMBOL_PREFIX ""
-#endif
-
#define MODULE_NAME_LEN MAX_PARAM_PREFIX_LEN

-struct kernel_symbol
-{
- unsigned long value;
- const char *name;
-};
-
struct modversion_info
{
unsigned long crc;
@@ -174,52 +164,6 @@ void *__symbol_get(const char *symbol);
void *__symbol_get_gpl(const char *symbol);
#define symbol_get(x) ((typeof(&x))(__symbol_get(MODULE_SYMBOL_PREFIX #x)))

-#ifndef __GENKSYMS__
-#ifdef CONFIG_MODVERSIONS
-/* Mark the CRC weak since genksyms apparently decides not to
- * generate a checksums for some symbols */
-#define __CRC_SYMBOL(sym, sec) \
- extern void *__crc_##sym __attribute__((weak)); \
- static const unsigned long __kcrctab_##sym \
- __used \
- __attribute__((section("__kcrctab" sec), unused)) \
- = (unsigned long) &__crc_##sym;
-#else
-#define __CRC_SYMBOL(sym, sec)
-#endif
-
-/* For every exported symbol, place a struct in the __ksymtab section */
-#define __EXPORT_SYMBOL(sym, sec) \
- extern typeof(sym) sym; \
- __CRC_SYMBOL(sym, sec) \
- static const char __kstrtab_##sym[] \
- __attribute__((section("__ksymtab_strings"), aligned(1))) \
- = MODULE_SYMBOL_PREFIX #sym; \
- static const struct kernel_symbol __ksymtab_##sym \
- __used \
- __attribute__((section("__ksymtab" sec), unused)) \
- = { (unsigned long)&sym, __kstrtab_##sym }
-
-#define EXPORT_SYMBOL(sym) \
- __EXPORT_SYMBOL(sym, "")
-
-#define EXPORT_SYMBOL_GPL(sym) \
- __EXPORT_SYMBOL(sym, "_gpl")
-
-#define EXPORT_SYMBOL_GPL_FUTURE(sym) \
- __EXPORT_SYMBOL(sym, "_gpl_future")
-
-
-#ifdef CONFIG_UNUSED_SYMBOLS
-#define EXPORT_UNUSED_SYMBOL(sym) __EXPORT_SYMBOL(sym, "_unused")
-#define EXPORT_UNUSED_SYMBOL_GPL(sym) __EXPORT_SYMBOL(sym, "_unused_gpl")
-#else
-#define EXPORT_UNUSED_SYMBOL(sym)
-#define EXPORT_UNUSED_SYMBOL_GPL(sym)
-#endif
-
-#endif
-
enum module_state
{
MODULE_STATE_LIVE,
@@ -533,11 +477,6 @@ extern void module_update_tracepoints(void);
extern int module_get_iter_tracepoints(struct tracepoint_iter *iter);

#else /* !CONFIG_MODULES... */
-#define EXPORT_SYMBOL(sym)
-#define EXPORT_SYMBOL_GPL(sym)
-#define EXPORT_SYMBOL_GPL_FUTURE(sym)
-#define EXPORT_UNUSED_SYMBOL(sym)
-#define EXPORT_UNUSED_SYMBOL_GPL(sym)

/* Given an address, look for it in the exception tables. */
static inline const struct exception_table_entry *
--
1.6.3.2

2009-09-22 13:38:55

by Alan Jenkins

Subject: [PATCH 2/4] kbuild: sort the list of symbols exported by the kernel (__ksymtab)

modpost of vmlinux.o now extracts the ksymtab sections and outputs
sorted versions of them as .tmp_exports.c. These sorted sections
are linked into vmlinux and the original unsorted sections are
discarded.

This will allow modules to be loaded faster, resolving symbols using
binary search, without any increase in the memory needed for the
symbol tables.

This does not affect the building of modules, so hopefully it won't
affect compile times too much.

Signed-off-by: Alan Jenkins <[email protected]>
CC: Sam Ravnborg <[email protected]>
---
Makefile | 20 ++++++++---
include/asm-generic/vmlinux.lds.h | 40 ++++++++++++++++------
include/linux/mod_export.h | 19 ++++++++--
scripts/Makefile.modpost | 2 +-
scripts/mod/modpost.c | 66 ++++++++++++++++++++++++++++++++++++-
5 files changed, 124 insertions(+), 23 deletions(-)

diff --git a/Makefile b/Makefile
index 60de4ef..bb35b4d 100644
--- a/Makefile
+++ b/Makefile
@@ -674,6 +674,8 @@ libs-y := $(libs-y1) $(libs-y2)
# +--< $(vmlinux-main)
# | +--< driver/built-in.o mm/built-in.o + more
# |
+# +-< .tmp_exports.o (see comments regarding modpost of vmlinux.o)
+# |
# +-< kallsyms.o (see description in CONFIG_KALLSYMS section)
#
# vmlinux version (uname -v) cannot be updated during normal
@@ -737,7 +739,6 @@ define rule_vmlinux__
$(verify_kallsyms)
endef

-
ifdef CONFIG_KALLSYMS
# Generate section listing all symbols and add it into vmlinux $(kallsyms.o)
# It's a three stage process:
@@ -797,13 +798,13 @@ quiet_cmd_kallsyms = KSYM $@
$(call cmd,kallsyms)

# .tmp_vmlinux1 must be complete except kallsyms, so update vmlinux version
-.tmp_vmlinux1: $(vmlinux-lds) $(vmlinux-all) FORCE
+.tmp_vmlinux1: $(vmlinux-lds) $(vmlinux-all) .tmp_exports.o FORCE
$(call if_changed_rule,ksym_ld)

-.tmp_vmlinux2: $(vmlinux-lds) $(vmlinux-all) .tmp_kallsyms1.o FORCE
+.tmp_vmlinux2: $(vmlinux-lds) $(vmlinux-all) .tmp_exports.o .tmp_kallsyms1.o FORCE
$(call if_changed,vmlinux__)

-.tmp_vmlinux3: $(vmlinux-lds) $(vmlinux-all) .tmp_kallsyms2.o FORCE
+.tmp_vmlinux3: $(vmlinux-lds) $(vmlinux-all) .tmp_exports.o .tmp_kallsyms2.o FORCE
$(call if_changed,vmlinux__)

# Needs to visit scripts/ before $(KALLSYMS) can be used.
@@ -835,7 +836,7 @@ define rule_vmlinux-modpost
endef

# vmlinux image - including updated kernel symbols
-vmlinux: $(vmlinux-lds) $(vmlinux-init) $(vmlinux-main) vmlinux.o $(kallsyms.o) FORCE
+vmlinux: $(vmlinux-lds) $(vmlinux-init) $(vmlinux-main) vmlinux.o .tmp_exports.o $(kallsyms.o) FORCE
ifdef CONFIG_HEADERS_CHECK
$(Q)$(MAKE) -f $(srctree)/Makefile headers_check
endif
@@ -858,6 +859,12 @@ modpost-init := $(filter-out init/built-in.o, $(vmlinux-init))
vmlinux.o: $(modpost-init) $(vmlinux-main) FORCE
$(call if_changed_rule,vmlinux-modpost)

+# The modpost of vmlinux.o above creates .tmp_exports.c, a list of exported
+# symbols sorted by name. This list is linked into vmlinux to replace the
+# original unsorted exports. It allows symbols to be resolved efficiently
+# when loading modules.
+.tmp_exports.c: vmlinux.o
+
# The actual objects are generated when descending,
# make sure no implicit rule kicks in
$(sort $(vmlinux-init) $(vmlinux-main)) $(vmlinux-lds): $(vmlinux-dirs) ;
@@ -1190,7 +1197,8 @@ endif # CONFIG_MODULES
# Directories & files removed with 'make clean'
CLEAN_DIRS += $(MODVERDIR)
CLEAN_FILES += vmlinux System.map \
- .tmp_kallsyms* .tmp_version .tmp_vmlinux* .tmp_System.map
+ .tmp_kallsyms* .tmp_version .tmp_vmlinux* .tmp_System.map \
+ .tmp_exports*

# Directories & files removed with 'make mrproper'
MRPROPER_DIRS += include/config include2 usr/include include/generated
diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h
index 6ad76bf..16c3e01 100644
--- a/include/asm-generic/vmlinux.lds.h
+++ b/include/asm-generic/vmlinux.lds.h
@@ -253,79 +253,97 @@
\
TRACEDATA \
\
+ /* \
+ * Kernel symbol table: discard the original unsorted tables \
+ * in favour of the sorted versions. \
+ */ \
+ /DISCARD/ : { \
+ *(__ksymtab) \
+ *(__ksymtab_gpl) \
+ *(__ksymtab_unused) \
+ *(__ksymtab_unused_gpl) \
+ *(__ksymtab_gpl_future) \
+ *(__kcrctab) \
+ *(__kcrctab_gpl) \
+ *(__kcrctab_unused) \
+ *(__kcrctab_unused_gpl) \
+ *(__kcrctab_gpl_future) \
+ *(__ksymtab_strings) \
+ } \
+ \
/* Kernel symbol table: Normal symbols */ \
__ksymtab : AT(ADDR(__ksymtab) - LOAD_OFFSET) { \
VMLINUX_SYMBOL(__start___ksymtab) = .; \
- *(__ksymtab) \
+ *(__ksymtab_sorted) \
VMLINUX_SYMBOL(__stop___ksymtab) = .; \
} \
\
/* Kernel symbol table: GPL-only symbols */ \
__ksymtab_gpl : AT(ADDR(__ksymtab_gpl) - LOAD_OFFSET) { \
VMLINUX_SYMBOL(__start___ksymtab_gpl) = .; \
- *(__ksymtab_gpl) \
+ *(__ksymtab_gpl_sorted) \
VMLINUX_SYMBOL(__stop___ksymtab_gpl) = .; \
} \
\
/* Kernel symbol table: Normal unused symbols */ \
__ksymtab_unused : AT(ADDR(__ksymtab_unused) - LOAD_OFFSET) { \
VMLINUX_SYMBOL(__start___ksymtab_unused) = .; \
- *(__ksymtab_unused) \
+ *(__ksymtab_unused_sorted) \
VMLINUX_SYMBOL(__stop___ksymtab_unused) = .; \
} \
\
/* Kernel symbol table: GPL-only unused symbols */ \
__ksymtab_unused_gpl : AT(ADDR(__ksymtab_unused_gpl) - LOAD_OFFSET) { \
VMLINUX_SYMBOL(__start___ksymtab_unused_gpl) = .; \
- *(__ksymtab_unused_gpl) \
+ *(__ksymtab_unused_gpl_sorted) \
VMLINUX_SYMBOL(__stop___ksymtab_unused_gpl) = .; \
} \
\
/* Kernel symbol table: GPL-future-only symbols */ \
__ksymtab_gpl_future : AT(ADDR(__ksymtab_gpl_future) - LOAD_OFFSET) { \
VMLINUX_SYMBOL(__start___ksymtab_gpl_future) = .; \
- *(__ksymtab_gpl_future) \
+ *(__ksymtab_gpl_future_sorted) \
VMLINUX_SYMBOL(__stop___ksymtab_gpl_future) = .; \
} \
\
/* Kernel symbol table: Normal symbols */ \
__kcrctab : AT(ADDR(__kcrctab) - LOAD_OFFSET) { \
VMLINUX_SYMBOL(__start___kcrctab) = .; \
- *(__kcrctab) \
+ *(__kcrctab_sorted) \
VMLINUX_SYMBOL(__stop___kcrctab) = .; \
} \
\
/* Kernel symbol table: GPL-only symbols */ \
__kcrctab_gpl : AT(ADDR(__kcrctab_gpl) - LOAD_OFFSET) { \
VMLINUX_SYMBOL(__start___kcrctab_gpl) = .; \
- *(__kcrctab_gpl) \
+ *(__kcrctab_gpl_sorted) \
VMLINUX_SYMBOL(__stop___kcrctab_gpl) = .; \
} \
\
/* Kernel symbol table: Normal unused symbols */ \
__kcrctab_unused : AT(ADDR(__kcrctab_unused) - LOAD_OFFSET) { \
VMLINUX_SYMBOL(__start___kcrctab_unused) = .; \
- *(__kcrctab_unused) \
+ *(__kcrctab_unused_sorted) \
VMLINUX_SYMBOL(__stop___kcrctab_unused) = .; \
} \
\
/* Kernel symbol table: GPL-only unused symbols */ \
__kcrctab_unused_gpl : AT(ADDR(__kcrctab_unused_gpl) - LOAD_OFFSET) { \
VMLINUX_SYMBOL(__start___kcrctab_unused_gpl) = .; \
- *(__kcrctab_unused_gpl) \
+ *(__kcrctab_unused_gpl_sorted) \
VMLINUX_SYMBOL(__stop___kcrctab_unused_gpl) = .; \
} \
\
/* Kernel symbol table: GPL-future-only symbols */ \
__kcrctab_gpl_future : AT(ADDR(__kcrctab_gpl_future) - LOAD_OFFSET) { \
VMLINUX_SYMBOL(__start___kcrctab_gpl_future) = .; \
- *(__kcrctab_gpl_future) \
+ *(__kcrctab_gpl_future_sorted) \
VMLINUX_SYMBOL(__stop___kcrctab_gpl_future) = .; \
} \
\
/* Kernel symbol table: strings */ \
__ksymtab_strings : AT(ADDR(__ksymtab_strings) - LOAD_OFFSET) { \
- *(__ksymtab_strings) \
+ *(__ksymtab_strings_sorted) \
} \
\
/* __*init sections */ \
diff --git a/include/linux/mod_export.h b/include/linux/mod_export.h
index 3c51b9c..737a829 100644
--- a/include/linux/mod_export.h
+++ b/include/linux/mod_export.h
@@ -31,17 +31,20 @@ struct kernel_symbol
#endif

/* For every exported symbol, place a struct in the __ksymtab section */
-#define __EXPORT_SYMBOL(sym, sec) \
+#define __EXPORT_SYMBOL_NAME(sym, sec, stringsec) \
extern typeof(sym) sym; \
__CRC_SYMBOL(sym, sec) \
static const char __kstrtab_##sym[] \
- __attribute__((section("__ksymtab_strings"), aligned(1))) \
+ __attribute__((section(stringsec), aligned(1))) \
= MODULE_SYMBOL_PREFIX #sym; \
static const struct kernel_symbol __ksymtab_##sym \
__used \
__attribute__((section("__ksymtab" sec), unused)) \
= { (unsigned long)&sym, __kstrtab_##sym }

+#define __EXPORT_SYMBOL(sym, sec) \
+ __EXPORT_SYMBOL_NAME(sym, sec, "__ksymtab_strings")
+
#define EXPORT_SYMBOL(sym) \
__EXPORT_SYMBOL(sym, "")

@@ -51,14 +54,22 @@ struct kernel_symbol
#define EXPORT_SYMBOL_GPL_FUTURE(sym) \
__EXPORT_SYMBOL(sym, "_gpl_future")

+
#ifdef CONFIG_UNUSED_SYMBOLS
-#define EXPORT_UNUSED_SYMBOL(sym) __EXPORT_SYMBOL(sym, "_unused")
-#define EXPORT_UNUSED_SYMBOL_GPL(sym) __EXPORT_SYMBOL(sym, "_unused_gpl")
+#define EXPORT_UNUSED_SYMBOL(sym) __EXPORT_SYMBOL(sym, "_unused", "")
+#define EXPORT_UNUSED_SYMBOL_GPL(sym) __EXPORT_SYMBOL(sym, "_unused_gpl", "")
#else
#define EXPORT_UNUSED_SYMBOL(sym)
#define EXPORT_UNUSED_SYMBOL_GPL(sym)
#endif

+/*
+ * In vmlinux (but not modules), __ksymtab and friends are replaced by sorted
+ * versions generated by scripts/mod/modpost.c
+ */
+#define __EXPORT_SYMBOL_SORTED(sym, sec) \
+ __EXPORT_SYMBOL_NAME(sym, sec "_sorted", "__ksymtab_strings_sorted")
+
#endif /* __GENKSYMS__ */

#else /* !CONFIG_MODULES */
diff --git a/scripts/Makefile.modpost b/scripts/Makefile.modpost
index f4053dc..8bee8cd 100644
--- a/scripts/Makefile.modpost
+++ b/scripts/Makefile.modpost
@@ -98,7 +98,7 @@ __modpost: $(modules:.ko=.o) FORCE
$(call cmd,modpost) $(wildcard vmlinux) $(filter-out FORCE,$^)

quiet_cmd_kernel-mod = MODPOST $@
- cmd_kernel-mod = $(modpost) $@
+ cmd_kernel-mod = $(modpost) -x .tmp_exports.c $@

vmlinux.o: FORCE
@rm -fr $(kernelmarkersfile)
diff --git a/scripts/mod/modpost.c b/scripts/mod/modpost.c
index 4522948..6302ce7 100644
--- a/scripts/mod/modpost.c
+++ b/scripts/mod/modpost.c
@@ -154,6 +154,7 @@ struct symbol {
};

static struct symbol *symbolhash[SYMBOL_HASH_SIZE];
+unsigned int symbolcount;

/* This is based on the hash agorithm from gdbm, via tdb */
static inline unsigned int tdb_hash(const char *name)
@@ -191,6 +192,7 @@ static struct symbol *new_symbol(const char *name, struct module *module,
unsigned int hash;
struct symbol *new;

+ symbolcount++;
hash = tdb_hash(name) % SYMBOL_HASH_SIZE;
new = symbolhash[hash] = alloc_symbol(name, 0, symbolhash[hash]);
new->module = module;
@@ -1976,6 +1978,61 @@ static void write_dump(const char *fname)
write_if_changed(&buf, fname);
}

+static const char *section_names[] = {
+ [export_plain] = "",
+ [export_unused] = "_unused",
+ [export_gpl] = "_gpl",
+ [export_unused_gpl] = "_unused_gpl",
+ [export_gpl_future] = "_gpl_future",
+};
+
+static int compare_symbol_names(const void *a, const void *b)
+{
+ struct symbol *const *syma = a;
+ struct symbol *const *symb = b;
+
+ return strcmp((*syma)->name, (*symb)->name);
+}
+
+/* sort the exported symbols and output them as a C source file */
+static void write_exports(const char *fname)
+{
+ struct buffer buf = { };
+ struct symbol *sym, **symbols;
+ int i, n;
+
+ symbols = NOFAIL(malloc(sizeof (struct symbol *) * symbolcount));
+ n = 0;
+
+ for (i = 0; i < SYMBOL_HASH_SIZE; i++) {
+ for (sym = symbolhash[i]; sym; sym = sym->next)
+ symbols[n++] = sym;
+ }
+
+ qsort(symbols, n, sizeof(struct symbol *), compare_symbol_names);
+
+ buf_printf(&buf, "#include <linux/mod_export.h>\n");
+ buf_printf(&buf, "\n");
+
+ for (i = 0; i < n; i++) {
+ sym = symbols[i];
+
+ /*
+ * We need to declare the symbol with extern linkage.
+ * Don't worry about the type. The linker won't care, and the
+ * compiler won't complain unless we include a conflicting
+ * declaration.
+ */
+ buf_printf(&buf, "extern void *%s;\n", sym->name);
+
+ buf_printf(&buf, "__EXPORT_SYMBOL_SORTED(%s, \"%s\");\n",
+ sym->name,
+ section_names[sym->export]);
+ }
+
+ write_if_changed(&buf, fname);
+}
+
static void add_marker(struct module *mod, const char *name, const char *fmt)
{
char *line = NULL;
@@ -2077,6 +2134,7 @@ int main(int argc, char **argv)
struct buffer buf = { };
char *kernel_read = NULL, *module_read = NULL;
char *dump_write = NULL;
+ char *exports_write = NULL;
char *markers_read = NULL;
char *markers_write = NULL;
int opt;
@@ -2084,7 +2142,7 @@ int main(int argc, char **argv)
struct ext_sym_list *extsym_iter;
struct ext_sym_list *extsym_start = NULL;

- while ((opt = getopt(argc, argv, "i:I:e:cmsSo:awM:K:")) != -1) {
+ while ((opt = getopt(argc, argv, "i:I:e:cmsSo:awM:K:x:")) != -1) {
switch (opt) {
case 'i':
kernel_read = optarg;
@@ -2122,6 +2180,9 @@ int main(int argc, char **argv)
case 'w':
warn_unresolved = 1;
break;
+ case 'x':
+ exports_write = optarg;
+ break;
case 'M':
markers_write = optarg;
break;
@@ -2182,6 +2243,9 @@ int main(int argc, char **argv)
"'make CONFIG_DEBUG_SECTION_MISMATCH=y'\n",
sec_mismatch_count);

+ if (exports_write)
+ write_exports(exports_write);
+
if (markers_read)
read_markers(markers_read);

--
1.6.3.2

2009-09-22 13:39:06

by Alan Jenkins

Subject: [PATCH 3/4] module: unexport each_symbol()

find_symbol() is about to be re-written to avoid traversing every single
exported symbol. each_symbol() has acquired no other in-tree user, so
let us remove it.

Also struct symsearch is useless outside of module.c, so move it there
instead of cluttering up module.h.

Signed-off-by: Alan Jenkins <[email protected]>
---
include/linux/module.h | 15 ---------------
kernel/module.c | 14 ++++++++++++--
2 files changed, 12 insertions(+), 17 deletions(-)

diff --git a/include/linux/module.h b/include/linux/module.h
index 65b62e9..df25f99 100644
--- a/include/linux/module.h
+++ b/include/linux/module.h
@@ -348,17 +348,6 @@ static inline int within_module_init(unsigned long addr, struct module *mod)
/* Search for module by name: must hold module_mutex. */
struct module *find_module(const char *name);

-struct symsearch {
- const struct kernel_symbol *start, *stop;
- const unsigned long *crcs;
- enum {
- NOT_GPL_ONLY,
- GPL_ONLY,
- WILL_BE_GPL_ONLY,
- } licence;
- bool unused;
-};
-
/* Search for an exported symbol by name. */
const struct kernel_symbol *find_symbol(const char *name,
struct module **owner,
@@ -366,10 +355,6 @@ const struct kernel_symbol *find_symbol(const char *name,
bool gplok,
bool warn);

-/* Walk the exported symbol table */
-bool each_symbol(bool (*fn)(const struct symsearch *arr, struct module *owner,
- unsigned int symnum, void *data), void *data);
-
/* Returns 0 and fills in value, defined and namebuf, or -ERANGE if
symnum out of range. */
int module_get_kallsym(unsigned int symnum, unsigned long *value, char *type,
diff --git a/kernel/module.c b/kernel/module.c
index 2d53718..b24fc55 100644
--- a/kernel/module.c
+++ b/kernel/module.c
@@ -186,6 +186,17 @@ extern const unsigned long __start___kcrctab_unused[];
extern const unsigned long __start___kcrctab_unused_gpl[];
#endif

+struct symsearch {
+ const struct kernel_symbol *start, *stop;
+ const unsigned long *crcs;
+ enum {
+ NOT_GPL_ONLY,
+ GPL_ONLY,
+ WILL_BE_GPL_ONLY,
+ } licence;
+ bool unused;
+};
+
#ifndef CONFIG_MODVERSIONS
#define symversion(base, idx) NULL
#else
@@ -212,7 +223,7 @@ static bool each_symbol_in_section(const struct symsearch *arr,
}

/* Returns true as soon as fn returns true, otherwise false. */
-bool each_symbol(bool (*fn)(const struct symsearch *arr, struct module *owner,
+static bool each_symbol(bool (*fn)(const struct symsearch *arr, struct module *owner,
unsigned int symnum, void *data), void *data)
{
struct module *mod;
@@ -266,7 +277,6 @@ bool each_symbol(bool (*fn)(const struct symsearch *arr, struct module *owner,
}
return false;
}
-EXPORT_SYMBOL_GPL(each_symbol);

struct find_symbol_arg {
/* Input */
--
1.6.3.2

2009-09-22 13:39:09

by Alan Jenkins

Subject: [PATCH 4/4] module: speed up find_symbol() using binary search on the builtin symbol tables

The builtin symbol tables are now sorted, so we can use a binary
search.

The symbol tables in modules are still unsorted and require linear
searching as before. But since the generic each_symbol() is removed,
the code size only goes up 8 bytes overall on i386.

On my EeePC 701, coldplug is mainly cpu bound and takes 1.5 seconds
during boot. perf showed this change eliminated 20% of cpu cycles during
coldplug, saving 0.3 seconds of real time.

These savings may not be representative since my config is not very well
tuned. The numbers above represent the loading of 35 modules,
referencing a total of 441 symbols. Nevertheless, it shows why I think
this is worth doing.

Signed-off-by: Alan Jenkins <[email protected]>
---
kernel/module.c | 209 ++++++++++++++++++++++++++++++-------------------------
1 files changed, 113 insertions(+), 96 deletions(-)

diff --git a/kernel/module.c b/kernel/module.c
index b24fc55..c78481d 100644
--- a/kernel/module.c
+++ b/kernel/module.c
@@ -203,31 +203,38 @@ struct symsearch {
#define symversion(base, idx) ((base != NULL) ? ((base) + (idx)) : NULL)
#endif

-static bool each_symbol_in_section(const struct symsearch *arr,
- unsigned int arrsize,
- struct module *owner,
- bool (*fn)(const struct symsearch *syms,
- struct module *owner,
- unsigned int symnum, void *data),
- void *data)
-{
- unsigned int i, j;
-
- for (j = 0; j < arrsize; j++) {
- for (i = 0; i < arr[j].stop - arr[j].start; i++)
- if (fn(&arr[j], owner, i, data))
- return true;
+/* binary search on sorted symbols */
+static bool find_symbol_in_kernel_section(const struct symsearch *syms,
+ const char *name,
+ unsigned int *symnum)
+{
+ int lo = 0, hi = syms->stop - syms->start - 1;
+ int mid, cmp;
+
+ while (lo <= hi) {
+ mid = (lo + hi) / 2;
+ cmp = strcmp(syms->start[mid].name, name);
+ if (cmp == 0) {
+ *symnum = mid;
+ return true;
+ }
+ else if (cmp < 0)
+ hi = mid - 1;
+ else
+ lo = mid + 1;
}

return false;
}

-/* Returns true as soon as fn returns true, otherwise false. */
-static bool each_symbol(bool (*fn)(const struct symsearch *arr, struct module *owner,
- unsigned int symnum, void *data), void *data)
+
+static bool find_symbol_in_kernel(const char *name,
+ struct symsearch *sym,
+ unsigned int *symnum)
{
- struct module *mod;
- const struct symsearch arr[] = {
+ unsigned int j;
+
+ struct symsearch arr[] = {
{ __start___ksymtab, __stop___ksymtab, __start___kcrctab,
NOT_GPL_ONLY, false },
{ __start___ksymtab_gpl, __stop___ksymtab_gpl,
@@ -246,66 +253,101 @@ static bool each_symbol(bool (*fn)(const struct symsearch *arr, struct module *o
#endif
};

- if (each_symbol_in_section(arr, ARRAY_SIZE(arr), NULL, fn, data))
- return true;
+ for (j = 0; j < ARRAY_SIZE(arr); j++)
+ if (find_symbol_in_kernel_section(&arr[j], name, symnum)) {
+ *sym = arr[j];
+ return true;
+ }

- list_for_each_entry_rcu(mod, &modules, list) {
- struct symsearch arr[] = {
- { mod->syms, mod->syms + mod->num_syms, mod->crcs,
- NOT_GPL_ONLY, false },
- { mod->gpl_syms, mod->gpl_syms + mod->num_gpl_syms,
- mod->gpl_crcs,
- GPL_ONLY, false },
- { mod->gpl_future_syms,
- mod->gpl_future_syms + mod->num_gpl_future_syms,
- mod->gpl_future_crcs,
- WILL_BE_GPL_ONLY, false },
+ return false;
+}
+
+/* linear search on unsorted symbols */
+static bool find_symbol_in_module_section(const struct symsearch *syms,
+ const char *name,
+ unsigned int *symnum)
+{
+ unsigned int i;
+
+ for (i = 0; i < syms->stop - syms->start; i++)
+ if (strcmp(syms->start[i].name, name) == 0) {
+ *symnum = i;
+ return true;
+ }
+
+ return false;
+}
+
+
+static bool find_symbol_in_module(struct module *mod,
+ const char *name,
+ struct symsearch *sym,
+ unsigned int *symnum)
+{
+ unsigned int j;
+
+ struct symsearch arr[] = {
+ { mod->syms, mod->syms + mod->num_syms, mod->crcs,
+ NOT_GPL_ONLY, false },
+ { mod->gpl_syms, mod->gpl_syms + mod->num_gpl_syms,
+ mod->gpl_crcs,
+ GPL_ONLY, false },
+ { mod->gpl_future_syms,
+ mod->gpl_future_syms + mod->num_gpl_future_syms,
+ mod->gpl_future_crcs,
+ WILL_BE_GPL_ONLY, false },
#ifdef CONFIG_UNUSED_SYMBOLS
- { mod->unused_syms,
- mod->unused_syms + mod->num_unused_syms,
- mod->unused_crcs,
- NOT_GPL_ONLY, true },
- { mod->unused_gpl_syms,
- mod->unused_gpl_syms + mod->num_unused_gpl_syms,
- mod->unused_gpl_crcs,
- GPL_ONLY, true },
+ { mod->unused_syms,
+ mod->unused_syms + mod->num_unused_syms,
+ mod->unused_crcs,
+ NOT_GPL_ONLY, true },
+ { mod->unused_gpl_syms,
+ mod->unused_gpl_syms + mod->num_unused_gpl_syms,
+ mod->unused_gpl_crcs,
+ GPL_ONLY, true },
#endif
- };
+ };

- if (each_symbol_in_section(arr, ARRAY_SIZE(arr), mod, fn, data))
+ for (j = 0; j < ARRAY_SIZE(arr); j++) {
+ if (find_symbol_in_module_section(&arr[j], name, symnum)) {
+ *sym = arr[j];
return true;
+ }
}
+
return false;
}

-struct find_symbol_arg {
- /* Input */
- const char *name;
- bool gplok;
- bool warn;
+/* Find a symbol and return it, along with, (optional) crc and
+ * (optional) module which owns it */
+const struct kernel_symbol *find_symbol(const char *name,
+ struct module **owner,
+ const unsigned long **crc,
+ bool gplok,
+ bool warn)
+{
+ struct symsearch syms;
+ unsigned int symnum;
+ struct module *mod = NULL;

- /* Output */
- struct module *owner;
- const unsigned long *crc;
- const struct kernel_symbol *sym;
-};
+ if (find_symbol_in_kernel(name, &syms, &symnum))
+ goto found;

-static bool find_symbol_in_section(const struct symsearch *syms,
- struct module *owner,
- unsigned int symnum, void *data)
-{
- struct find_symbol_arg *fsa = data;
+ list_for_each_entry_rcu(mod, &modules, list)
+ if (find_symbol_in_module(mod, name, &syms, &symnum))
+ goto found;

- if (strcmp(syms->start[symnum].name, fsa->name) != 0)
- return false;
+ DEBUGP("Failed to find symbol %s\n", name);
+ return NULL;

- if (!fsa->gplok) {
- if (syms->licence == GPL_ONLY)
- return false;
- if (syms->licence == WILL_BE_GPL_ONLY && fsa->warn) {
+found:
+ if (!gplok) {
+ if (syms.licence == GPL_ONLY)
+ return NULL;
+ if (syms.licence == WILL_BE_GPL_ONLY && warn) {
printk(KERN_WARNING "Symbol %s is being used "
"by a non-GPL module, which will not "
- "be allowed in the future\n", fsa->name);
+ "be allowed in the future\n", name);
printk(KERN_WARNING "Please see the file "
"Documentation/feature-removal-schedule.txt "
"in the kernel source tree for more details.\n");
@@ -313,9 +355,9 @@ static bool find_symbol_in_section(const struct symsearch *syms,
}

#ifdef CONFIG_UNUSED_SYMBOLS
- if (syms->unused && fsa->warn) {
+ if (syms.unused && warn) {
printk(KERN_WARNING "Symbol %s is marked as UNUSED, "
- "however this module is using it.\n", fsa->name);
+ "however this module is using it.\n", name);
printk(KERN_WARNING
"This symbol will go away in the future.\n");
printk(KERN_WARNING
@@ -326,36 +368,11 @@ static bool find_symbol_in_section(const struct symsearch *syms,
}
#endif

- fsa->owner = owner;
- fsa->crc = symversion(syms->crcs, symnum);
- fsa->sym = &syms->start[symnum];
- return true;
-}
-
-/* Find a symbol and return it, along with, (optional) crc and
- * (optional) module which owns it */
-const struct kernel_symbol *find_symbol(const char *name,
- struct module **owner,
- const unsigned long **crc,
- bool gplok,
- bool warn)
-{
- struct find_symbol_arg fsa;
-
- fsa.name = name;
- fsa.gplok = gplok;
- fsa.warn = warn;
-
- if (each_symbol(find_symbol_in_section, &fsa)) {
- if (owner)
- *owner = fsa.owner;
- if (crc)
- *crc = fsa.crc;
- return fsa.sym;
- }
-
- DEBUGP("Failed to find symbol %s\n", name);
- return NULL;
+ if (owner)
+ *owner = mod;
+ if (crc)
+ *crc = symversion(syms.crcs, symnum);
+ return &syms.start[symnum];
}
EXPORT_SYMBOL_GPL(find_symbol);

--
1.6.3.2

2009-09-22 14:42:15

by Sam Ravnborg

Subject: Re: [PATCH 1/4] module: extract __EXPORT_SYMBOL from module.h into mod_export.h

On Tue, Sep 22, 2009 at 02:38:35PM +0100, Alan Jenkins wrote:
> The new header mod_export.h allows __EXPORT_SYMBOL to be used without
> pulling in any function or variable declarations. It will be used by
> the build system to help sort the list of symbols exported by the
> kernel.
>
> Signed-off-by: Alan Jenkins <[email protected]>
> ---
> include/linux/mod_export.h | 74 ++++++++++++++++++++++++++++++++++++++++++++
> include/linux/module.h | 63 +------------------------------------
> 2 files changed, 75 insertions(+), 62 deletions(-)
> create mode 100644 include/linux/mod_export.h
>
> diff --git a/include/linux/mod_export.h b/include/linux/mod_export.h
> new file mode 100644
> index 0000000..3c51b9c
> --- /dev/null
> +++ b/include/linux/mod_export.h
> @@ -0,0 +1,74 @@
> +#ifndef LINUX_MOD_EXPORT_H
> +#define LINUX_MOD_EXPORT_H
> +
> +#include <linux/compiler.h>
> +#include <asm/module.h>

Do you need this include?

Sam

2009-09-22 14:45:21

by Alan Jenkins

Subject: Re: [PATCH 1/4] module: extract __EXPORT_SYMBOL from module.h into mod_export.h

Sam Ravnborg wrote:
> On Tue, Sep 22, 2009 at 02:38:35PM +0100, Alan Jenkins wrote:
>
>> The new header mod_export.h allows __EXPORT_SYMBOL to be used without
>> pulling in any function or variable declarations. It will be used by
>> the build system to help sort the list of symbols exported by the
>> kernel.
>>
>> Signed-off-by: Alan Jenkins <[email protected]>
>> ---
>> include/linux/mod_export.h | 74 ++++++++++++++++++++++++++++++++++++++++++++
>> include/linux/module.h | 63 +------------------------------------
>> 2 files changed, 75 insertions(+), 62 deletions(-)
>> create mode 100644 include/linux/mod_export.h
>>
>> diff --git a/include/linux/mod_export.h b/include/linux/mod_export.h
>> new file mode 100644
>> index 0000000..3c51b9c
>> --- /dev/null
>> +++ b/include/linux/mod_export.h
>> @@ -0,0 +1,74 @@
>> +#ifndef LINUX_MOD_EXPORT_H
>> +#define LINUX_MOD_EXPORT_H
>> +
>> +#include <linux/compiler.h>
>> +#include <asm/module.h>
>>
>
> Do you need this include?
>
> Sam
>
asm/module.h is needed in case the arch sets MODULE_SYMBOL_PREFIX.
(and linux/compiler.h is needed for __used).
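
For illustration, a rough sketch of how the prefix feeds into the export
macros. The arch header shown is only an example quoted from memory, and
the expansion is abbreviated from __EXPORT_SYMBOL in this patch:

/* e.g. arch/blackfin/include/asm/module.h: user symbols carry a "_" prefix */
#define MODULE_SYMBOL_PREFIX "_"

/* With that prefix, EXPORT_SYMBOL(foo) records the name "_foo": */
static const char __kstrtab_foo[]
	__attribute__((section("__ksymtab_strings"), aligned(1)))
	= MODULE_SYMBOL_PREFIX "foo";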

Let me know if you think it needs a comment.

Thanks
Alan

2009-09-22 14:48:18

by Sam Ravnborg

Subject: Re: [PATCH 2/4] kbuild: sort the list of symbols exported by the kernel (__ksymtab)

On Tue, Sep 22, 2009 at 02:38:36PM +0100, Alan Jenkins wrote:
> modpost of vmlinux.o now extracts the ksymtab sections and outputs
> sorted versions of them as .tmp_exports.c. These sorted sections
> are linked into vmlinux and the original unsorted sections are
> discarded.
>
> This will allow modules to be loaded faster, resolving symbols using
> binary search, without any increase in the memory needed for the
> symbol tables.
>
> This does not affect the building of modules, so hopefully it won't
> affect compile times too much.

I do not quite follow you here.

With your patch:

For vmlinux we define our symbols in sections
named *_sorted - but they are not sorted.

We then create a small .c file that uses the original section names
which is what is used in the final vmlinux.

Could we replace the content of these sections rather than playing
games with the names?

Sam

2009-09-22 15:08:56

by Alan Jenkins

Subject: Re: [PATCH 2/4] kbuild: sort the list of symbols exported by the kernel (__ksymtab)

Sam Ravnborg wrote:
> On Tue, Sep 22, 2009 at 02:38:36PM +0100, Alan Jenkins wrote:
>
>> modpost of vmlinux.o now extracts the ksymtab sections and outputs
>> sorted versions of them as .tmp_exports.c. These sorted sections
>> are linked into vmlinux and the original unsorted sections are
>> discarded.
>>
>> This will allow modules to be loaded faster, resolving symbols using
>> binary search, without any increase in the memory needed for the
>> symbol tables.
>>
>> This does not affect the building of modules, so hopefully it won't
>> affect compile times too much.
>>
>
> I do not quite follow you here.
>
> With your patch:
>
> For vmlinux we define our symbols in sections
> named *_sorted - but they are not sorted.
>
> We then create a small .c file that uses the original section names
> which is what is used in the final vmlinux.
>

Actually it's intended to work the other way round.

EXPORT_SYMBOL() generates symbols in __ksymtab as usual. These get as
far as vmlinux.o before I start changing anything.

My .c file then uses __EXPORT_SYMBOL_SORTED() to generate symbols in
__ksymtab_sorted (or __ksymtab_gpl_sorted etc.), based on the symbols in
vmlinux.o. So the _sorted sections _are_ sorted.

Finally, I changed the vmlinux linker script to effectively rename
__ksymtab_sorted and discard the original unsorted __ksymtab.

I hope that clears that up.
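
To make that concrete: the .tmp_exports.c generated by write_exports() in
modpost is just a flat list, sorted by name. With illustrative symbol names
and export classes it looks something like this:

#include <linux/mod_export.h>

/* names and export classes below are only examples */
extern void *printk;
__EXPORT_SYMBOL_SORTED(printk, "");
extern void *unregister_kprobe;
__EXPORT_SYMBOL_SORTED(unregister_kprobe, "_gpl");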

> Could we replace the content of these sections rather than playing
> games with the names?
>
> Sam
>

Thinking about it, I guess I could avoid using new section names. I
could change the vmlinux linker script to discard all __ksymtab
sections, _except_ those which came from the file .tmp_exports.o. (And
maybe rename .tmp_exports.o to .tmp_sorted_exports.o to make it more
obvious).

That would avoid the need to add separate __EXPORT_SYMBOL_NAME() and
__EXPORT_SYMBOL_SORTED(). OTOH, it means the linker script uses more
features and is more tightly coupled to the build system. I.e. the
script references a specific file.

I prefer my original method, but I can change it if you disagree :-).

Thanks!
Alan

2009-09-22 15:45:36

by Daniel Walker

Subject: Re: [PATCH 1/4] module: extract __EXPORT_SYMBOL from module.h into mod_export.h

On Tue, 2009-09-22 at 14:38 +0100, Alan Jenkins wrote:
> +
> +struct kernel_symbol
> +{
> + unsigned long value;
> + const char *name;
> +};

One relevant checkpatch error above,

ERROR: open brace '{' following struct go on the same line
#71: FILE: include/linux/mod_export.h:13:
+struct kernel_symbol
+{

You have another similar one in patch 4 ,

ERROR: else should follow close brace '}'
#99: FILE: kernel/module.c:221:
+ }
+ else if (cmp < 0)


Could you correct those?

Daniel

2009-09-22 15:50:18

by Alan Jenkins

Subject: Re: [PATCH 1/4] module: extract __EXPORT_SYMBOL from module.h into mod_export.h

Daniel Walker wrote:
> On Tue, 2009-09-22 at 14:38 +0100, Alan Jenkins wrote:
>
>> +
>> +struct kernel_symbol
>> +{
>> + unsigned long value;
>> + const char *name;
>> +};
>>
>
> One relevant checkpatch error above,
>
> ERROR: open brace '{' following struct go on the same line
> #71: FILE: include/linux/mod_export.h:13:
> +struct kernel_symbol
> +{
>
> You have another similar one in patch 4 ,
>
> ERROR: else should follow close brace '}'
> #99: FILE: kernel/module.c:221:
> + }
> + else if (cmp < 0)
>
>
> Could you correct those?
>
> Daniel
>

Sure, thanks. I'll hold off on resending though, in case there are any
more radical changes needed.

Alan

2009-09-22 15:55:23

by Daniel Walker

Subject: Re: [PATCH 1/4] module: extract __EXPORT_SYMBOL from module.h into mod_export.h

On Tue, 2009-09-22 at 16:50 +0100, Alan Jenkins wrote:
> Daniel Walker wrote:
> > On Tue, 2009-09-22 at 14:38 +0100, Alan Jenkins wrote:
> >
> >> +
> >> +struct kernel_symbol
> >> +{
> >> + unsigned long value;
> >> + const char *name;
> >> +};
> >>
> >
> > One relevant checkpatch error above,
> >
> > ERROR: open brace '{' following struct go on the same line
> > #71: FILE: include/linux/mod_export.h:13:
> > +struct kernel_symbol
> > +{
> >
> > You have another similar one in patch 4 ,
> >
> > ERROR: else should follow close brace '}'
> > #99: FILE: kernel/module.c:221:
> > + }
> > + else if (cmp < 0)
> >
> >
> > Could you correct those?
> >
> > Daniel
> >
>
> Sure, thanks. I'll hold off on resending though, in case there are any
> more radical changes needed.

Ok thanks..

Daniel

2009-09-23 16:52:17

by Alan Jenkins

Subject: Re: module: Speed up symbol resolution during module loading

[CC to lkml fixed :-/]

Rusty Russell wrote:
> On Tue, 22 Sep 2009 10:58:28 pm Alan Jenkins wrote:
>
>> The following series applies against v2.6.31. It sorts the tables of builtin
>> symbols, so the module loader can resolve them using a binary search.
>>
>> The kbuild changes to achieve this are less scary than I expected. I'm
>> optimistic that they can be accepted without radical alteration :-).
>>
>> Quoting from the last patch in this series:
>>
>> "On my EeePC 701, coldplug is mainly cpu bound and takes 1.5 seconds
>> during boot. perf showed this change eliminated 20% of cpu cycles during
>> coldplug, saving 0.3 seconds of real time.
>>
>
> Hi Alan,
>
> This seems useful, but wouldn't it be simpler to just sort at boot time?
> The same could be done for modules, possibly reducing code.
>
> Alternately, there's a standard way of hashing ELF symbols, but I'm not sure
> we can convince the linker to generate it for vmlinux (I haven't looked
> though).
>
> Thanks!
> Rusty.
>

I'm concerned that people would object to the extra overhead at boot
time. I've hacked up a prototype to sort at boot time and it takes 7ms
on the same hardware. That's just under the average time saved
loading one of my modules. But the comparison is dodgy because it
doesn't include side-effects (on cache) for the sort. The break-even
point will depend on the specific modules used.
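
(For reference, the boot-time prototype amounts to something like the sketch
below: one call to the kernel's sort() from lib/sort.c per export class, run
once early in boot. It is not part of the posted series, and it assumes the
__ksymtab sections are made writable so they can be sorted in place.)

#include <linux/sort.h>
#include <linux/string.h>
#include <linux/init.h>
#include <linux/module.h>

/* section bounds as in kernel/module.c, but writable for this sketch */
extern struct kernel_symbol __start___ksymtab[];
extern struct kernel_symbol __stop___ksymtab[];

static int __init cmp_ksym(const void *a, const void *b)
{
	const struct kernel_symbol *ka = a, *kb = b;

	return strcmp(ka->name, kb->name);
}

static void __init sort_builtin_exports(void)
{
	/* repeat for the _gpl, _gpl_future and _unused tables as needed */
	sort(__start___ksymtab, __stop___ksymtab - __start___ksymtab,
	     sizeof(struct kernel_symbol), cmp_ksym, NULL);
}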

That 7ms will be pure overhead in some cases - i.e. if your config is
"localyesconfig" (build in all currently used modules), but you keep
modules enabled to allow some flexibility. I'm not happy about that myself.

Hash tables have a similar disadvantage. They would add more
unswappable pages, in order to optimise a function which is only
significant at boot time. Binary search already brings symbol
resolution down from ~60% of modprobe's time to ~7%. The nice thing
about using sorted tables is that they stay the same size.

Regards
Alan

2009-09-23 17:34:49

by Tim Abbott

Subject: [PATCH 0/2] Use generic binary search function

> The builtin symbol tables are now sorted, so we can use a binary
> search.

Hi Alan,

There are a large number of hand-coded binary searches in the kernel (run
"git grep search | grep binary" to find many of them). Since getting
binary searches right is difficult, I've been meaning to submit a
patch adding a lib/bsearch.c for some time now, so that we only have
to get binary search right in one place.

This patch series contains a generic binary search implementation as
well as something converting your module.c patch to use it. I haven't
had a chance to boot-test yet, but this should give you a sense of
what this is going to look like. To me, you take some somewhat
complex code and replace it with some very straightforward code.

This generic binary search implementation comes from Ksplice. It has
the same basic API as the C library bsearch() function. Ksplice uses
it in half a dozen places with 4 different comparison functions, and I
think our code is substantially cleaner because of this.

I don't claim that this is a perfect implementation of binary search,
though it is reasonably well tested. My theory is that it is about as
good as all the hand-coded copies all over the kernel, and we can
optimize it to perfection later.
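
For anyone skimming, a minimal usage sketch of the proposed interface. The
helpers here are made up for illustration; only bsearch() itself comes from
the patch:

#include <linux/bsearch.h>
#include <linux/types.h>

static int cmp_int(const void *key, const void *elt)
{
	int k = *(const int *)key;
	int e = *(const int *)elt;

	return (k > e) - (k < e);
}

/* true if needle occurs in the ascending-sorted array */
static bool int_array_contains(const int *sorted, size_t n, int needle)
{
	return bsearch(&needle, sorted, n, sizeof(*sorted), cmp_int) != NULL;
}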

Tim Abbott (2):
lib: Add generic binary search function to the kernel.
module: use bsearch in find_symbol_in_kernel_section.

include/linux/bsearch.h | 9 ++++++++
kernel/module.c | 34 +++++++++++++----------------
lib/Makefile | 2 +-
lib/bsearch.c | 53 +++++++++++++++++++++++++++++++++++++++++++++++
4 files changed, 78 insertions(+), 20 deletions(-)
create mode 100644 include/linux/bsearch.h
create mode 100644 lib/bsearch.c

2009-09-23 17:29:45

by Tim Abbott

Subject: [PATCH 1/2] lib: Add generic binary search function to the kernel.

There are a large number of hand-coded binary searches in the kernel (run
"git grep search | grep binary" to find many of them). Since in my
experience, hand-coding binary searches can be error-prone, it seems
worth cleaning this up by providing a generic binary search function.

This generic binary search implementation comes from Ksplice. It has
the same basic API as the C library bsearch() function. Ksplice uses
it in half a dozen places with 4 different comparison functions, and I
think our code is substantially cleaner because of this.

Signed-off-by: Tim Abbott <[email protected]>
---
include/linux/bsearch.h | 9 ++++++++
lib/Makefile | 2 +-
lib/bsearch.c | 53 +++++++++++++++++++++++++++++++++++++++++++++++
3 files changed, 63 insertions(+), 1 deletions(-)
create mode 100644 include/linux/bsearch.h
create mode 100644 lib/bsearch.c

diff --git a/include/linux/bsearch.h b/include/linux/bsearch.h
new file mode 100644
index 0000000..90b1aa8
--- /dev/null
+++ b/include/linux/bsearch.h
@@ -0,0 +1,9 @@
+#ifndef _LINUX_BSEARCH_H
+#define _LINUX_BSEARCH_H
+
+#include <linux/types.h>
+
+void *bsearch(const void *key, const void *base, size_t num, size_t size,
+ int (*cmp)(const void *key, const void *elt));
+
+#endif /* _LINUX_BSEARCH_H */
diff --git a/lib/Makefile b/lib/Makefile
index 2e78277..fb60af1 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -21,7 +21,7 @@ lib-y += kobject.o kref.o klist.o

obj-y += bcd.o div64.o sort.o parser.o halfmd4.o debug_locks.o random32.o \
bust_spinlocks.o hexdump.o kasprintf.o bitmap.o scatterlist.o \
- string_helpers.o gcd.o
+ string_helpers.o gcd.o bsearch.o

ifeq ($(CONFIG_DEBUG_KOBJECT),y)
CFLAGS_kobject.o += -DDEBUG
diff --git a/lib/bsearch.c b/lib/bsearch.c
new file mode 100644
index 0000000..4297c98
--- /dev/null
+++ b/lib/bsearch.c
@@ -0,0 +1,53 @@
+/*
+ * A generic implementation of binary search for the Linux kernel
+ *
+ * Copyright (C) 2008-2009 Ksplice, Inc.
+ * Author: Tim Abbott <[email protected]>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as
+ * published by the Free Software Foundation; version 2.
+ */
+
+#include <linux/module.h>
+#include <linux/bsearch.h>
+
+/*
+ * bsearch - binary search an array of elements
+ * @key: pointer to item being searched for
+ * @base: pointer to the first element in the array to search
+ * @num: number of elements
+ * @size: size of each element
+ * @cmp: pointer to comparison function
+ *
+ * This function does a binary search on the given array. The
+ * contents of the array should already be in ascending sorted order
+ * under the provided comparison function.
+ *
+ * Note that the key need not have the same type as the elements in
+ * the array, e.g. key could be a string and the comparison function
+ * could compare the string with the struct's name field. However, if
+ * the key and elements in the array are of the same type, you can use
+ * the same comparison function for both sort() and bsearch().
+ */
+void *bsearch(const void *key, const void *base, size_t num, size_t size,
+ int (*cmp)(const void *key, const void *elt))
+{
+ int start = 0, end = num - 1, mid, result;
+ if (num == 0)
+ return NULL;
+
+ while (start <= end) {
+ mid = (start + end) / 2;
+ result = cmp(key, base + mid * size);
+ if (result < 0)
+ end = mid - 1;
+ else if (result > 0)
+ start = mid + 1;
+ else
+ return (void *)base + mid * size;
+ }
+
+ return NULL;
+}
+EXPORT_SYMBOL(bsearch);
--
1.6.3.3

2009-09-23 17:34:39

by Tim Abbott

Subject: [PATCH 2/2] module: use bsearch in find_symbol_in_kernel_section.

Signed-off-by: Tim Abbott <[email protected]>
---
kernel/module.c | 34 +++++++++++++++-------------------
1 files changed, 15 insertions(+), 19 deletions(-)

diff --git a/kernel/module.c b/kernel/module.c
index 9b19f23..25ff16b 100644
--- a/kernel/module.c
+++ b/kernel/module.c
@@ -55,6 +55,7 @@
#include <linux/async.h>
#include <linux/percpu.h>
#include <linux/kmemleak.h>
+#include <linux/bsearch.h>

#define CREATE_TRACE_POINTS
#include <trace/events/module.h>
@@ -209,31 +210,26 @@ struct symsearch {
#define symversion(base, idx) ((base != NULL) ? ((base) + (idx)) : NULL)
#endif

-/* binary search on sorted symbols */
+static int symbol_compare(const void *key, const void *elt)
+{
+ const char *str = key;
+ const struct kernel_symbol *sym = elt;
+ return strcmp(sym->name, str);
+}
+
static bool find_symbol_in_kernel_section(const struct symsearch *syms,
const char *name,
unsigned int *symnum)
{
- int lo = 0, hi = syms->stop - syms->start - 1;
- int mid, cmp;
-
- while (lo <= hi) {
- mid = (lo + hi) / 2;
- cmp = strcmp(syms->start[mid].name, name);
- if (cmp == 0) {
- *symnum = mid;
- return true;
- }
- else if (cmp < 0)
- hi = mid - 1;
- else
- lo = mid + 1;
- }
-
- return false;
+ const struct kernel_symbol *sym =
+ bsearch(name, syms->start, syms->stop - syms->start,
+ sizeof(*syms->start), symbol_compare);
+ if (sym == NULL)
+ return false;
+ *symnum = sym - syms->start;
+ return true;
}

-
static bool find_symbol_in_kernel(const char *name,
struct symsearch *sym,
unsigned int *symnum)
--
1.6.3.3

2009-09-23 17:34:33

by Tim Abbott

Subject: Re: [PATCH 3/4] module: unexport each_symbol()

Hi Alan,

I really like what you're doing with this patch series; using a binary
search for the symbol table has been something I've wanted to do in the
kernel's module loader ever since I optimized Ksplice's symbol resolution
code to use binary rather than linear searches.

While Ksplice is not in-tree yet, Ksplice is a user of each_symbol (and in
fact was the reason why each_symbol was originally exported). Is it easy
to modify your patch series so that you don't have to remove each_symbol?

Best regards,
-Tim Abbott

On Tue, 22 Sep 2009, Alan Jenkins wrote:

> find_symbol() is about to be re-written to avoid traversing every single
> exported symbol. each_symbol() has acquired no other in-tree user, so
> let us remove it.
>
> Also struct symsearch is useless outside of module.c, so move it there
> instead of cluttering up module.h.
>
> Signed-off-by: Alan Jenkins <[email protected]>
> ---
> include/linux/module.h | 15 ---------------
> kernel/module.c | 14 ++++++++++++--
> 2 files changed, 12 insertions(+), 17 deletions(-)
>
> diff --git a/include/linux/module.h b/include/linux/module.h
> index 65b62e9..df25f99 100644
> --- a/include/linux/module.h
> +++ b/include/linux/module.h
> @@ -348,17 +348,6 @@ static inline int within_module_init(unsigned long addr, struct module *mod)
> /* Search for module by name: must hold module_mutex. */
> struct module *find_module(const char *name);
>
> -struct symsearch {
> - const struct kernel_symbol *start, *stop;
> - const unsigned long *crcs;
> - enum {
> - NOT_GPL_ONLY,
> - GPL_ONLY,
> - WILL_BE_GPL_ONLY,
> - } licence;
> - bool unused;
> -};
> -
> /* Search for an exported symbol by name. */
> const struct kernel_symbol *find_symbol(const char *name,
> struct module **owner,
> @@ -366,10 +355,6 @@ const struct kernel_symbol *find_symbol(const char *name,
> bool gplok,
> bool warn);
>
> -/* Walk the exported symbol table */
> -bool each_symbol(bool (*fn)(const struct symsearch *arr, struct module *owner,
> - unsigned int symnum, void *data), void *data);
> -
> /* Returns 0 and fills in value, defined and namebuf, or -ERANGE if
> symnum out of range. */
> int module_get_kallsym(unsigned int symnum, unsigned long *value, char *type,
> diff --git a/kernel/module.c b/kernel/module.c
> index 2d53718..b24fc55 100644
> --- a/kernel/module.c
> +++ b/kernel/module.c
> @@ -186,6 +186,17 @@ extern const unsigned long __start___kcrctab_unused[];
> extern const unsigned long __start___kcrctab_unused_gpl[];
> #endif
>
> +struct symsearch {
> + const struct kernel_symbol *start, *stop;
> + const unsigned long *crcs;
> + enum {
> + NOT_GPL_ONLY,
> + GPL_ONLY,
> + WILL_BE_GPL_ONLY,
> + } licence;
> + bool unused;
> +};
> +
> #ifndef CONFIG_MODVERSIONS
> #define symversion(base, idx) NULL
> #else
> @@ -212,7 +223,7 @@ static bool each_symbol_in_section(const struct symsearch *arr,
> }
>
> /* Returns true as soon as fn returns true, otherwise false. */
> -bool each_symbol(bool (*fn)(const struct symsearch *arr, struct module *owner,
> +static bool each_symbol(bool (*fn)(const struct symsearch *arr, struct module *owner,
> unsigned int symnum, void *data), void *data)
> {
> struct module *mod;
> @@ -266,7 +277,6 @@ bool each_symbol(bool (*fn)(const struct symsearch *arr, struct module *owner,
> }
> return false;
> }
> -EXPORT_SYMBOL_GPL(each_symbol);
>
> struct find_symbol_arg {
> /* Input */
> --
> 1.6.3.2
>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to [email protected]
> More majordomo info at http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at http://www.tux.org/lkml/
>

2009-09-23 17:38:59

by Sam Ravnborg

Subject: Re: [PATCH 2/4] kbuild: sort the list of symbols exported by the kernel (__ksymtab)

On Tue, Sep 22, 2009 at 04:08:55PM +0100, Alan Jenkins wrote:
> Sam Ravnborg wrote:
> > On Tue, Sep 22, 2009 at 02:38:36PM +0100, Alan Jenkins wrote:
> >
> >> modpost of vmlinux.o now extracts the ksymtab sections and outputs
> >> sorted versions of them as .tmp_exports.c. These sorted sections
> >> are linked into vmlinux and the original unsorted sections are
> >> discarded.
> >>
> >> This will allow modules to be loaded faster, resolving symbols using
> >> binary search, without any increase in the memory needed for the
> >> symbol tables.
> >>
> >> This does not affect the building of modules, so hopefully it won't
> >> affect compile times too much.
> >>
> >
> > I do not quite follow you here.
> >
> > With your patch:
> >
> > For vmlinux we define our symbols in sections
> > named *_sorted - but they are not sorted.
> >
> > We then create a small .c file that uses the original section names
> > which is what is used in the final vmlinux.
> >
>
> Actually it's intended to work the other way round.
>
> EXPORT_SYMBOL() generates symbols in __ksymtab as usual. These get as
> far as vmlinux.o before I start changing anything.
>
> My .c file then uses __EXPORT_SYMBOL_SORTED() to generate symbols in
> __ksymtab_sorted (or __ksymtab_gpl_sorted etc.), based on the symbols in
> vmlinux.o. So the _sorted sections _are_ sorted.
>
> Finally, I changed the vmlinux linker script to effectively rename
> __ksymtab_sorted and discard the original unsorted __ksymtab.
>
> I hope that clears that up.
>
> > Could we replace the content of these sections rather than playing
> > games with the names?
> >
> > Sam
> >
>
> Thinking about it, I guess I could avoid using new section names. I
> could change the vmlinux linker script to discard all __ksymtab
> sections, _except_ those which came from the file .tmp_exports.o. (And
> maybe rename .tmp_exports.o to .tmp_sorted_exports.o to make it more
> obvious).
>
> That would avoid the need to add separate __EXPORT_SYMBOL_NAME() and
> __EXPORT_SYMBOL_SORTED(). OTOH, it means the linker script uses more
> features and is more tightly coupled to the build system. I.e. the
> script references a specific file.
>
> I prefer my original method, but I can change it if you disagree :-).

One of the problems you hit is that we use the same linker script for
all the links we do of the kernel image.
I had a patchset that pulled out the final link from the top-level
Makefile and made it a simple shell script.
I should give that another go, which would then be a good basis for
this patchset and a simplified final linking.

Sam

2009-09-23 18:07:43

by Alan Jenkins

Subject: Re: [PATCH 0/2] Use generic binary search function

Tim Abbott wrote:
>> The builtin symbol tables are now sorted, so we can use a binary
>> search.
>>
>
> Hi Alan,
>
> There are a large number of hand-coded binary searches in the kernel (run
> "git grep search | grep binary" to find many of them). Since getting
> binary searches right is difficult, I've been meaning to submit a
> patch adding a lib/bsearch.c for some time now, so that we only have
> to get binary search right in one place.
>
> This patch series contains a generic binary search implementation as
> well as something converting your module.c patch to use it. I haven't
> had a chance to boot-test yet,

You might want to wait. I found some weirdness in my patches - as if my
bsearch is backwards but the tables are also being reversed.

Whatever the problem, it endorses the idea of having one known good
bsearch function :-).

> but this should give you a sense of
> what this is going to look like. To me, you take some somewhat
> complex code and replace it with some very straightforward code.
>
> This generic binary search implementation comes from Ksplice. It has
> the same basic API as the C library bsearch() function. Ksplice uses
> it in half a dozen places with 4 different comparison functions, and I
> think our code is substantially cleaner because of this.
>
> I don't claim that this is a perfect implementation of binary search,
> though it is reasonably well tested. My theory is that it is about as
> good as all the hand-coded copies all over the kernel, and we can
> optimize it to perfection later.
>
> Tim Abbott (2):
> lib: Add generic binary search function to the kernel.
> module: use bsearch in find_symbol_in_kernel_section.
>
> include/linux/bsearch.h | 9 ++++++++
> kernel/module.c | 34 +++++++++++++----------------
> lib/Makefile | 2 +-
> lib/bsearch.c | 53 +++++++++++++++++++++++++++++++++++++++++++++++
> 4 files changed, 78 insertions(+), 20 deletions(-)
> create mode 100644 include/linux/bsearch.h
> create mode 100644 lib/bsearch.c
>
>

2009-09-23 18:35:40

by Alan Jenkins

Subject: Re: [PATCH 3/4] module: unexport each_symbol()

Tim Abbott wrote:
> Hi Alan,
>
> I really like what you're doing with this patch series; using a binary
> search for the symbol table has been something I've wanted to do in the
> kernel's module loader ever since I optimized Ksplice's symbol resolution
> code to use binary rather than linear searches.
>
> While Ksplice is not in-tree yet, Ksplice is a user of each_symbol (and in
> fact was the reason why each_symbol was originally exported). Is it easy
> to modify your patch series so that you don't have to remove each_symbol?
>

I'll have to think harder about it, but that's my problem. I should have
checked the changelog :-).

Regards
Alan

2009-09-23 22:00:24

by Christoph Hellwig

Subject: Re: [PATCH 3/4] module: unexport each_symbol()

On Wed, Sep 23, 2009 at 01:29:43PM -0400, Tim Abbott wrote:
> While Ksplice is not in-tree yet, Ksplice is a user of each_symbol (and in
> fact was the reason why each_symbol was originally exported). Is it easy
> to modify your patch series so that you don't have to remove each_symbol?

We don't keep symbols for out of tree junk around.

2009-09-24 00:08:53

by Rusty Russell

Subject: Re: module: Speed up symbol resolution during module loading

On Thu, 24 Sep 2009 02:22:15 am Alan Jenkins wrote:
> [CC to lkml fixed :-/]
>
> Rusty Russell wrote:
> > On Tue, 22 Sep 2009 10:58:28 pm Alan Jenkins wrote:
> >
> >> The following series applies against v2.6.31. It sorts the tables of builtin
> >> symbols, so the module loader can resolve them using a binary search.
> >>
> >> The kbuild changes to achieve this are less scary than I expected. I'm
> >> optimistic that they can be accepted without radical alteration :-).
> >>
> >> Quoting from the last patch in this series:
> >>
> >> "On my EeePC 701, coldplug is mainly cpu bound and takes 1.5 seconds
> >> during boot. perf showed this change eliminated 20% of cpu cycles during
> >> coldplug, saving 0.3 seconds of real time.
> >>
> >
> > Hi Alan,
> >
> > This seems useful, but wouldn't it be simpler to just sort at boot time?
> > The same could be done for modules, possibly reducing code.
> >
> > Alternatively, there's a standard way of hashing ELF symbols, but I'm not sure
> > we can convince the linker to generate it for vmlinux (I haven't looked
> > though).
> >
> > Thanks!
> > Rusty.
> >
>
> I'm concerned that people would object to the extra overhead at boot
> time. I've hacked up a prototype to sort at boot time and it takes 7ms
> on the same hardware.

OK, that seems like a lot to me. Fair enough; if Sam acks the build parts, I'll
put it in my tree. A bit late for this merge window though.

Thanks!
Rusty.
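
For context, the boot-time sort prototyped above would presumably amount to
a call to lib/sort.c's sort() over each builtin symbol table, roughly as
sketched below. The sketch covers only the plain __ksymtab section and
assumes the array is writable when the sort runs (the standard link script
places it in rodata), so it is illustrative rather than a drop-in patch:

#include <linux/module.h>	/* struct kernel_symbol */
#include <linux/sort.h>
#include <linux/string.h>
#include <linux/init.h>

/* Non-const here for the in-place sort; the real section is read-only. */
extern struct kernel_symbol __start___ksymtab[], __stop___ksymtab[];

static int __init cmp_sym_name(const void *a, const void *b)
{
	const struct kernel_symbol *sa = a, *sb = b;

	return strcmp(sa->name, sb->name);
}

static void __init sort_builtin_symbols(void)
{
	sort(__start___ksymtab, __stop___ksymtab - __start___ksymtab,
	     sizeof(struct kernel_symbol), cmp_sym_name, NULL);
}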

2009-09-24 00:11:42

by Rusty Russell

[permalink] [raw]
Subject: Re: [PATCH 1/2] lib: Add generic binary search function to the kernel.

On Thu, 24 Sep 2009 02:58:45 am Tim Abbott wrote:
> There are a large number of hand-coded binary searches in the kernel (run
> "git grep search | grep binary" to find many of them). Since, in my
> experience, hand-coding binary searches can be error-prone, it seems
> worth cleaning this up by providing a generic binary search function.
>
> This generic binary search implementation comes from Ksplice. It has
> the same basic API as the C library bsearch() function. Ksplice uses
> it in half a dozen places with 4 different comparison functions, and I
> think our code is substantially cleaner because of this.
>
> Signed-off-by: Tim Abbott <[email protected]>
> ---
> include/linux/bsearch.h | 9 ++++++++
> lib/Makefile | 2 +-
> lib/bsearch.c | 53 +++++++++++++++++++++++++++++++++++++++++++++++
> 3 files changed, 63 insertions(+), 1 deletions(-)
> create mode 100644 include/linux/bsearch.h
> create mode 100644 lib/bsearch.c
>
> diff --git a/include/linux/bsearch.h b/include/linux/bsearch.h
> new file mode 100644
> index 0000000..90b1aa8
> --- /dev/null
> +++ b/include/linux/bsearch.h
> @@ -0,0 +1,9 @@
> +#ifndef _LINUX_BSEARCH_H
> +#define _LINUX_BSEARCH_H
> +
> +#include <linux/types.h>
> +
> +void *bsearch(const void *key, const void *base, size_t num, size_t size,
> + int (*cmp)(const void *key, const void *elt));
> +
> +#endif /* _LINUX_BSEARCH_H */
> diff --git a/lib/Makefile b/lib/Makefile
> index 2e78277..fb60af1 100644
> --- a/lib/Makefile
> +++ b/lib/Makefile
> @@ -21,7 +21,7 @@ lib-y += kobject.o kref.o klist.o
>
> obj-y += bcd.o div64.o sort.o parser.o halfmd4.o debug_locks.o random32.o \
> bust_spinlocks.o hexdump.o kasprintf.o bitmap.o scatterlist.o \
> - string_helpers.o gcd.o
> + string_helpers.o gcd.o bsearch.o
>
> ifeq ($(CONFIG_DEBUG_KOBJECT),y)
> CFLAGS_kobject.o += -DDEBUG
> diff --git a/lib/bsearch.c b/lib/bsearch.c
> new file mode 100644
> index 0000000..4297c98
> --- /dev/null
> +++ b/lib/bsearch.c
> @@ -0,0 +1,53 @@
> +/*
> + * A generic implementation of binary search for the Linux kernel
> + *
> + * Copyright (C) 2008-2009 Ksplice, Inc.
> + * Author: Tim Abbott <[email protected]>
> + *
> + * This program is free software; you can redistribute it and/or
> + * modify it under the terms of the GNU General Public License as
> + * published by the Free Software Foundation; version 2.
> + */
> +
> +#include <linux/module.h>
> +#include <linux/bsearch.h>
> +
> +/*
> + * bsearch - binary search an array of elements
> + * @key: pointer to item being searched for
> > + * @base: pointer to the first element to search
> + * @num: number of elements
> + * @size: size of each element
> + * @cmp: pointer to comparison function
> + *
> + * This function does a binary search on the given array. The
> + * contents of the array should already be in ascending sorted order
> + * under the provided comparison function.
> + *
> + * Note that the key need not have the same type as the elements in
> + * the array, e.g. key could be a string and the comparison function
> + * could compare the string with the struct's name field. However, if
> + * the key and elements in the array are of the same type, you can use
> + * the same comparison function for both sort() and bsearch().
> + */
> +void *bsearch(const void *key, const void *base, size_t num, size_t size,
> + int (*cmp)(const void *key, const void *elt))
> +{
> + int start = 0, end = num - 1, mid, result;
> + if (num == 0)
> + return NULL;
> +
> + while (start <= end) {

The if (num == 0) line is superfluous.

But I'd be happy to take this as part of Alan's patches.

Thanks!
Rusty.
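
To spell out why the guard is superfluous: with end initialised to num - 1,
an empty array gives end == -1, the while condition fails on the first
test, and NULL is returned anyway. The rest of the loop, snipped from the
quote above, presumably runs along these lines (a sketch, not the patch
text itself):

	while (start <= end) {
		/* Midpoint, written to avoid (start + end) overflow. */
		mid = start + (end - start) / 2;
		result = cmp(key, base + mid * size);
		if (result < 0)
			end = mid - 1;		/* key sorts before this element */
		else if (result > 0)
			start = mid + 1;	/* key sorts after this element */
		else
			return (void *)(base + mid * size);
	}
	return NULL;
}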

2009-09-24 00:15:22

by Rusty Russell

[permalink] [raw]
Subject: Re: [PATCH 3/4] module: unexport each_symbol()

On Thu, 24 Sep 2009 07:30:22 am Christoph Hellwig wrote:
> On Wed, Sep 23, 2009 at 01:29:43PM -0400, Tim Abbott wrote:
> > While Ksplice is not in-tree yet, Ksplice is a user of each_symbol (and in
> > fact was the reason why each_symbol was originally exported). Is it easy
> > to modify your patch series so that you don't have to remove each_symbol?
>
> We don't keep symbols for out of tree junk around.

I expected ksplice to go in earlier, actually. I applied their export patch
to ease the transition.

I'd still like to see ksplice merged, but it's a big hunk of code.

Rusty.

2009-09-26 12:13:36

by Alan Jenkins

[permalink] [raw]
Subject: Re: [PATCH 3/4] module: unexport each_symbol()

Christoph Hellwig wrote:
> On Wed, Sep 23, 2009 at 01:29:43PM -0400, Tim Abbott wrote:
>
>> While Ksplice is not in-tree yet, Ksplice is a user of each_symbol (and in
>> fact was the reason why each_symbol was originally exported). Is it easy
>> to modify your patch series so that you don't have to remove each_symbol?
>>
>
> We don't keep symbols for out of tree junk around.
>

We do alter them mercilessly though :-).

I don't want to preserve the current implementation of each_symbol()
because it duplicates too much of the modified find_symbol(). However,
the duplicated code can be simplified if I replace the various "syms"
fields in struct module with a single array (without increasing the size
of struct module). I consider this a cleanup; it would benefit a couple
of other sites in module.c as well.

The change would make "struct symsearch" redundant - changing the
interface of each_symbol(). If Ksplice fails to merge quickly enough it
will still be easy to remove each_symbol(), and we'll still benefit from
the cleanup in find_symbol() and elsewhere.
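
Roughly, the consolidation would look something like the sketch below (the
names are illustrative only, not a worked-out patch):

enum export_class {
	EXPORT_PLAIN,		/* EXPORT_SYMBOL */
	EXPORT_GPL,		/* EXPORT_SYMBOL_GPL */
	EXPORT_GPL_FUTURE,	/* EXPORT_SYMBOL_GPL_FUTURE */
	EXPORT_UNUSED,		/* EXPORT_UNUSED_SYMBOL */
	EXPORT_UNUSED_GPL,	/* EXPORT_UNUSED_SYMBOL_GPL */
	EXPORT_CLASS_MAX,
};

struct export_table {
	const struct kernel_symbol *syms;
	const unsigned long *crcs;
	unsigned int num_syms;
};

/* Inside struct module, replacing the separate syms/gpl_syms/... pointers
 * and their matching crcs and num_* fields: */
	struct export_table exports[EXPORT_CLASS_MAX];

Since it holds the same pointers and counts as the existing fields, just
gathered into an array, struct module shouldn't grow.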

Rusty, does that make sense to you?

Thanks
Alan

2009-09-27 17:05:09

by Tim Abbott

[permalink] [raw]
Subject: Re: [PATCH 1/2] lib: Add generic binary search function to the kernel.


On Thu, 24 Sep 2009, Rusty Russell wrote:

> On Thu, 24 Sep 2009 02:58:45 am Tim Abbott wrote:
> > +void *bsearch(const void *key, const void *base, size_t num, size_t size,
> > + int (*cmp)(const void *key, const void *elt))
> > +{
> > + int start = 0, end = num - 1, mid, result;
> > + if (num == 0)
> > + return NULL;
> > +
> > + while (start <= end) {
>
> The if (num == 0) line is superfluous.

You are quite right.

-Tim Abbott

2009-11-03 15:14:19

by Thiago Farina

[permalink] [raw]
Subject: Re: [PATCH 1/2] lib: Add generic binary search function to the kernel.

Hi, has this patch landed? It doesn't seem to be in the tree
(git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6).

2009-11-03 15:34:57

by Alan Jenkins

[permalink] [raw]
Subject: Re: [PATCH 1/2] lib: Add generic binary search function to the kernel.

Thiago Farina wrote:
> Hi, has this patch landed? It doesn't seem to be in the tree
> (git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6).
>

I guess I'm sitting on it at the moment, having folded it into a patch
series which I have yet to submit.

There are apparently already a number of implementations scattered
around the kernel tree. Perhaps you can add another copy of this
implementation, wherever it is you need it :-).

Alan