2015-05-08 13:26:12

by Alexey Dobriyan

Subject: [PATCH v3] tags: much faster, parallel "make tags"

ctags is a single-threaded program. Split the list of files to be tagged into
equal parts, one part per CPU, and then merge the results.

Speedup on one 2-way box I have is ~143 s => ~99 s (-31%).
On another 4-way box: ~120 s => ~65 s (-46%!).

Resulting "tags" files aren't byte-for-byte identical because ctags
program numbers anon struct and enum declarations with "__anonNNN"
symbols. If those lines are removed, "tags" file becomes byte-for-byte
identical with those generated with current code.

Signed-off-by: Alexey Dobriyan <[email protected]>
---

scripts/tags.sh | 36 +++++++++++++++++++++++++++++++-----
1 file changed, 31 insertions(+), 5 deletions(-)

--- a/scripts/tags.sh
+++ b/scripts/tags.sh
@@ -152,7 +152,19 @@ dogtags()

exuberant()
{
- all_target_sources | xargs $1 -a \
+ rm -f .make-tags.*
+
+ all_target_sources >.make-tags.src
+ NR_CPUS=$(getconf _NPROCESSORS_ONLN 2>/dev/null || echo 1)
+ NR_LINES=$(wc -l <.make-tags.src)
+ NR_LINES=$((($NR_LINES + $NR_CPUS - 1) / $NR_CPUS))
+
+ split -a 6 -d -l $NR_LINES .make-tags.src .make-tags.src.
+
+ for i in .make-tags.src.*; do
+ N=$(echo $i | sed -e 's/.*\.//')
+ # -u: don't sort now, sort later
+ xargs <$i $1 -a -f .make-tags.$N -u \
-I __initdata,__exitdata,__initconst, \
-I __cpuinitdata,__initdata_memblock \
-I __refdata,__attribute,__maybe_unused,__always_unused \
@@ -211,7 +223,21 @@ exuberant()
--regex-c='/DEFINE_PCI_DEVICE_TABLE\((\w*)/\1/v/' \
--regex-c='/(^\s)OFFSET\((\w*)/\2/v/' \
--regex-c='/(^\s)DEFINE\((\w*)/\2/v/' \
- --regex-c='/DEFINE_HASHTABLE\((\w*)/\1/v/'
+ --regex-c='/DEFINE_HASHTABLE\((\w*)/\1/v/' \
+ &
+ done
+ wait
+ rm -f .make-tags.src .make-tags.src.*
+
+ # write header
+ $1 -f $2 /dev/null
+ # remove headers
+ for i in .make-tags.*; do
+ sed -i -e '/^!/d' $i &
+ done
+ wait
+ sort .make-tags.* >>$2
+ rm -f .make-tags.*

all_kconfigs | xargs $1 -a \
--langdef=kconfig --language-force=kconfig \
@@ -276,7 +302,7 @@ emacs()
xtags()
{
if $1 --version 2>&1 | grep -iq exuberant; then
- exuberant $1
+ exuberant $1 $2
elif $1 --version 2>&1 | grep -iq emacs; then
emacs $1
else
@@ -322,13 +348,13 @@ case "$1" in

"tags")
rm -f tags
- xtags ctags
+ xtags ctags tags
remove_structs=y
;;

"TAGS")
rm -f TAGS
- xtags etags
+ xtags etags TAGS
remove_structs=y
;;
esac


2015-05-09 05:17:16

by Pádraig Brady

Subject: Re: [PATCH v3] tags: much faster, parallel "make tags"

On 08/05/15 14:26, Alexey Dobriyan wrote:
> ctags is single-threaded program. Split list of files to be tagged into
> equal parts, 1 part for each CPU and then merge the results.
>
> Speedup on one 2-way box I have is ~143 s => ~99 s (-31%).
> On another 4-way box: ~120 s => ~65 s (-46%!).
>
> Resulting "tags" files aren't byte-for-byte identical because ctags
> program numbers anon struct and enum declarations with "__anonNNN"
> symbols. If those lines are removed, "tags" file becomes byte-for-byte
> identical with those generated with current code.
>
> Signed-off-by: Alexey Dobriyan <[email protected]>
> ---
>
> scripts/tags.sh | 36 +++++++++++++++++++++++++++++++-----
> 1 file changed, 31 insertions(+), 5 deletions(-)
>
> --- a/scripts/tags.sh
> +++ b/scripts/tags.sh
> @@ -152,7 +152,19 @@ dogtags()
>
> exuberant()
> {
> - all_target_sources | xargs $1 -a \
> + rm -f .make-tags.*
> +
> + all_target_sources >.make-tags.src
> + NR_CPUS=$(getconf _NPROCESSORS_ONLN 2>/dev/null || echo 1)

`nproc` is simpler and available since coreutils 8.1 (2009-11-18)

> + NR_LINES=$(wc -l <.make-tags.src)
> + NR_LINES=$((($NR_LINES + $NR_CPUS - 1) / $NR_CPUS))
> +
> + split -a 6 -d -l $NR_LINES .make-tags.src .make-tags.src.

`split -d -nl/$(nproc)` is simpler and available since coreutils 8.8 (2010-12-22)
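
For illustration, a minimal sketch of that variant (untested; it keeps the
temp file and the -a/-d naming from the patch, since `split -n l/` needs to
know the input size and so can't read from a pipe):

all_target_sources >.make-tags.src
# one roughly line-balanced chunk per online CPU, no wc/arithmetic needed
split -a 6 -d -n l/$(nproc) .make-tags.src .make-tags.src.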

> +
> + for i in .make-tags.src.*; do
> + N=$(echo $i | sed -e 's/.*\.//')
> + # -u: don't sort now, sort later
> + xargs <$i $1 -a -f .make-tags.$N -u \
> -I __initdata,__exitdata,__initconst, \
> -I __cpuinitdata,__initdata_memblock \
> -I __refdata,__attribute,__maybe_unused,__always_unused \
> @@ -211,7 +223,21 @@ exuberant()
> --regex-c='/DEFINE_PCI_DEVICE_TABLE\((\w*)/\1/v/' \
> --regex-c='/(^\s)OFFSET\((\w*)/\2/v/' \
> --regex-c='/(^\s)DEFINE\((\w*)/\2/v/' \
> - --regex-c='/DEFINE_HASHTABLE\((\w*)/\1/v/'
> + --regex-c='/DEFINE_HASHTABLE\((\w*)/\1/v/' \
> + &
> + done
> + wait
> + rm -f .make-tags.src .make-tags.src.*
> +
> + # write header
> + $1 -f $2 /dev/null
> + # remove headers
> + for i in .make-tags.*; do
> + sed -i -e '/^!/d' $i &
> + done
> + wait
> + sort .make-tags.* >>$2
> + rm -f .make-tags.*

Using sort --merge would speed up significantly?

Even faster would be to get sort to skip the header lines, avoiding the need for sed.
It's a bit awkward and was discussed at:
http://lists.gnu.org/archive/html/coreutils/2013-01/msg00027.html
Summarising that: if not using merge you can:

tlines=$(($(wc -l < "$2") + 1))
tail -q -n+$tlines .make-tags.* | LC_ALL=C sort >>$2

Or if merge is appropriate then:

tlines=$(($(wc -l < "$2") + 1))
eval "eval LC_ALL=C sort -m '<(tail -n+$tlines .make-tags.'{1..$(nproc)}')'" >>$2

Note eval is fine here as inputs are controlled within the script

cheers,
Pádraig.

p.s. To avoid temp files altogether you could wire everything up through fifos,
though that's probably overkill here TBH

p.p.s. You may want to `trap EXIT cleanup` to rm -f .make-tags.*
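
A rough sketch of that cleanup hook (untested; note that trap takes the
action first and the condition last):

# remove all scratch files however the script exits
cleanup() { rm -f .make-tags.*; }
trap cleanup EXIT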

2015-05-10 13:26:40

by Alexey Dobriyan

Subject: Re: [PATCH v3] tags: much faster, parallel "make tags"

On Sat, May 09, 2015 at 06:07:18AM +0100, Pádraig Brady wrote:
> On 08/05/15 14:26, Alexey Dobriyan wrote:

> > exuberant()
> > {
> > - all_target_sources | xargs $1 -a \
> > + rm -f .make-tags.*
> > +
> > + all_target_sources >.make-tags.src
> > + NR_CPUS=$(getconf _NPROCESSORS_ONLN 2>/dev/null || echo 1)
>
> `nproc` is simpler and available since coreutils 8.1 (2009-11-18)

nproc was discarded because getconf is standardized.

> > + NR_LINES=$(wc -l <.make-tags.src)
> > + NR_LINES=$((($NR_LINES + $NR_CPUS - 1) / $NR_CPUS))
> > +
> > + split -a 6 -d -l $NR_LINES .make-tags.src .make-tags.src.
>
> `split -d -nl/$(nproc)` is simpler and available since coreutils 8.8 (2010-12-22)

-nl/ can't count and always makes the first file somewhat bigger, which is
suspicious. What else can't it do right?

> > + sort .make-tags.* >>$2
> > + rm -f .make-tags.*
>
> Using sort --merge would speed up significantly?

By ~1 second, yes.

> Even faster would be to get sort to skip the header lines, avoiding the need for sed.
> It's a bit awkward and was discussed at:
> http://lists.gnu.org/archive/html/coreutils/2013-01/msg00027.html
> Summarising that, is if not using merge you can:
>
> tlines=$(($(wc -l < "$2") + 1))
> tail -q -n+$tlines .make-tags.* | LC_ALL=C sort >>$2
>
> Or if merge is appropriate then:
>
> tlines=$(($(wc -l < "$2") + 1))
> eval "eval LC_ALL=C sort -m '<(tail -n+$tlines .make-tags.'{1..$(nproc)}')'" >>$2

Might as well teach ctags to do real parallel processing.
LC_* are set by the top-level Makefile.

> p.p.s. You may want to `trap EXIT cleanup` to rm -f .make-tags.*

The real question is how to kill ctags reliably.
A naive

trap 'kill $(jobs -p); rm -f .make-tags.*' TERM INT

doesn't work.

Files are removed, but processes aren't.

2015-05-10 13:53:17

by Alexey Dobriyan

Subject: Re: [PATCH v3] tags: much faster, parallel "make tags"

[fix Andrew's email]

On Sun, May 10, 2015 at 04:26:34PM +0300, Alexey Dobriyan wrote:
> On Sat, May 09, 2015 at 06:07:18AM +0100, Pádraig Brady wrote:
> > On 08/05/15 14:26, Alexey Dobriyan wrote:
>
> > > exuberant()
> > > {
> > > - all_target_sources | xargs $1 -a \
> > > + rm -f .make-tags.*
> > > +
> > > + all_target_sources >.make-tags.src
> > > + NR_CPUS=$(getconf _NPROCESSORS_ONLN 2>/dev/null || echo 1)
> >
> > `nproc` is simpler and available since coreutils 8.1 (2009-11-18)
>
> nproc was discarded because getconf is standartized.
>
> > > + NR_LINES=$(wc -l <.make-tags.src)
> > > + NR_LINES=$((($NR_LINES + $NR_CPUS - 1) / $NR_CPUS))
> > > +
> > > + split -a 6 -d -l $NR_LINES .make-tags.src .make-tags.src.
> >
> > `split -d -nl/$(nproc)` is simpler and available since coreutils 8.8 (2010-12-22)
>
> -nl/ can't count and always make first file somewhat bigger, which is
> suspicious. What else it can't do right?
>
> > > + sort .make-tags.* >>$2
> > > + rm -f .make-tags.*
> >
> > Using sort --merge would speed up significantly?
>
> By ~1 second, yes.
>
> > Even faster would be to get sort to skip the header lines, avoiding the need for sed.
> > It's a bit awkward and was discussed at:
> > http://lists.gnu.org/archive/html/coreutils/2013-01/msg00027.html
> > Summarising that, is if not using merge you can:
> >
> > tlines=$(($(wc -l < "$2") + 1))
> > tail -q -n+$tlines .make-tags.* | LC_ALL=C sort >>$2
> >
> > Or if merge is appropriate then:
> >
> > tlines=$(($(wc -l < "$2") + 1))
> > eval "eval LC_ALL=C sort -m '<(tail -n+$tlines .make-tags.'{1..$(nproc)}')'" >>$2
>
> Might as well teach ctags to do real parallel processing.
> LC_* are set by top level Makefile.
>
> > p.p.s. You may want to `trap EXIT cleanup` to rm -f .make-tags.*
>
> The real question is how to kill ctags reliably.
> Naive
>
> trap 'kill $(jobs -p); rm -f .make-tags.*' TERM INT
>
> doesn't work.
>
> Files are removed, but processes aren't.

2015-05-10 20:58:15

by Pádraig Brady

Subject: Re: [PATCH v3] tags: much faster, parallel "make tags"

On 10/05/15 14:26, Alexey Dobriyan wrote:
> On Sat, May 09, 2015 at 06:07:18AM +0100, Pádraig Brady wrote:
>> On 08/05/15 14:26, Alexey Dobriyan wrote:
>
>>> exuberant()
>>> {
>>> - all_target_sources | xargs $1 -a \
>>> + rm -f .make-tags.*
>>> +
>>> + all_target_sources >.make-tags.src
>>> + NR_CPUS=$(getconf _NPROCESSORS_ONLN 2>/dev/null || echo 1)
>>
>> `nproc` is simpler and available since coreutils 8.1 (2009-11-18)
>
> nproc was discarded because getconf is standartized.

Note getconf doesn't honor CPU affinity, which may be fine here?

$ taskset -c 0 getconf _NPROCESSORS_ONLN
4
$ taskset -c 0 nproc
1

>>> + NR_LINES=$(wc -l <.make-tags.src)
>>> + NR_LINES=$((($NR_LINES + $NR_CPUS - 1) / $NR_CPUS))
>>> +
>>> + split -a 6 -d -l $NR_LINES .make-tags.src .make-tags.src.
>>
>> `split -d -nl/$(nproc)` is simpler and available since coreutils 8.8 (2010-12-22)
>
> -nl/ can't count and always make first file somewhat bigger, which is
> suspicious. What else it can't do right?

It avoids the overhead of reading all the data and counting the lines
by splitting the input into approximately equal numbers of lines, as detailed at:
http://gnu.org/s/coreutils/split

>>> + sort .make-tags.* >>$2
>>> + rm -f .make-tags.*
>>
>> Using sort --merge would speed up significantly?
>
> By ~1 second, yes.
>
>> Even faster would be to get sort to skip the header lines, avoiding the need for sed.
>> It's a bit awkward and was discussed at:
>> http://lists.gnu.org/archive/html/coreutils/2013-01/msg00027.html
>> Summarising that, is if not using merge you can:
>>
>> tlines=$(($(wc -l < "$2") + 1))
>> tail -q -n+$tlines .make-tags.* | LC_ALL=C sort >>$2
>>
>> Or if merge is appropriate then:
>>
>> tlines=$(($(wc -l < "$2") + 1))
>> eval "eval LC_ALL=C sort -m '<(tail -n+$tlines .make-tags.'{1..$(nproc)}')'" >>$2
>
> Might as well teach ctags to do real parallel processing.
> LC_* are set by top level Makefile.
>
>> p.p.s. You may want to `trap EXIT cleanup` to rm -f .make-tags.*
>
> The real question is how to kill ctags reliably.
> Naive
>
> trap 'kill $(jobs -p); rm -f .make-tags.*' TERM INT
>
> doesn't work.
>
> Files are removed, but processes aren't.

Is $(jobs -p) generating the correct list?
On an interactive shell here it is.
Perhaps you need to explicitly use #!/bin/sh -m
at the start to enable job control like that?
Another option would be to append each background $! pid
to a list and kill that list.
Note also you may want to `wait` after the kill too.
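
For illustration, a rough sketch of the pid-list variant (untested; ctags
options trimmed to the bare minimum needed to show the pattern):

pids=""
# single quotes: $pids is expanded when the trap fires, not when it is set
trap 'kill $pids 2>/dev/null; wait; rm -f .make-tags.*; exit 1' TERM INT
for i in .make-tags.src.*; do
	xargs <"$i" "$1" -a -u -f ".make-tags.${i##*.}" &
	pids="$pids $!"
done
wait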

cheers,
Pádraig.

2015-05-11 20:20:13

by Alexey Dobriyan

Subject: Re: [PATCH v3] tags: much faster, parallel "make tags"

On Sun, May 10, 2015 at 09:58:12PM +0100, Pádraig Brady wrote:
> On 10/05/15 14:26, Alexey Dobriyan wrote:
> > On Sat, May 09, 2015 at 06:07:18AM +0100, Pádraig Brady wrote:
> >> On 08/05/15 14:26, Alexey Dobriyan wrote:
> >
> >>> exuberant()
> >>> {
> >>> - all_target_sources | xargs $1 -a \
> >>> + rm -f .make-tags.*
> >>> +
> >>> + all_target_sources >.make-tags.src
> >>> + NR_CPUS=$(getconf _NPROCESSORS_ONLN 2>/dev/null || echo 1)
> >>
> >> `nproc` is simpler and available since coreutils 8.1 (2009-11-18)
> >
> > nproc was discarded because getconf is standartized.
>
> Note getconf doesn't honor CPU affinity which may be fine here?
>
> $ taskset -c 0 getconf _NPROCESSORS_ONLN
> 4
> $ taskset -c 0 nproc
> 1

Why would anyone tag files under affinity?

> >>> + NR_LINES=$(wc -l <.make-tags.src)
> >>> + NR_LINES=$((($NR_LINES + $NR_CPUS - 1) / $NR_CPUS))
> >>> +
> >>> + split -a 6 -d -l $NR_LINES .make-tags.src .make-tags.src.
> >>
> >> `split -d -nl/$(nproc)` is simpler and available since coreutils 8.8 (2010-12-22)
> >
> > -nl/ can't count and always make first file somewhat bigger, which is
> > suspicious. What else it can't do right?
>
> It avoids the overhead of reading all data and counting the lines,
> by splitting the data into approx equal numbers of lines as detailed at:
> http://gnu.org/s/coreutils/split

~1 second is within statistical error.

> >>> + sort .make-tags.* >>$2
> >>> + rm -f .make-tags.*
> >>
> >> Using sort --merge would speed up significantly?
> >
> > By ~1 second, yes.
> >
> >> Even faster would be to get sort to skip the header lines, avoiding the need for sed.
> >> It's a bit awkward and was discussed at:
> >> http://lists.gnu.org/archive/html/coreutils/2013-01/msg00027.html
> >> Summarising that, is if not using merge you can:
> >>
> >> tlines=$(($(wc -l < "$2") + 1))
> >> tail -q -n+$tlines .make-tags.* | LC_ALL=C sort >>$2
> >>
> >> Or if merge is appropriate then:
> >>
> >> tlines=$(($(wc -l < "$2") + 1))
> >> eval "eval LC_ALL=C sort -m '<(tail -n+$tlines .make-tags.'{1..$(nproc)}')'" >>$2
> >
> > Might as well teach ctags to do real parallel processing.
> > LC_* are set by top level Makefile.
> >
> >> p.p.s. You may want to `trap EXIT cleanup` to rm -f .make-tags.*
> >
> > The real question is how to kill ctags reliably.
> > Naive
> >
> > trap 'kill $(jobs -p); rm -f .make-tags.*' TERM INT
> >
> > doesn't work.
> >
> > Files are removed, but processes aren't.
>
> Is $(jobs -p) generating the correct list?


It looks like it does.

> On an interactive shell here it is.
> Perhaps you need to explicitly use #!/bin/sh -m
> at the start to enable job control like that?
> Another option would be to append each background $! pid
> to a list and kill that list.
> Note also you may want to `wait` after the kill too.

None of this works reliably.

I switched to "xargs -P" and Ctrl+C became reliable, immediate, and
free for the programmer. See the updated patch.

2015-05-11 20:25:24

by Alexey Dobriyan

Subject: [PATCH v4] tags: much faster, parallel "make tags"

ctags is a single-threaded program. Split the list of files to be tagged into
almost equal parts, process them on all CPUs, and merge the results.

Speedup is ~30-45% (!), depending on the number of cores.

Resulting "tags" files aren't byte-for-byte identical because ctags
program numbers anon struct and enum declarations with "__anonNNN"
symbols. If those lines are removed, "tags" file becomes byte-for-byte
identical with those generated with current code.

v4: switch from shell "&"/"wait" parallelism to "xargs -P" for reliable cleanup.

Signed-off-by: Alexey Dobriyan <[email protected]>
---

scripts/tags.sh | 58 +++++++++++++++++++++++++++++++++++++++++++++++++++-----
1 file changed, 53 insertions(+), 5 deletions(-)

--- a/scripts/tags.sh
+++ b/scripts/tags.sh
@@ -152,7 +152,41 @@ dogtags()

exuberant()
{
- all_target_sources | xargs $1 -a \
+ trap 'rm -f .make-tags.*; exit 1' TERM INT
+ rm -f .make-tags.*
+
+ all_target_sources >.make-tags.0
+
+ # Default xargs(1) total command line size.
+ XARGS_ARG_MAX=$((128 * 1024))
+ # Split is unequal w.r.t file count, but asking for both size and
+ # line count limit is too much in 2015.
+ #
+ # Reserve room for fixed ctags(1) arguments.
+ split -a 6 -d -C $(($XARGS_ARG_MAX - 4 * 1024)) .make-tags.0 .make-tags.x
+ rm -f .make-tags.0
+
+ # xargs(1) appears to not support command line tweaking,
+ # so it has to be prepared in advance (see '-f').
+ NR_TAGS=$(ls -1 .make-tags.x* | wc -l)
+ touch .make-tags.1
+ for i in $(seq 0 $(($NR_TAGS - 1))); do
+ N=$(printf "%06u" $i)
+ echo -n "-f .make-tags.t$N " >>.make-tags.1
+ tr '\n' ' ' <.make-tags.x$N >>.make-tags.1
+ echo >>.make-tags.1
+ rm -f .make-tags.x$N
+ done
+
+ # Tag files in parallel.
+ #
+ # "xargs -I" puts command line piece as one argument,
+ # so shell is employed to split it back.
+ NR_CPUS=$(getconf _NPROCESSORS_ONLN 2>/dev/null || echo 1)
+ # ctags -u: don't sort now, sort later
+ xargs -P $NR_CPUS -L 1 -I CMD -s $XARGS_ARG_MAX \
+ <.make-tags.1 \
+ sh -c "$1 -a -u \
-I __initdata,__exitdata,__initconst, \
-I __cpuinitdata,__initdata_memblock \
-I __refdata,__attribute,__maybe_unused,__always_unused \
@@ -211,7 +245,21 @@ exuberant()
--regex-c='/DEFINE_PCI_DEVICE_TABLE\((\w*)/\1/v/' \
--regex-c='/(^\s)OFFSET\((\w*)/\2/v/' \
--regex-c='/(^\s)DEFINE\((\w*)/\2/v/' \
- --regex-c='/DEFINE_HASHTABLE\((\w*)/\1/v/'
+ --regex-c='/DEFINE_HASHTABLE\((\w*)/\1/v/' \
+ CMD"
+ rm -f .make-tags.1
+
+ # Remove headers.
+ for i in .make-tags.t*; do
+ sed -i -e '/^!/d' $i
+ done
+
+ # Write final header.
+ $1 -f $2 /dev/null
+
+ # Append sorted results.
+ sort .make-tags.t* >>$2
+ rm -f .make-tags.t*

all_kconfigs | xargs $1 -a \
--langdef=kconfig --language-force=kconfig \
@@ -276,7 +324,7 @@ emacs()
xtags()
{
if $1 --version 2>&1 | grep -iq exuberant; then
- exuberant $1
+ exuberant $1 $2
elif $1 --version 2>&1 | grep -iq emacs; then
emacs $1
else
@@ -322,13 +370,13 @@ case "$1" in

"tags")
rm -f tags
- xtags ctags
+ xtags ctags tags
remove_structs=y
;;

"TAGS")
rm -f TAGS
- xtags etags
+ xtags etags TAGS
remove_structs=y
;;
esac

2015-08-19 13:25:44

by Michal Marek

Subject: Re: [PATCH v4] tags: much faster, parallel "make tags"

On 2015-05-11 22:25, Alexey Dobriyan wrote:
> ctags is single-threaded program. Split list of files to be tagged into
> almost equal parts, process them on every CPU and merge the results.

Sorry, I missed v4 of the patch.


> + # Remove headers.
> + for i in .make-tags.t*; do
> + sed -i -e '/^!/d' $i
> + done
> +
> + # Write final header.
> + $1 -f $2 /dev/null
> +
> + # Append sorted results.
> + sort .make-tags.t* >>$2
> + rm -f .make-tags.t*

This still breaks Exuberant ctags in emacs mode:
$ ln -s /usr/bin/ctags ~/bin/etags
$ make TAGS
GEN TAGS
etags: "TAGS" doesn't look like a tag file; I refuse to overwrite it.
etags: "TAGS" doesn't look like a tag file; I refuse to overwrite it.

The TAGS file is corrupted by the sorting: the Emacs TAGS format groups
entries into per-file sections introduced by a form-feed header line, and a
plain line sort of the merged output destroys that structure.

Michal