2009-10-26 22:36:59

by Al Viro

Subject: [PATCH] dcache: better name hash function

Some experiments by Octavian with large numbers of network devices identified
that name_hash does not distribute values evenly, causing performance
penalties. The name hashing function is used by dcache et al.,
so let's just choose a better one.

Additional standalone tests of 10,000,000 consecutive names
using many different algorithms show fnv as the winner.
It is faster and has almost ideal dispersion.
string10 is slightly faster, but only works for names like ppp0, ppp1, ...

Algorithm      Time       Ratio    Max      StdDev
string10       0.238201    1.00     2444      0.02
fnv32          0.240595    1.00     2576      1.05
fnv64          0.241224    1.00     2556      0.69
SuperFastHash  0.272872    1.00     2871      2.15
string_hash17  0.295160    1.00     2484      0.40
jhash_string   0.300925    1.00     2606      1.00
crc            1.606741    1.00     2474      0.29
md5_string     2.424771    1.00     2644      0.99
djb2           0.275424    1.15     3821     19.04
string_hash31  0.264806    1.21     4097     22.78
sdbm           0.371136    2.87    13016     67.54
elf            0.371279    3.59     9990     79.50
pjw            0.401172    3.59     9990     79.50
full_name_hash 0.285851   13.09    35174    171.81
kr_hash        0.245068  124.84   468448    549.89
fletcher       0.267664  124.84   468448    549.89
adler32        0.640668  124.84   468448    549.89
xor            0.220545  213.82   583189    720.85
lastchar       0.194604  409.57  1000000    998.78

Time is in seconds.
Ratio is the number of probes required to look up all values versus
an ideal hash.
Max is the longest chain.

Reported-by: Octavian Purdila <[email protected]>
Signed-off-by: Stephen Hemminger <[email protected]>

--- a/include/linux/dcache.h 2009-10-26 14:58:45.220347300 -0700
+++ b/include/linux/dcache.h 2009-10-26 15:12:15.004160122 -0700
@@ -45,15 +45,28 @@ struct dentry_stat_t {
};
extern struct dentry_stat_t dentry_stat;

-/* Name hashing routines. Initial hash value */
-/* Hash courtesy of the R5 hash in reiserfs modulo sign bits */
-#define init_name_hash() 0
+/*
+ * Fowler / Noll / Vo (FNV) Hash
+ * see: http://www.isthe.com/chongo/tech/comp/fnv/
+ */
+#ifdef CONFIG_64BIT
+#define FNV_PRIME 1099511628211ull
+#define FNV1_INIT 14695981039346656037ull
+#else
+#define FNV_PRIME 16777619u
+#define FNV1_INIT 2166136261u
+#endif
+
+#define init_name_hash() FNV1_INIT

-/* partial hash update function. Assume roughly 4 bits per character */
+/* partial hash update function. */
static inline unsigned long
-partial_name_hash(unsigned long c, unsigned long prevhash)
+partial_name_hash(unsigned char c, unsigned long prevhash)
{
- return (prevhash + (c << 4) + (c >> 4)) * 11;
+ prevhash ^= c;
+ prevhash *= FNV_PRIME;
+
+ return prevhash;
}

/*


2009-10-27 02:45:56

by Eric Dumazet

Subject: Re: [PATCH] dcache: better name hash function

Stephen Hemminger <[email protected]>, Al Viro a écrit :
> --- a/include/linux/dcache.h 2009-10-26 14:58:45.220347300 -0700
> +++ b/include/linux/dcache.h 2009-10-26 15:12:15.004160122 -0700
> @@ -45,15 +45,28 @@ struct dentry_stat_t {
> };
> extern struct dentry_stat_t dentry_stat;
>
> -/* Name hashing routines. Initial hash value */
> -/* Hash courtesy of the R5 hash in reiserfs modulo sign bits */
> -#define init_name_hash() 0
> +/*
> + * Fowler / Noll / Vo (FNV) Hash
> + * see: http://www.isthe.com/chongo/tech/comp/fnv/
> + */
> +#ifdef CONFIG_64BIT
> +#define FNV_PRIME 1099511628211ull
> +#define FNV1_INIT 14695981039346656037ull
> +#else
> +#define FNV_PRIME 16777619u
> +#define FNV1_INIT 2166136261u
> +#endif
> +
> +#define init_name_hash() FNV1_INIT
>
> -/* partial hash update function. Assume roughly 4 bits per character */
> +/* partial hash update function. */
> static inline unsigned long
> -partial_name_hash(unsigned long c, unsigned long prevhash)
> +partial_name_hash(unsigned char c, unsigned long prevhash)
> {
> - return (prevhash + (c << 4) + (c >> 4)) * 11;
> + prevhash ^= c;
> + prevhash *= FNV_PRIME;
> +
> + return prevhash;
> }
>
> /*

OK, but that's strlen(name) x (long, long) multiplies.

I suspect you tested on a recent x86_64 CPU.

Some arches might have slow multiplies, no?

jhash() (and others) are optimized by the compiler to use basic and fast operations.
jhash operates on blocks of 12 chars per round, so it might be a pretty good choice once
out-of-line (because it's pretty large and full_name_hash() is now used by
a lot of call sites).

Please provide your test program source, so that others can test on various arches.

Thanks

2009-10-27 03:53:54

by Stephen Hemminger

Subject: Re: [PATCH] dcache: better name hash function

On Tue, 27 Oct 2009 03:45:50 +0100
Eric Dumazet <[email protected]> wrote:

> Stephen Hemminger <[email protected]>, Al Viro a écrit :
> > --- a/include/linux/dcache.h 2009-10-26 14:58:45.220347300 -0700
> > +++ b/include/linux/dcache.h 2009-10-26 15:12:15.004160122 -0700
> > @@ -45,15 +45,28 @@ struct dentry_stat_t {
> > };
> > extern struct dentry_stat_t dentry_stat;
> >
> > -/* Name hashing routines. Initial hash value */
> > -/* Hash courtesy of the R5 hash in reiserfs modulo sign bits */
> > -#define init_name_hash() 0
> > +/*
> > + * Fowler / Noll / Vo (FNV) Hash
> > + * see: http://www.isthe.com/chongo/tech/comp/fnv/
> > + */
> > +#ifdef CONFIG_64BIT
> > +#define FNV_PRIME 1099511628211ull
> > +#define FNV1_INIT 14695981039346656037ull
> > +#else
> > +#define FNV_PRIME 16777619u
> > +#define FNV1_INIT 2166136261u
> > +#endif
> > +
> > +#define init_name_hash() FNV1_INIT
> >
> > -/* partial hash update function. Assume roughly 4 bits per character */
> > +/* partial hash update function. */
> > static inline unsigned long
> > -partial_name_hash(unsigned long c, unsigned long prevhash)
> > +partial_name_hash(unsigned char c, unsigned long prevhash)
> > {
> > - return (prevhash + (c << 4) + (c >> 4)) * 11;
> > + prevhash ^= c;
> > + prevhash *= FNV_PRIME;
> > +
> > + return prevhash;
> > }
> >
> > /*
>
> OK, but thats strlen(name) X (long,long) multiplies.
>
> I suspect you tested on recent x86_64 cpu.
>
> Some arches might have slow multiplies, no ?
>
> jhash() (and others) are optimized by compiler to use basic and fast operations.
> jhash operates on blocs of 12 chars per round, so it might be a pretty good choice once
> out-of-line (because its pretty large and full_name_hash() is now used by
> a lot of call sites)
>
> Please provide your test program source, so that other can test on various arches.
>
> Thanks

long on i386 is 32 bits, so it is a 32-bit multiply. There is also an optimization
that uses shifts and adds.




--


Attachments:
hashtest.tar.bz2 (7.41 kB)

2009-10-27 05:19:45

by Stephen Hemminger

Subject: Re: [PATCH] dcache: better name hash function

One of the root causes of the slowness in network usage
was my original choice of a power of 2 for the hash size, to avoid
a mod operation. It turns out that if the size is not a power of 2,
the original algorithm works fairly well.

On a slow CPU, with 10 million entries and a hash size of 211:

Algorithm       Time      Ratio    Max      StdDev
string10        1.271871   1.00    47397      0.01
djb2            1.406322   1.00    47452      0.12
SuperFastHash   1.422348   1.00    48400      1.99
string_hash31   1.424079   1.00    47437      0.08
jhash_string    1.459232   1.00    47954      1.01
sdbm            1.499209   1.00    47499      0.22
fnv32           1.539341   1.00    47728      0.75
full_name_hash  1.556792   1.00    47412      0.04
string_hash17   1.719039   1.00    47413      0.05
pjw             1.827365   1.00    47441      0.09
elf             2.033545   1.00    47441      0.09
fnv64           2.199533   1.00    47666      0.53
crc             5.705784   1.00    47913      0.95
md5_string     10.308376   1.00    47946      1.00
fletcher        1.418866   1.01    53189     18.65
adler32         2.842117   1.01    53255     18.79
kr_hash         1.175678   6.43   468517    507.44
xor             1.114692  11.02   583189    688.96
lastchar        0.795316  21.10  1000000    976.02

How important is saving the one division, versus getting a better
distribution?

2009-10-27 05:24:07

by David Miller

Subject: Re: [PATCH] dcache: better name hash function

From: Stephen Hemminger <[email protected]>
Date: Mon, 26 Oct 2009 22:19:44 -0700 (PDT)

> How important is saving the one division, versus getting better
> distribution.

80 cpu cycles or more on some processors. Cheaper to use
jenkins with a power-of-2 sized hash.

2009-10-27 06:08:07

by Eric Dumazet

Subject: Re: [PATCH] dcache: better name hash function

Stephen Hemminger a écrit :
> One of the root causes of slowness in network usage
> was my original choice of power of 2 for hash size, to avoid
> a mod operation. It turns out if size is not a power of 2
> the original algorithm works fairly well.

Interesting, but I suspect all users have power of 2 tables :(

>
> On slow cpu; with 10million entries and 211 hash size
>
>
>
> How important is saving the one division, versus getting better
> distribution.


unsigned int fold1(unsigned hash)
{
	return hash % 211;
}

The compiler uses a reciprocal divide because 211 is a constant.

You could also try the following, which contains only one multiply,
and check whether the hash distribution properties are still OK:

unsigned int fold2(unsigned hash)
{
	return ((unsigned long long)hash * 211) >> 32;
}

fold1:
	movl	4(%esp), %ecx
	movl	$-1689489505, %edx
	movl	%ecx, %eax
	mull	%edx
	shrl	$7, %edx
	imull	$211, %edx, %edx
	subl	%edx, %ecx
	movl	%ecx, %eax
	ret

fold2:
	movl	$211, %eax
	mull	4(%esp)
	movl	%edx, %eax
	ret

2009-10-27 06:50:51

by Eric Dumazet

Subject: Re: [PATCH] dcache: better name hash function

Eric Dumazet a écrit :
> unsigned int fold2(unsigned hash)
> {
> return ((unsigned long long)hash * 211) >> 32;
> }
>

I tried this reciprocal thing with values 511 and 1023 and got, on a PIII 550 MHz with gcc-3.3.2:

# ./hashtest 100000 511
jhash_string 0.033123 1.01 234 1.06
fnv32 0.033911 1.02 254 1.38
# ./hashtest 1000000 511
jhash_string 0.331155 1.00 2109 1.10
fnv32 0.359346 1.00 2151 1.65
# ./hashtest 10000000 511
jhash_string 3.383340 1.00 19985 1.03
fnv32 3.849359 1.00 20198 1.53

# ./hashtest 100000 1023
jhash_string 0.033123 1.03 134 1.01
fnv32 0.034260 1.03 142 1.32
# ./hashtest 1000000 1023
jhash_string 0.332329 1.00 1075 1.06
fnv32 0.422035 1.00 1121 1.59
# ./hashtest 10000000 1023
jhash_string 3.417559 1.00 10107 1.01
fnv32 3.747563 1.00 10223 1.35


Values of 511 on 64-bit and 1023 on 32-bit arches are nice because
hashsz * sizeof(pointer) <= 4096, wasting space for only one pointer.

Conclusion: jhash and a 511/1023 hash size for netdevices;
no divides, only one multiply for the fold.

2009-10-27 07:30:07

by Eric Dumazet

Subject: Re: [PATCH] dcache: better name hash function

Eric Dumazet a écrit :
>
>
> 511 value on 64bit, and 1023 on 32bit arches are nice because
> hashsz * sizeof(pointer) <= 4096, wasting space for one pointer only.
>
> Conclusion : jhash and 511/1023 hashsize for netdevices,
> no divides, only one multiply for the fold.

Just forget about 511 & 1023, as powers of two work too.

-> 512 & 1024 + jhash

Guess what, David already said this :)

2009-10-27 16:38:33

by Rick Jones

Subject: Re: [PATCH] dcache: better name hash function

Previously Stephen kindly sent me the source and instructions; attached are
results from 1.0 GHz Itanium "McKinley" processors using an older gcc, both -O2
and -O3, -O2 first:

>>>
>>>
>>> $ ./hashtest 10000000 14 | sort -n -k 3 -k 2
>>> Algorithm Time Ratio Max StdDev
>>> string10 0.234133 1.00 612 0.03
>>> fnv32 0.241471 1.00 689 0.93
>>> fnv64 0.241964 1.00 680 0.85
>>> string_hash17 0.269656 1.00 645 0.36
>>> jhash_string 0.295795 1.00 702 1.00
>>> crc 1.609449 1.00 634 0.41
>>> md5_string 2.479467 1.00 720 0.99
>>> SuperFastHash 0.273793 1.01 900 2.13
>>> djb2 0.265877 1.15 964 9.52
>>> string_hash31 0.259110 1.21 1039 11.39
>>> sdbm 0.369414 2.87 3268 33.77
>>> elf 0.372251 3.71 2907 40.71
>>> pjw 0.401732 3.71 2907 40.71
>>> full_name_hash 0.283508 13.09 8796 85.91
>>> kr_hash 0.220033 499.17 468448 551.55
>>> fletcher 0.267009 499.17 468448 551.55
>>> adler32 0.635047 499.17 468448 551.55
>>> xor 0.220314 854.94 583189 722.12
>>> lastchar 0.155236 1637.61 1000000 999.69
>>
>>
>> here then are both, from a 1.0 GHz McKinley system, 64-bit, using an older
>> gcc
>>
>> raj@oslowest:~/hashtest$ ./hashtest 10000000 14 | sort -n -k 3 -k 2
>> Algorithm Time Ratio Max StdDev
>> string_hash17 0.901319 1.00 645 0.36
>> string10 0.986391 1.00 612 0.03
>> jhash_string 1.422065 1.00 702 1.00
>> fnv32 1.705116 1.00 689 0.93
>> fnv64 1.900326 1.00 680 0.85
>> crc 3.651519 1.00 634 0.41
>> md5_string 14.155621 1.00 720 0.99
>> SuperFastHash 1.185206 1.01 900 2.13
>> djb2 0.977166 1.15 964 9.52
>> string_hash31 0.989804 1.21 1039 11.39
>> sdbm 1.188299 2.87 3268 33.77
>> pjw 1.185963 3.71 2907 40.71
>> elf 1.257023 3.71 2907 40.71
>> full_name_hash 1.231514 13.09 8796 85.91
>> kr_hash 0.890761 499.17 468448 551.55
>> fletcher 1.080981 499.17 468448 551.55
>> adler32 4.141714 499.17 468448 551.55
>> xor 1.061445 854.94 583189 722.12
>> lastchar 0.676697 1637.61 1000000 999.69
>>
>> raj@oslowest:~/hashtest$ ./hashtest 10000000 8 | sort -n -k 3 -k 2
>> Algorithm Time Ratio Max StdDev
>> string_hash17 0.899988 1.00 39497 1.50
>> string10 0.985100 1.00 39064 0.01
>> SuperFastHash 1.141748 1.00 40497 2.17
>> jhash_string 1.376414 1.00 39669 1.04
>> fnv32 1.656967 1.00 39895 2.25
>> fnv64 1.855259 1.00 39215 0.35
>> crc 3.615341 1.00 39088 0.07
>> md5_string 14.113307 1.00 39605 0.98
>> djb2 0.972180 1.15 60681 76.16
>> string_hash31 0.982233 1.21 64950 91.12
>> sdbm 1.181952 2.38 129900 232.22
>> pjw 1.178994 2.45 99990 237.86
>> elf 1.250936 2.45 99990 237.86
>> kr_hash 0.892633 7.80 468451 515.52
>> fletcher 1.082932 7.80 468451 515.52
>> adler32 4.142414 7.80 468451 515.52
>> full_name_hash 1.175324 13.09 562501 687.24
>> xor 1.060091 13.36 583189 694.98
>> lastchar 0.675610 25.60 1000000 980.27
>>
>> raj@oslowest:~/hashtest$ gcc -v
>> Using built-in specs.
>> Target: ia64-linux-gnu
>> Configured with: ../src/configure -v
>> --enable-languages=c,c++,fortran,objc,obj-c++,treelang --prefix=/usr
>> --enable-shared --with-system-zlib --libexecdir=/usr/lib
>> --without-included-gettext --enable-threads=posix --enable-nls
>> --program-suffix=-4.1 --enable-__cxa_atexit --enable-clocale=gnu
>> --enable-libstdcxx-debug --enable-mpfr --disable-libssp --with-system-libunwind
>> --enable-checking=release ia64-linux-gnu
>> Thread model: posix
>> gcc version 4.1.2 20061115 (prerelease) (Debian 4.1.1-21)
>> raj@oslowest:~/hashtest$
>>
>> fnv doesn't seem to do as well there relative to the others as it did in your
>> tests.
>
>
>
> You could try -O3 since then gcc may replace the multiply with shift/add
> or is there something about forcing 32 and 64 bit that makes ia64 suffer.


It seems to speed things up, but the relative ordering remains the same:

oslowest:/home/raj/hashtest# make
cc -O3 -Wall -c -o hashtest.o hashtest.c
cc -O3 -Wall -c -o md5.o md5.c
cc -lm hashtest.o md5.o -o hashtest
oslowest:/home/raj/hashtest# ./hashtest 10000000 14 | sort -n -k 3 -k 2
Algorithm Time Ratio Max StdDev
string_hash17 0.893813 1.00 645 0.36
string10 0.965596 1.00 612 0.03
jhash_string 1.387773 1.00 702 1.00
fnv32 1.699041 1.00 689 0.93
fnv64 1.882314 1.00 680 0.85
crc 3.273676 1.00 634 0.41
md5_string 13.913745 1.00 720 0.99
SuperFastHash 1.135802 1.01 900 2.13
djb2 0.951571 1.15 964 9.52
string_hash31 0.971081 1.21 1039 11.39
sdbm 1.168148 2.87 3268 33.77
pjw 1.159304 3.71 2907 40.71
elf 1.237662 3.71 2907 40.71
full_name_hash 1.212588 13.09 8796 85.91
kr_hash 0.856584 499.17 468448 551.55
fletcher 1.054516 499.17 468448 551.55
adler32 4.123742 499.17 468448 551.55
xor 1.031910 854.94 583189 722.12
lastchar 0.648597 1637.61 1000000 999.69
oslowest:/home/raj/hashtest# ./hashtest 10000000 8 | sort -n -k 3 -k 2
Algorithm Time Ratio Max StdDev
string_hash17 0.884829 1.00 39497 1.50
string10 0.962258 1.00 39064 0.01
SuperFastHash 1.088602 1.00 40497 2.17
jhash_string 1.340878 1.00 39669 1.04
fnv32 1.637096 1.00 39895 2.25
fnv64 1.842330 1.00 39215 0.35
crc 3.230291 1.00 39088 0.07
md5_string 13.863056 1.00 39605 0.98
djb2 0.944159 1.15 60681 76.16
string_hash31 0.961978 1.21 64950 91.12
sdbm 1.159156 2.38 129900 232.22
pjw 1.154286 2.45 99990 237.86
elf 1.232842 2.45 99990 237.86
kr_hash 0.856873 7.80 468451 515.52
fletcher 1.055389 7.80 468451 515.52
adler32 4.123254 7.80 468451 515.52
full_name_hash 1.152628 13.09 562501 687.24
xor 1.033050 13.36 583189 694.98
lastchar 0.647504 25.60 1000000 980.27

2009-10-27 17:07:39

by Stephen Hemminger

Subject: Re: [PATCH] dcache: better name hash function

On Tue, 27 Oct 2009 08:29:51 +0100
Eric Dumazet <[email protected]> wrote:

> Eric Dumazet a écrit :
> >
> >
> > 511 value on 64bit, and 1023 on 32bit arches are nice because
> > hashsz * sizeof(pointer) <= 4096, wasting space for one pointer only.
> >
> > Conclusion : jhash and 511/1023 hashsize for netdevices,
> > no divides, only one multiply for the fold.
>
> Just forget about 511 & 1023, as power of two works too.
>
> -> 512 & 1024 + jhash
>
> Guess what, David already said this :)


Rather than wasting space or doing an expensive modulus, just folding
the higher bits back with XOR redistributes the bits better.


On a fast machine (Nehalem):

100000000 Iterations
256 Slots (order 8)
Algorithm Time Ratio Max StdDev
string10 2.505290 1.00 390628 0.00
xor 2.521329 1.00 392120 2.14
SuperFastHash 2.781745 1.00 397027 4.43
fnv32 2.847892 1.00 392139 0.98
djb2 2.886342 1.00 390827 0.12
string_hash31 2.900980 1.00 391001 0.20
string_hash17 2.938708 1.00 391122 0.20
full_name_hash 3.080886 1.00 390860 0.10
jhash_string 3.092161 1.00 392775 1.08
fnv64 5.340740 1.00 392854 0.88
kr_hash 2.395757 7.30 4379091 1568.25

On a slow machine (CULV):
100000000 Iterations
256 Slots (order 8)
Algorithm Time Ratio Max StdDev
string10 10.807174 1.00 390628 0.00
SuperFastHash 11.397303 1.00 397027 4.43
xor 11.660968 1.00 392120 2.14
djb2 11.674707 1.00 390827 0.12
jhash_string 11.997104 1.00 392775 1.08
fnv32 12.289086 1.00 392139 0.98
string_hash17 12.863864 1.00 391122 0.20
full_name_hash 13.249483 1.00 390860 0.10
string_hash31 13.668270 1.00 391001 0.20
fnv64 39.808964 1.00 392854 0.88
kr_hash 10.316305 7.30 4379091 1568.25

So Eric's string10 is the fastest for the special case of fooNNN-style names,
but probably isn't best for general strings. The original function
is >20% slower, which is surprising, probably because of the overhead
of 2 shifts and a multiply. jenkins and fnv are both 10% slower.

The following seems to give the best results (a combination of the 16-bit trick
and string17).


static unsigned int xor17(const unsigned char *key, unsigned int len)
{
	uint32_t h = 0;
	unsigned int rem;

	rem = len & 1;
	len >>= 1;

	while (len--) {
		/* get_unaligned16() reads a possibly unaligned 16-bit value */
		h = ((h << 4) + h) ^ get_unaligned16(key);
		key += sizeof(uint16_t);
	}

	if (rem)
		h = ((h << 4) + h) ^ *key;

	return h;
}

2009-10-27 17:22:54

by Stephen Hemminger

Subject: [PATCH] net: fold network name hash

The full_name_hash does not produce a value that is evenly distributed
over the lower 8 bits. This causes the name hash to be unbalanced with a
large number of names. A simple fix is to just fold in the higher bits
with XOR.

This is independent of possible improvements to full_name_hash()
in future.

Signed-off-by: Stephen Hemminger <[email protected]>


--- a/net/core/dev.c 2009-10-27 09:21:46.127252547 -0700
+++ b/net/core/dev.c 2009-10-27 09:25:14.593313378 -0700
@@ -199,7 +199,11 @@ EXPORT_SYMBOL(dev_base_lock);
static inline struct hlist_head *dev_name_hash(struct net *net, const char *name)
{
unsigned hash = full_name_hash(name, strnlen(name, IFNAMSIZ));
- return &net->dev_name_head[hash & ((1 << NETDEV_HASHBITS) - 1)];
+
+ hash ^= (hash >> NETDEV_HASHBITS);
+ hash &= NETDEV_HASHENTRIES - 1;
+
+ return &net->dev_name_head[hash];
}

static inline struct hlist_head *dev_index_hash(struct net *net, int ifindex)

2009-10-27 17:32:46

by Linus Torvalds

Subject: Re: [PATCH] dcache: better name hash function



On Tue, 27 Oct 2009, Stephen Hemminger wrote:
>
> Rather than wasting space, or doing expensive, modulus; just folding
> the higher bits back with XOR redistributes the bits better.

Please don't make up any new hash functions without having a better input
set than the one you seem to use.

The 'fnv' function I can believe in, because the whole "multiply by big
prime number" thing to spread out the bits is a very traditional model.
But making up a new hash function based on essentially consecutive names
is absolutely the wrong thing to do. You need a much better corpus of path
component names for testing.

> The following seems to give best results (combination of 16bit trick
> and string17).

.. and these kinds of games are likely to work badly on some
architectures. Don't use 16-bit values, and don't use 'get_unaligned()'.
Both tend to work fine on x86, but likely suck on some other
architectures.

Also remember that the critical hash function needs to check for '/' and
'\0' while at it, which is one reason why it does things byte-at-a-time.
If you try to be smart, you'd need to be smart about the end condition
too.

The loop to optimize is _not_ based on 'name+len', it is this code:

	this.name = name;
	c = *(const unsigned char *)name;

	hash = init_name_hash();
	do {
		name++;
		hash = partial_name_hash(c, hash);
		c = *(const unsigned char *)name;
	} while (c && (c != '/'));
	this.len = name - (const char *) this.name;
	this.hash = end_name_hash(hash);

(which depends on us having already removed all slashes at the head, and
knowing that the string is not zero-sized)

So doing things multiple bytes at a time is certainly still possible, but
you would always have to find the slashes/NUL's in there first. Doing that
efficiently and portably is not trivial - especially since a lot of
critical path components are short.

(Remember: there may be just a few 'bin' directory names, but if you do
performance analysis, 'bin' as a path component is probably hashed a lot
more than 'five_slutty_bimbos_and_a_donkey.jpg'. So the relative weighting
of importance of the filename should probably include the frequency it
shows up in pathname lookup)

Linus

2009-10-27 17:35:39

by Stephen Hemminger

Subject: Re: [PATCH] dcache: better name hash function

On Tue, 27 Oct 2009 18:19:13 +0100
Eric Dumazet <[email protected]> wrote:

> Stephen Hemminger a écrit :
>
> > So Eric's string10 is fastest for special case of fooNNN style names.
> > But probably isn't best for general strings. Orignal function
> > is >20% slower, which is surprising probably because of overhead
> > of 2 shifts and multipy. jenkins and fnv are both 10% slower.
> >
>
>
> jhash() is faster when strings are longer, being able to process 12 bytes per loop.
>

But jhash is not amenable to use in namei (with partial_name_hash).

name_hash is rarely done on long strings; the average length of a filename
is fairly short (probably a leftover Unix legacy). On my system, the average
path component length in /usr is 13 characters, so jhash has
no big benefit here.

2009-10-27 18:05:20

by Octavian Purdila

Subject: Re: [PATCH] net: fold network name hash

On Tuesday 27 October 2009 19:22:51 you wrote:

> The full_name_hash does not produce a value that is evenly distributed
> over the lower 8 bits. This causes name hash to be unbalanced with large
> number of names. A simple fix is to just fold in the higher bits
> with XOR.
>
> This is independent of possible improvements to full_name_hash()
> in future.
>

I can confirm that the distribution looks good now for our most common cases.

Thanks,
tavi

2009-10-27 22:04:43

by Stephen Hemminger

Subject: [PATCH] net: fold network name hash (v2)

The full_name_hash does not produce a value that is evenly distributed
over the lower 8 bits. This causes the name hash to be unbalanced with a
large number of names. There is a standard function to fold in the upper
bits, so use that.

This is independent of possible improvements to full_name_hash()
in future.

Signed-off-by: Stephen Hemminger <[email protected]>

--- a/net/core/dev.c 2009-10-27 14:54:21.922563076 -0700
+++ b/net/core/dev.c 2009-10-27 15:04:16.733813459 -0700
@@ -86,6 +86,7 @@
#include <linux/socket.h>
#include <linux/sockios.h>
#include <linux/errno.h>
+#include <linux/hash.h>
#include <linux/interrupt.h>
#include <linux/if_ether.h>
#include <linux/netdevice.h>
@@ -199,7 +200,7 @@ EXPORT_SYMBOL(dev_base_lock);
static inline struct hlist_head *dev_name_hash(struct net *net, const char *name)
{
unsigned hash = full_name_hash(name, strnlen(name, IFNAMSIZ));
- return &net->dev_name_head[hash & ((1 << NETDEV_HASHBITS) - 1)];
+ return &net->dev_name_head[hash_long(hash, NETDEV_HASHBITS)];
}

static inline struct hlist_head *dev_index_hash(struct net *net, int ifindex)

2009-10-27 23:08:26

by Stephen Hemminger

Subject: Re: [PATCH] dcache: better name hash function

On Tue, 27 Oct 2009 10:32:44 -0700 (PDT)
Linus Torvalds <[email protected]> wrote:

>
>
> On Tue, 27 Oct 2009, Stephen Hemminger wrote:
> >
> > Rather than wasting space, or doing expensive, modulus; just folding
> > the higher bits back with XOR redistributes the bits better.
>
> Please don't make up any new hash functions without having a better input
> set than the one you seem to use.
>
> The 'fnv' function I can believe in, because the whole "multiply by big
> prime number" thing to spread out the bits is a very traditional model.
> But making up a new hash function based on essentially consecutive names
> is absolutely the wrong thing to do. You need a much better corpus of path
> component names for testing.
>
> > The following seems to give best results (combination of 16bit trick
> > and string17).
>
> .. and these kinds of games are likely to work badly on some
> architectures. Don't use 16-bit values, and don't use 'get_unaligned()'.
> Both tend to work fine on x86, but likely suck on some other
> architectures.
>
> Also remember that the critical hash function needs to check for '/' and
> '\0' while at it, which is one reason why it does things byte-at-a-time.
> If you try to be smart, you'd need to be smart about the end condition
> too.
>
> The loop to optimize is _not_ based on 'name+len', it is this code:
>
> this.name = name;
> c = *(const unsigned char *)name;
>
> hash = init_name_hash();
> do {
> name++;
> hash = partial_name_hash(c, hash);
> c = *(const unsigned char *)name;
> } while (c && (c != '/'));
> this.len = name - (const char *) this.name;
> this.hash = end_name_hash(hash);
>
> (which depends on us having already removed all slashed at the head, and
> knowing that the string is not zero-sized)
>
> So doing things multiple bytes at a time is certainly still possible, but
> you would always have to find the slashes/NUL's in there first. Doing that
> efficiently and portably is not trivial - especially since a lot of
> critical path components are short.
>
> (Remember: there may be just a few 'bin' directory names, but if you do
> performance analysis, 'bin' as a path component is probably hashed a lot
> more than 'five_slutty_bimbos_and_a_donkey.jpg'. So the relative weighting
> of importance of the filename should probably include the frequency it
> shows up in pathname lookup)
>
> Linus


Going back to basics: run tests across different input sets,
dropping the slow ones like crc, md5, ...
Not using jhash because it doesn't have a good character-at-a-time
interface.

Also, the folding algorithm used matters. Since the kernel
already uses hash_long() to fold back to N bits, all the
tests were rerun with that.

Test run across names in /usr
Algorithm Time Ratio Max StdDev
kr_hash 2.481275 1.21 4363 358.98
string10 2.834562 1.15 4168 303.66
fnv32 2.887600 1.18 4317 332.38
string_hash31 3.655745 1.16 4258 314.33
string_hash17 3.816443 1.16 4177 311.28
djb2 3.883914 1.18 4269 331.75
full_name_hash 4.067633 1.16 4282 312.29
pjw 6.517316 1.17 4184 316.69
sdbm 6.945385 1.17 4447 324.32
elf 7.402180 1.17 4184 316.69


And in /home (mail directories and git)
Algorithm Time Ratio Max StdDev
kr_hash 2.765015 1.44 7175 701.99
string10 3.136947 1.19 7092 469.73
fnv32 3.162626 1.19 6986 458.48
string_hash31 3.832031 1.19 7053 463.29
string_hash17 4.136220 1.19 7023 469.30
djb2 4.241706 1.23 7537 512.02
full_name_hash 4.437741 1.19 7000 467.19
pjw 6.758093 1.20 6970 476.03
sdbm 7.239758 1.22 7526 494.32
elf 7.446356 1.20 6970 476.03


And with names like pppXXX
Algorithm Time Ratio Max StdDev
kr_hash 0.849656 9.26 5520 1121.79
fnv32 1.004682 1.01 453 23.29
string10 1.004729 1.00 395 2.08
string_hash31 1.108335 1.00 409 5.14
string_hash17 1.231257 1.00 410 8.10
djb2 1.238314 1.01 435 29.88
full_name_hash 1.320822 1.00 422 11.07
elf 1.994794 1.15 716 151.19
pjw 2.063958 1.15 716 151.19
sdbm 2.070033 1.00 408 8.11

* The new test has a big table, so there are more cache effects.
* The existing full_name_hash distributes okay if folded correctly.
* fnv32 and string10 are slightly faster.

More data (on /usr) from older slower machines:

IA-64
Algorithm Time Ratio Max StdDev
kr_hash 1.676064 1.17 664 63.81
string_hash17 1.773553 1.12 616 54.40
djb2 2.103359 1.12 598 54.71
string10 2.103959 1.13 698 56.80
string_hash31 2.108254 1.13 602 55.51
full_name_hash 3.237209 1.13 614 56.74
sdbm 3.279243 1.12 611 54.78
pjw 3.314135 1.13 639 56.74
elf 3.821029 1.13 639 56.74
fnv32 5.619829 1.16 865 62.51

Slow ULV 1Ghz laptop:
Algorithm Time Ratio Max StdDev
kr_hash 5.754460 1.19 2017 194.64
string10 6.698358 1.15 1638 171.29
sdbm 8.134431 1.15 1652 170.65
djb2 8.231058 1.17 1659 184.44
string_hash31 8.447873 1.15 1633 172.13
fnv32 8.552569 1.18 2170 189.61
string_hash17 9.226992 1.16 1616 175.01
full_name_hash 10.555072 1.15 1703 170.45
pjw 16.193485 1.17 1642 181.45
elf 19.770414 1.17 1642 181.45


2009-10-27 23:42:21

by Linus Torvalds

Subject: Re: [PATCH] dcache: better name hash function



On Tue, 27 Oct 2009, Stephen Hemminger wrote:
>
> Going back to basics. Run tests across different input sets.
> Dropping off the slow ones like crc, md5, ...
> Not using jhash because it doesn't have good character at a time
> interface.
>
> Also, the folding algorithm used matters. Since the kernel
> already uses hash_long() to fold back to N bits, all the
> tests were rerun with that.

Yeah, the 'hash_long()' folding matters for anything that doesn't multiply
by some big number to spread the bits out, because otherwise the bits
from the last character hashed will always be in the low bits.

That explains why our current hash looked so bad with your previous code.

From your numbers, I think we can dismiss 'kr_hash()' as having horrible
behavior with names like pppXXX (and that isn't just a special case: it's
also noticeably worse for your /home directory case, which means that the
bad behavior shows up in practice too, not just in some special cases).

'elf' and 'pjw' don't have quite the same bad case, but the stddev for the
pppXXX cases are still clearly worse than the other alternatives. They
also seem to always be slower than what we already have.

The 'fnv32' algorithm gets fairly good behavior, but seems bad on Itanium.
It looks like it depends on a fast multiplication unit, and even your
"slow" ULV chip seems to be a Core2 one, so all your x86 targets have
that. And our current name hash still actually seems to do better in all
cases (maybe I missed some case) even if fnv32 is slightly faster on x86.

From your list 'string10' seems to get consistently good results and is at
or near the top of performance too. It seems to be the one that
consistently beats 'full_name_hash()' both in performance and in behavior
(string_hash17/31 come close, but aren't as clearly better performing).

But I haven't actually seen the hashes. Maybe there's something that makes
string10 bad?

Regardless, one thing your new numbers do say is that our current hash
really isn't that bad.

Linus

2009-10-28 00:10:38

by Stephen Hemminger

Subject: Re: [PATCH] dcache: better name hash function

On Tue, 27 Oct 2009 16:41:52 -0700 (PDT)
Linus Torvalds <[email protected]> wrote:

>
>
> On Tue, 27 Oct 2009, Stephen Hemminger wrote:
> >
> > Going back to basics. Run tests across different input sets.
> > Dropping off the slow ones like crc, md5, ...
> > Not using jhash because it doesn't have a good character-at-a-time
> > interface.
> >
> > Also, the folding algorithm used matters. Since the kernel
> > already uses hash_long() to fold back to N bits, all the
> > tests were rerun with that.
>
> Yeah, the 'hash_long()' folding matters for anything that doesn't multiply
> by some big number to spread the bits out, because otherwise the bits
> from the last character hashed will always be in the low bits.
>
> That explains why our current hash looked so bad with your previous code.
>
> From your numbers, I think we can dismiss 'kr_hash()' as having horrible
> behavior with names like pppXXX (and that isn't just a special case: it's
> also noticeably worse for your /home directory case, which means that the
> bad behavior shows up in practice too, not just in some special cases).
>
> 'elf' and 'pjw' don't have quite the same bad case, but the stddev for the
> pppXXX cases are still clearly worse than the other alternatives. They
> also seem to always be slower than what we already have.
>
> The 'fnv32' algorithm gets fairly good behavior, but seems bad on Itanium.
> Looks like it depends on a fast multiplication unit, and even your
> "slow" ULV chip seems to be a Core2, so all your x86 targets have
> that. And our current name hash still actually seems to do better in all
> cases (maybe I missed some case) even if fnv32 is slightly faster on x86.
>
> From your list 'string10' seems to get consistently good results and is at
> or near the top of performance too. It seems to be the one that
> consistently beats 'full_name_hash()' both in performance and in behavior
> (string_hash17/31 come close, but aren't as clearly better performing).
>
> But I haven't actually seen the hashes. Maybe there's something that makes
> string10 bad?
>
> Regardless, one thing your new numbers do say is that our current hash
> really isn't that bad.
>
> Linus

Agreed. Here is the reduced version of the program.
To run:
find /home -printf '%f\n' 2>/dev/null | ./htest -n 100


Attachments:
(No filename) (2.29 kB)
htest.c (5.10 kB)

2009-10-28 00:59:25

by Linus Torvalds

Subject: Re: [PATCH] dcache: better name hash function



On Tue, 27 Oct 2009, Stephen Hemminger wrote:
>
> Agreed. Here is the reduced version of the program.
> To run:
> find /home -printf '%f\n' 2>/dev/null | ./htest -n 100

The timings are very sensitive to random I$ layout, at least on Nehalem.
The reason seems to be that the inner loop is _so_ tight that just
depending on exactly where the loop ends up, you can get subtle
interactions with the loop cache.

Look here:

[torvalds@nehalem ~]$ find /home -printf '%f\n' 2>/dev/null | ./htest -n 100
Algorithm Time Ratio Max StdDev
full_name_hash 1.141899 1.03 4868 263.37
djb2 0.980200 1.03 4835 266.05
string10 0.909175 1.03 4850 262.67
string10a 0.673915 1.03 4850 262.67
string10b 0.909374 1.03 4850 262.67
string_hash17 0.966050 1.03 4805 263.68
string_hash31 1.008544 1.03 4807 259.37
fnv32 0.774806 1.03 4817 259.17

what do you think the difference between 'string10', 'string10a' and
'string10b' are?

None. None whatsoever. The source code is identical, and gcc generates
identical assembly language. Yet those timings are extremely stable for
me, and 'string10b' is 25% faster than the identical string10 and
string10a functions.

The only difference? 'string10a' starts aligned to just 16 bytes, but that
in turn happens to mean that the tight inner loop ends up aligned on a
128-byte boundary. And being cacheline aligned just there seems to matter
for some subtle micro-architectural reason.

The reason I noticed this is that I wondered what small modifications to
'string10' would do for performance, and noticed that even _without_ the
small modifications, performance fluctuated.

Lesson? Microbenchmarks like this can be dangerous and misleading. That's
_especially_ true if the loop ends up being just tight enough that it can
fit in some trace cache or similar. In real life, the name hash is
performance-critical, but at the same time almost certainly won't be run
in a tight enough loop that you'd ever notice things like that.

Linus

2009-10-28 01:56:18

by Stephen Hemminger

Subject: Re: [PATCH] dcache: better name hash function

On Tue, 27 Oct 2009 17:58:53 -0700 (PDT)
Linus Torvalds <[email protected]> wrote:

>
>
> On Tue, 27 Oct 2009, Stephen Hemminger wrote:
> >
> > Agreed. Here is the reduced version of the program.
> > To run:
> > find /home -printf '%f\n' 2>/dev/null | ./htest -n 100
>
> The timings are very sensitive to random I$ layout at least on Nehalem.
> The reason seems to be that the inner loop is _so_ tight that just
> depending on exactly where the loop ends up, you can get subtle
> interactions with the loop cache.
>
> Look here:
>
> [torvalds@nehalem ~]$ find /home -printf '%f\n' 2>/dev/null | ./htest -n 100
> Algorithm Time Ratio Max StdDev
> full_name_hash 1.141899 1.03 4868 263.37
> djb2 0.980200 1.03 4835 266.05
> string10 0.909175 1.03 4850 262.67
> string10a 0.673915 1.03 4850 262.67
> string10b 0.909374 1.03 4850 262.67
> string_hash17 0.966050 1.03 4805 263.68
> string_hash31 1.008544 1.03 4807 259.37
> fnv32 0.774806 1.03 4817 259.17
>
> what do you think the difference between 'string10', 'string10a' and
> 'string10b' are?
>
> None. None whatsoever. The source code is identical, and gcc generates
> identical assembly language. Yet those timings are extremely stable for
> me, and 'string10b' is 25% faster than the identical string10 and
> string10a functions.
>
> The only difference? 'string10a' starts aligned to just 16 bytes, but that
> in turn happens to mean that the tight inner loop ends up aligned on a
> 128-byte boundary. And being cacheline aligned just there seems to matter
> for some subtle micro-architectural reason.
>
> The reason I noticed this is that I wondered what small modifications to
> 'string10' would do for performance, and noticed that even _without_ the
> small modifications, performance fluctuated.
>
> Lesson? Microbenchmarks like this can be dangerous and misleading. That's
> _especially_ true if the loop ends up being just tight enough that it can
> fit in some trace cache or similar. In real life, the name hash is
> performance-critical, but at the same time almost certainly won't be run
> in a tight enough loop that you'd ever notice things like that.
>
> Linus

Thanks. I wasn't putting a huge amount of stock in the microbenchmark;
I was more interested in how the distribution worked out (which is CPU
independent) than in the time. As long as all users of the name hash
fold properly, there isn't a lot of reason to change.



--

2009-10-28 06:07:18

by Eric Dumazet

Subject: Re: [PATCH] net: fold network name hash (v2)

Stephen Hemminger wrote:
> The full_name_hash does not produce a value that is evenly distributed
> over the lower 8 bits. This causes name hash to be unbalanced with large
> number of names. There is a standard function to fold in upper bits
> so use that.
>
> This is independent of possible improvements to full_name_hash()
> in future.

> static inline struct hlist_head *dev_name_hash(struct net *net, const char *name)
> {
> unsigned hash = full_name_hash(name, strnlen(name, IFNAMSIZ));
> - return &net->dev_name_head[hash & ((1 << NETDEV_HASHBITS) - 1)];
> + return &net->dev_name_head[hash_long(hash, NETDEV_HASHBITS)];
> }
>
> static inline struct hlist_head *dev_index_hash(struct net *net, int ifindex)

full_name_hash() returns an "unsigned int", which is guaranteed to be 32 bits.

You should therefore use hash_32(hash, NETDEV_HASHBITS),
not hash_long(), which maps to hash_64() on 64-bit arches and is
slower and certainly not any better with a 32-bit input.



/* Compute the hash for a name string. */
static inline unsigned int
full_name_hash(const unsigned char *name, unsigned int len)
{
unsigned long hash = init_name_hash();
while (len--)
hash = partial_name_hash(*name++, hash);
return end_name_hash(hash);
}

static inline u32 hash_32(u32 val, unsigned int bits)
{
/* On some cpus multiply is faster, on others gcc will do shifts */
u32 hash = val * GOLDEN_RATIO_PRIME_32;

/* High bits are more random, so use them. */
return hash >> (32 - bits);
}


static inline u64 hash_64(u64 val, unsigned int bits)
{
u64 hash = val;

/* Sigh, gcc can't optimise this alone like it does for 32 bits. */
u64 n = hash;
n <<= 18;
hash -= n;
n <<= 33;
hash -= n;
n <<= 3;
hash += n;
n <<= 3;
hash -= n;
n <<= 4;
hash += n;
n <<= 2;
hash += n;

/* High bits are more random, so use them. */
return hash >> (64 - bits);
}

2009-10-28 09:28:34

by David Miller

Subject: Re: [PATCH] net: fold network name hash (v2)

From: Eric Dumazet <[email protected]>
Date: Wed, 28 Oct 2009 07:07:10 +0100

> You should therefore use hash_32(hash, NETDEV_HASHBITS),
> not hash_long(), which maps to hash_64() on 64-bit arches and is
> slower and certainly not any better with a 32-bit input.

Agreed.

2009-10-28 15:57:30

by Stephen Hemminger

Subject: Re: [PATCH] net: fold network name hash (v2)

On Wed, 28 Oct 2009 07:07:10 +0100
Eric Dumazet <[email protected]> wrote:

> Stephen Hemminger wrote:
> > The full_name_hash does not produce a value that is evenly distributed
> > over the lower 8 bits. This causes name hash to be unbalanced with large
> > number of names. There is a standard function to fold in upper bits
> > so use that.
> >
> > This is independent of possible improvements to full_name_hash()
> > in future.
>
> > static inline struct hlist_head *dev_name_hash(struct net *net, const char *name)
> > {
> > unsigned hash = full_name_hash(name, strnlen(name, IFNAMSIZ));
> > - return &net->dev_name_head[hash & ((1 << NETDEV_HASHBITS) - 1)];
> > + return &net->dev_name_head[hash_long(hash, NETDEV_HASHBITS)];
> > }
> >
> > static inline struct hlist_head *dev_index_hash(struct net *net, int ifindex)
>
> full_name_hash() returns an "unsigned int", which is guaranteed to be 32 bits.
>
> You should therefore use hash_32(hash, NETDEV_HASHBITS),
> not hash_long(), which maps to hash_64() on 64-bit arches and is
> slower and certainly not any better with a 32-bit input.

OK, I was following precedent. Only a couple of places use hash_32();
most use hash_long().

Using the upper bits does give better distribution.
With 100,000 network names:

Time Ratio Max StdDev
hash_32 0.002123 1.00 422 11.07
hash_64 0.002927 1.00 400 3.97

The time field is pretty meaningless for such a small sample