On Sat, May 28, 2022 at 04:12:35PM +0800, Yu-Jen Chang wrote:
> The original assembly version of memchr() is implemented with
> the byte-wise comparing technique, which does not fully
> use the 64-bit registers of the x86_64 CPU. We use word-wise
> comparing so that 8 characters can be compared at the same time
> on the x86_64 CPU. First we align the input and then use word-wise
> comparing to find the first 64-bit word that contains the target.
> Secondly, we compare every byte in that word and get the output.
>
> We create two files to measure the performance. The first file
> contains on average 10 characters before the target character.
> The second file contains at least 1000 characters before the
> target character. Our implementation of memchr() is slightly
> better in the first test and nearly 4x faster than the original
> implementation in the second test.
>
> Signed-off-by: Yu-Jen Chang <[email protected]>
> Signed-off-by: Ching-Chun (Jim) Huang <[email protected]>
> ---
> arch/x86/include/asm/string_64.h | 3 ++
> arch/x86/lib/Makefile | 1 +
> arch/x86/lib/string_64.c | 78 ++++++++++++++++++++++++++++++++
> 3 files changed, 82 insertions(+)
> create mode 100644 arch/x86/lib/string_64.c
>
> diff --git a/arch/x86/include/asm/string_64.h b/arch/x86/include/asm/string_64.h
> index 6e450827f..edce657e0 100644
> --- a/arch/x86/include/asm/string_64.h
> +++ b/arch/x86/include/asm/string_64.h
> @@ -14,6 +14,9 @@
> extern void *memcpy(void *to, const void *from, size_t len);
> extern void *__memcpy(void *to, const void *from, size_t len);
>
> +#define __HAVE_ARCH_MEMCHR
> +extern void *memchr(const void *cs, int c, size_t length);
> +
> #define __HAVE_ARCH_MEMSET
> void *memset(void *s, int c, size_t n);
> void *__memset(void *s, int c, size_t n);
> diff --git a/arch/x86/lib/Makefile b/arch/x86/lib/Makefile
> index f76747862..4d530e559 100644
> --- a/arch/x86/lib/Makefile
> +++ b/arch/x86/lib/Makefile
> @@ -69,5 +69,6 @@ else
> lib-y += clear_page_64.o copy_page_64.o
> lib-y += memmove_64.o memset_64.o
> lib-y += copy_user_64.o
> + lib-y += string_64.o
> lib-y += cmpxchg16b_emu.o
> endif
> diff --git a/arch/x86/lib/string_64.c b/arch/x86/lib/string_64.c
> new file mode 100644
> index 000000000..4e067d5be
> --- /dev/null
> +++ b/arch/x86/lib/string_64.c
> @@ -0,0 +1,78 @@
> +// SPDX-License-Identifier: GPL-2.0
> +#include <linux/string.h>
> +#include <linux/export.h>
> +#include <linux/align.h>
> +
> +/* How many bytes are loaded each iteration of the word copy loop */
> +#define LBLOCKSIZE (sizeof(long))
> +
> +#ifdef __HAVE_ARCH_MEMCHR
> +
> +void *memchr(const void *cs, int c, size_t length)
> +{
> + const unsigned char *src = (const unsigned char *)cs, d = c;
I don't know why this 'd = c' is not an error.
d looks like a char pointer to me and c is an int. At least I am not sure this is safe.
> + while (!IS_ALIGNED((long)src, sizeof(long))) {
> + if (!length--)
> + return NULL;
> + if (*src == d)
This compares a character value to a pointer value, and that value is c.
It may be right, I do not know.
Or perhaps:
char d = c;
...
> + return (void *)src;
> + src++;
> + }
> + if (length >= LBLOCKSIZE) {
> + unsigned long mask = d << 8 | d;
> + unsigned int i = 32;
> + long xor, data;
> + const long consta = 0xFEFEFEFEFEFEFEFF,
> + constb = 0x8080808080808080;
Two magic numbers..
> + /*
> + * Create a 8-bytes mask for word-wise comparing.
> + * For example, a mask for 'a' is 0x6161616161616161.
> + */
> +
> + mask |= mask << 16;
> + for (i = 32; i < LBLOCKSIZE * 8; i <<= 1)
> + mask |= mask << i;
> + /*
> + * We perform word-wise comparing with following operation:
> + * 1. Perform xor on the long word @src and @mask
> + * and put into @xor.
> + * 2. Add @xor with @consta.
> + * 3. ~@xor & @constb.
> + * 4. Perform & with the result of step 2 and 3.
> + *
> + * Step 1 creates a byte which is 0 in the long word if
> + * there is at least one target byte in it.
> + *
> + * Step 2 to Step 4 find if there is a byte with 0 in
> + * the long word.
> + */
> + asm volatile("1:\n\t"
> + "movq (%0),%1\n\t"
> + "xorq %6,%1\n\t"
> + "lea (%1,%4), %2\n\t"
> + "notq %1\n\t"
> + "andq %5,%1\n\t"
> + "testq %1,%2\n\t"
> + "jne 2f\n\t"
s/jne/jnz/
I may be missing a lot here, but I give an example below suggesting that
the check should be that the CF flag is zero.
1) contains a matching byte.

1111111011111111(consta)
+add
0000000001101100(xor)
------------------------
1111111101101011 (%1)

1111111110010011(~xor)
&and
1000000010000000(constb)
------------------------
1000000010000000 (%2)

the logical and of %1 and %2 is
1000000000000000 that is not zero.

2) does not contain a matching byte

1111111011111111
+
0110111011011100
----------------
0110110111011011(%1)

1001000100100011
&
1000000010000000
----------------
1000000000000000(%2)

%1 and %2 is
0000000000000000 that is zero.

I guess that the jump instruction jnz should be used here instead.
Even so, I do not know why those two magic numbers are so magical..
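
For what it is worth, the two cases above can be reproduced with a small
userspace sketch (this is only my 16-bit reading of the asm, not a suggested
change; the real code uses the 64-bit constants):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	const uint16_t consta = 0xFEFF, constb = 0x8080;
	/* xor = word ^ mask for case 1 (match) and case 2 (no match). */
	const uint16_t xors[2] = { 0x006C, 0x6EDC };

	for (int i = 0; i < 2; i++) {
		uint16_t x = xors[i];
		uint16_t sum = (uint16_t)(x + consta);    /* lea (%1,%4),%2 */
		uint16_t nand = (uint16_t)(~x & constb);  /* notq + andq %5,%1 */

		/* testq %1,%2: a non-zero result means a zero (matching) byte. */
		printf("case %d: %s\n", i + 1, (sum & nand) ? "match" : "no match");
	}
	return 0;
}
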
Thanks,
Tao
> + "add $8,%0\n\t"
> + "sub $8,%3\n\t"
> + "cmp $7,%3\n\t"
> + "ja 1b\n\t"
> + "2:\n\t"
> + : "=D"(src), "=r"(xor), "=r"(data), "=r"(length)
> + : "r"(consta), "r"(constb), "r"(mask), "0"(src),
> + "1"(xor), "2"(data), "3"(length)
> + : "memory", "cc");
> + }
> +
> + while (length--) {
> + if (*src == d)
> + return (void *)src;
> + src++;
> + }
> + return NULL;
> +}
> +EXPORT_SYMBOL(memchr);
> +#endif
> --
> 2.25.1
>
On Sun, May 29, 2022 at 12:40 AM Tao Zhou <[email protected]> wrote:
>
> On Sat, May 28, 2022 at 04:12:35PM +0800, Yu-Jen Chang wrote:
>
> > The original assembly version of memchr() is implemented with
> > the byte-wise comparing technique, which does not fully
> > use the 64-bit registers of the x86_64 CPU. We use word-wise
> > comparing so that 8 characters can be compared at the same time
> > on the x86_64 CPU. First we align the input and then use word-wise
> > comparing to find the first 64-bit word that contains the target.
> > Secondly, we compare every byte in that word and get the output.
> >
> > We create two files to measure the performance. The first file
> > contains on average 10 characters before the target character.
> > The second file contains at least 1000 characters before the
> > target character. Our implementation of memchr() is slightly
> > better in the first test and nearly 4x faster than the original
> > implementation in the second test.
> >
> > Signed-off-by: Yu-Jen Chang <[email protected]>
> > Signed-off-by: Ching-Chun (Jim) Huang <[email protected]>
> > ---
> > arch/x86/include/asm/string_64.h | 3 ++
> > arch/x86/lib/Makefile | 1 +
> > arch/x86/lib/string_64.c | 78 ++++++++++++++++++++++++++++++++
> > 3 files changed, 82 insertions(+)
> > create mode 100644 arch/x86/lib/string_64.c
> >
> > diff --git a/arch/x86/include/asm/string_64.h b/arch/x86/include/asm/string_64.h
> > index 6e450827f..edce657e0 100644
> > --- a/arch/x86/include/asm/string_64.h
> > +++ b/arch/x86/include/asm/string_64.h
> > @@ -14,6 +14,9 @@
> > extern void *memcpy(void *to, const void *from, size_t len);
> > extern void *__memcpy(void *to, const void *from, size_t len);
> >
> > +#define __HAVE_ARCH_MEMCHR
> > +extern void *memchr(const void *cs, int c, size_t length);
> > +
> > #define __HAVE_ARCH_MEMSET
> > void *memset(void *s, int c, size_t n);
> > void *__memset(void *s, int c, size_t n);
> > diff --git a/arch/x86/lib/Makefile b/arch/x86/lib/Makefile
> > index f76747862..4d530e559 100644
> > --- a/arch/x86/lib/Makefile
> > +++ b/arch/x86/lib/Makefile
> > @@ -69,5 +69,6 @@ else
> > lib-y += clear_page_64.o copy_page_64.o
> > lib-y += memmove_64.o memset_64.o
> > lib-y += copy_user_64.o
> > + lib-y += string_64.o
> > lib-y += cmpxchg16b_emu.o
> > endif
> > diff --git a/arch/x86/lib/string_64.c b/arch/x86/lib/string_64.c
> > new file mode 100644
> > index 000000000..4e067d5be
> > --- /dev/null
> > +++ b/arch/x86/lib/string_64.c
> > @@ -0,0 +1,78 @@
> > +// SPDX-License-Identifier: GPL-2.0
> > +#include <linux/string.h>
> > +#include <linux/export.h>
> > +#include <linux/align.h>
> > +
> > +/* How many bytes are loaded each iteration of the word copy loop */
> > +#define LBLOCKSIZE (sizeof(long))
> > +
> > +#ifdef __HAVE_ARCH_MEMCHR
> > +
> > +void *memchr(const void *cs, int c, size_t length)
> > +{
> > + const unsigned char *src = (const unsigned char *)cs, d = c;
>
> I don't know why this 'd = c' is not an error.
> d looks like a char pointer to me and c is an int. At least I am not sure this is safe.
>
The variable d is a character value, not a pointer, so 'd = c'
is not an error.
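
To illustrate with the declaration in question (annotated here for clarity,
not a proposed change):

	/*
	 * The '*' declarator binds to src only, so this line declares src
	 * as "pointer to const unsigned char" and d as a plain
	 * "const unsigned char". The int c is implicitly converted to
	 * unsigned char when initializing d, which is the conversion
	 * memchr() needs anyway.
	 */
	const unsigned char *src = (const unsigned char *)cs, d = c;
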
> > + while (!IS_ALIGNED((long)src, sizeof(long))) {
> > + if (!length--)
> > + return NULL;
> > + if (*src == d)
>
> This compares a character value to a pointer value, and that value is c.
> It may be right, I do not know.
>
> Or perhaps:
>
> char d = c;
> ...
>
> > + return (void *)src;
> > + src++;
> > + }
> > + if (length >= LBLOCKSIZE) {
> > + unsigned long mask = d << 8 | d;
> > + unsigned int i = 32;
> > + long xor, data;
> > + const long consta = 0xFEFEFEFEFEFEFEFF,
> > + constb = 0x8080808080808080;
>
> Two magic numbers..
>
> > + /*
> > + * Create a 8-bytes mask for word-wise comparing.
> > + * For example, a mask for 'a' is 0x6161616161616161.
> > + */
> > +
> > + mask |= mask << 16;
> > + for (i = 32; i < LBLOCKSIZE * 8; i <<= 1)
> > + mask |= mask << i;
> > + /*
> > + * We perform word-wise comparing with following operation:
> > + * 1. Perform xor on the long word @src and @mask
> > + * and put into @xor.
> > + * 2. Add @xor with @consta.
> > + * 3. ~@xor & @constb.
> > + * 4. Perform & with the result of step 2 and 3.
> > + *
> > + * Step 1 creates a byte which is 0 in the long word if
> > + * there is at least one target byte in it.
> > + *
> > + * Step 2 to Step 4 find if there is a byte with 0 in
> > + * the long word.
> > + */
> > + asm volatile("1:\n\t"
> > + "movq (%0),%1\n\t"
> > + "xorq %6,%1\n\t"
> > + "lea (%1,%4), %2\n\t"
> > + "notq %1\n\t"
> > + "andq %5,%1\n\t"
> > + "testq %1,%2\n\t"
> > + "jne 2f\n\t"
>
> s/jne/jnz/
>
> I may be missing a lot here, but I give an example below suggesting that
> the check should be that the CF flag is zero.
>
> 1) contains a matching byte.
>
> 1111111011111111(consta)
> +add
> 0000000001101100(xor)
> ------------------------
> 1111111101101011 (%1)
>
>
> 1111111110010011(~xor)
> &and
> 1000000010000000(constb)
> ------------------------
> 1000000010000000 (%2)
>
>
> the logical and of %1 and %2 is
> 1000000000000000 that is not zero.
>
> 2) does not contain a matching byte
>
> 1111111011111111
> +
> 0110111011011100
> ----------------
> 0110110111011011(%1)
>
> 1001000100100011
> &
> 1000000010000000
> ----------------
> 1000000000000000(%2)
>
> %1 and %2 is
> 0000000000000000 that is zero.
>
> I guess that the jump instruction jnz should be used here instead.
> Even so, I do not know why those two magic numbers are so magical..
>
> Thanks,
> Tao
According to the Intel® 64 and IA-32 Architectures Software Developer's Manual,
Vol. 2A, pages 3-484 and 3-485, the instructions jne and jnz both jump according
to the zero flag: they jump when ZF = 0. Here I believe that using jnz makes the
assembly code easier to understand.

consta is the 2's complement of 0x0101010101010101. That is, we perform a
subtraction here, so it is not necessary to check the CF flag.

Here I explain how these two magic numbers work.

After we xor the long word with the mask, a matching byte becomes 0x00. The
goal of subtracting 0x0101010101010101 is to turn that zero byte into 0xFF.
The following step, ~xor & constb, transforms the zero byte into 0x80.
Then, 0xFF & 0x80 = 0x80, so the final and is non-zero.

On the other hand, for a non-zero byte, let k (0 <= k <= 7) be the position
of the lowest 1 in that byte; the lowest k bits are 0. After the addition,
the byte ends in a single 0 bit followed by k bits of value 1. To explain
clearly, we perform the & with (~xor) first: the byte turns into k bits of 1,
which is 2^k - 1. Then we & the previous result with constb and get 0. In the
code we simply compute (~xor) & 0x8080808080808080 first.
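
To summarize, the check the inline assembly performs on each long word is
equivalent to the following plain C helper (illustration only, not part of
the patch; the helper name is made up, and the mask is built by
multiplication here instead of the shift sequence used in the patch):

static inline int word_contains_byte(unsigned long word, unsigned char d)
{
	/* Repeat @d in every byte, e.g. 'a' -> 0x6161616161616161. */
	unsigned long mask = d * 0x0101010101010101UL;
	unsigned long xor = word ^ mask;	/* a matching byte becomes 0x00 */

	/*
	 * Adding consta is the same as subtracting 0x0101010101010101:
	 * a zero byte wraps to 0xFF and keeps bit 7 set, while a non-zero
	 * byte ends up with bit 7 clear in (xor + consta) & ~xor.
	 */
	return ((xor + 0xFEFEFEFEFEFEFEFFUL) & ~xor & 0x8080808080808080UL) != 0;
}

With such a helper, the word loop is just: load an aligned long from src,
test word_contains_byte(word, d), and advance by 8 bytes until it returns
non-zero or fewer than 8 bytes remain.
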
Thanks,
Yu-Jen Chang
> > + "add $8,%0\n\t"
> > + "sub $8,%3\n\t"
> > + "cmp $7,%3\n\t"
> > + "ja 1b\n\t"
> > + "2:\n\t"
> > + : "=D"(src), "=r"(xor), "=r"(data), "=r"(length)
> > + : "r"(consta), "r"(constb), "r"(mask), "0"(src),
> > + "1"(xor), "2"(data), "3"(length)
> > + : "memory", "cc");
> > + }
> > +
> > + while (length--) {
> > + if (*src == d)
> > + return (void *)src;
> > + src++;
> > + }
> > + return NULL;
> > +}
> > +EXPORT_SYMBOL(memchr);
> > +#endif
> > --
> > 2.25.1
> >