2012-10-22 13:40:47

by Ling Ma

Subject: [PATCH RFC V2] [x86] Optimize small size memcpy by avoiding long latency from decode stage

From: Ma Ling <[email protected]>

CISC code has higher instruction density, saving memory and
improving the i-cache hit rate. However, decode becomes a challenge:
only one multiple-uop (2~3 uops) instruction can be decoded per cycle,
and instructions containing more than 4 uops (rep movsq/b) have to be
handled by the MS-ROM. That process takes a long time and, for small
sizes, eats up the advantage.

To avoid this disadvantage, we use general instruction code for
small-size copies. The results show a 1~2x improvement on Core2,
Nehalem, Sandy Bridge, Ivy Bridge, Atom, and Bulldozer.
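
A minimal user-space C sketch of the dispatch idea (illustrative only:
the function name, the unrolled body, and the user-space setting are
hypothetical, not the kernel routine):

#include <stddef.h>
#include <stdint.h>

/*
 * Sizes at or below the warm-up distance (256 bytes, as in the patch)
 * take the plain-MOV path, which decodes without the microcode
 * sequencer; larger sizes use REP MOVSB, whose startup cost is then
 * amortized.  Unaligned 8-byte accesses are fine on x86.
 */
static void *sketch_memcpy(void *dst, const void *src, size_t n)
{
    unsigned char *d = dst;
    const unsigned char *s = src;

    if (n <= 256) {
        while (n >= 8) {                /* 8 bytes per iteration */
            *(uint64_t *)d = *(const uint64_t *)s;
            d += 8;
            s += 8;
            n -= 8;
        }
        while (n--)                     /* byte tail */
            *d++ = *s++;
    } else {
        /* dst/src/count go in rdi/rsi/rcx for the string op */
        __asm__ volatile("rep movsb"
                         : "+D"(d), "+S"(s), "+c"(n)
                         :
                         : "memory");
    }
    return dst;
}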

Signed-off-by: Ma Ling <[email protected]>
---
In this version we decrease the warm-up distance from 512 to 256 bytes
for upcoming CPUs, which reduces latency, although the long decode time
is still incurred.

Thanks
Ling

arch/x86/lib/memcpy_64.S | 14 +++++++++++++-
1 files changed, 13 insertions(+), 1 deletions(-)

diff --git a/arch/x86/lib/memcpy_64.S b/arch/x86/lib/memcpy_64.S
index 1c273be..6a24c8c 100644
--- a/arch/x86/lib/memcpy_64.S
+++ b/arch/x86/lib/memcpy_64.S
@@ -5,7 +5,6 @@
#include <asm/cpufeature.h>
#include <asm/dwarf2.h>
#include <asm/alternative-asm.h>
-
/*
* memcpy - Copy a memory block.
*
@@ -19,6 +18,15 @@
*/

/*
+ * memcpy_c() and memcpy_c_e() use rep movsq and rep movsb respectively;
+ * those instructions have to fetch their micro-ops from the Microcode
+ * Sequencer ROM. That decode step takes long latency, so to avoid it
+ * we choose a loop-unrolling routine for small sizes.
+ * The warm-up distance could be varied.
+ */
+
+
+/*
* memcpy_c() - fast string ops (REP MOVSQ) based variant.
*
* This gets patched over the unrolled variant (below) via the
@@ -26,6 +34,8 @@
*/
.section .altinstr_replacement, "ax", @progbits
.Lmemcpy_c:
+ cmpq $256, %rdx
+ jbe memcpy
movq %rdi, %rax
movq %rdx, %rcx
shrq $3, %rcx
@@ -46,6 +56,8 @@
*/
.section .altinstr_replacement, "ax", @progbits
.Lmemcpy_c_e:
+ cmpq $256, %rdx
+ jbe memcpy
movq %rdi, %rax
movq %rdx, %rcx
rep movsb
--
1.6.5.2


2012-10-22 09:23:18

by Ling Ma

Subject: Re: [PATCH RFC V2] [x86] Optimize small size memcpy by avoiding long latency from decode stage

Attached are the memcpy micro-benchmark, CPU info, and comparison
results between rep movsq/b and memcpy on Atom and IVB.
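
A minimal rdtsc harness in the spirit of the attached memcpy-kernel.c
(buffer sizes and iteration count are illustrative, not taken from the
attachment) looks roughly like:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Read the time-stamp counter (x86-64, GCC/Clang inline asm). */
static inline uint64_t rdtsc(void)
{
    uint32_t lo, hi;

    __asm__ volatile("rdtsc" : "=a"(lo), "=d"(hi));
    return ((uint64_t)hi << 32) | lo;
}

int main(void)
{
    static unsigned char src[4096], dst[4096];
    size_t len;

    /* Sweep the small-copy sizes the patch targets. */
    for (len = 8; len <= 512; len *= 2) {
        enum { ITER = 100000 };
        uint64_t t0 = rdtsc();
        int i;

        for (i = 0; i < ITER; i++) {
            memcpy(dst, src, len);
            /* compiler barrier: keep the copy from being optimized out */
            __asm__ volatile("" : : "r"(dst) : "memory");
        }
        printf("len %4zu: %.1f cycles/copy\n",
               len, (double)(rdtsc() - t0) / ITER);
    }
    return 0;
}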

Thanks
Ling


2012/10/23, [email protected] <[email protected]>:
> From: Ma Ling <[email protected]>
>
> [... V2 patch quoted in full; see the original message above ...]


Attachments:
atom-cpu-info (1.43 kB)
atom-memcpy-result (2.20 kB)
ivb-cpu-info (3.70 kB)
ivb-memcpy-result (2.11 kB)
memcpy-kernel.c (6.80 kB)