2020-06-18 10:53:25

by David Laight

Subject: RE: [PATCH] x86/asm/64: Align start of __clear_user() loop to 16-bytes

From: Matt Fleming
> Sent: 18 June 2020 11:20
> x86 CPUs can suffer severe performance drops if a tight loop, such as
> the ones in __clear_user(), straddles a 16-byte instruction fetch
> window, or worse, a 64-byte cacheline. This issue was discovered in the
> SUSE kernel with the following commit,
>
> 1153933703d9 ("x86/asm/64: Micro-optimize __clear_user() - Use immediate constants")
>
> which increased the code object size from 10 bytes to 15 bytes and
> caused the 8-byte copy loop in __clear_user() to be split across a
> 64-byte cacheline.
>
> Aligning the start of the loop to 16-bytes makes this fit neatly inside
> a single instruction fetch window again and restores the performance of
> __clear_user() which is used heavily when reading from /dev/zero.
>
> Here are some numbers from running libmicro's read_z* and pread_z*
> microbenchmarks which read from /dev/zero:
>
> Zen 1 (Naples)
>
> libmicro-file
>                                5.7.0-rc6             5.7.0-rc6             5.7.0-rc6
>                                          revert-1153933703d9+              align16+
> Time mean95-pread_z100k     9.9195 (  0.00%)      5.9856 ( 39.66%)      5.9938 ( 39.58%)
> Time mean95-pread_z10k      1.1378 (  0.00%)      0.7450 ( 34.52%)      0.7467 ( 34.38%)
> Time mean95-pread_z1k       0.2623 (  0.00%)      0.2251 ( 14.18%)      0.2252 ( 14.15%)
> Time mean95-pread_zw100k    9.9974 (  0.00%)      6.0648 ( 39.34%)      6.0756 ( 39.23%)
> Time mean95-read_z100k      9.8940 (  0.00%)      5.9885 ( 39.47%)      5.9994 ( 39.36%)
> Time mean95-read_z10k       1.1394 (  0.00%)      0.7483 ( 34.33%)      0.7482 ( 34.33%)
>
> Note that this doesn't affect Haswell or Broadwell microarchitectures
> which seem to avoid the alignment issue by executing the loop straight
> out of the Loop Stream Detector (verified using perf events).

Which CPU was affected?
At least one source (http://www.agner.org/optimize) implies that both Ivy
Bridge and Sandy Bridge have uop caches which mean (if I've read it
correctly) that the loop shouldn't be affected by the alignment.

> diff --git a/arch/x86/lib/usercopy_64.c b/arch/x86/lib/usercopy_64.c
> index fff28c6f73a2..b0dfac3d3df7 100644
> --- a/arch/x86/lib/usercopy_64.c
> +++ b/arch/x86/lib/usercopy_64.c
> @@ -24,6 +24,7 @@ unsigned long __clear_user(void __user *addr, unsigned long size)
> asm volatile(
> " testq %[size8],%[size8]\n"
> " jz 4f\n"
> + " .align 16\n"
> "0: movq $0,(%[dst])\n"
> " addq $8,%[dst]\n"
> " decl %%ecx ; jnz 0b\n"

You can do better than that loop.
Change 'dst' to point to the end of the buffer, negate the count
and divide by 8 and you get:
"0: movq $0,(%[dst],%%ecx,8)\n"
"   add $1,%%ecx\n"
"   jnz 0b\n"
which might run at one iteration per clock, especially on CPUs that pair
the add and jnz into a single uop.
(You need to use add, not inc.)
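
A rough sketch of that variant (the helper name and constraints are
illustrative, and the trailing size % 8 bytes would still need the existing
byte-store handling):

/*
 * Indexed-store variant: point at the end of the region, keep a negative
 * 8-byte count, and let the add/jnz pair close the loop.
 */
#include <stddef.h>

static void clear_loop_indexed(void *addr, size_t size)
{
    size_t qwords = size / 8;
    void *end = (char *)addr + qwords * 8;
    long idx = -(long)qwords;   /* negative count, walks up towards zero */

    if (!qwords)
        return;

    asm volatile(
        "0: movq $0,(%[end],%[idx],8)\n"
        "  addq $1,%[idx]\n"    /* add rather than inc, as noted above */
        "  jnz 0b\n"
        : [idx] "+r" (idx)
        : [end] "r" (end)
        : "cc", "memory");
}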

David



2020-06-18 18:04:08

by Alexey Dobriyan

Subject: Re: [PATCH] x86/asm/64: Align start of __clear_user() loop to 16-bytes

On Thu, Jun 18, 2020 at 10:48:05AM +0000, David Laight wrote:
> From: Matt Fleming
> > Sent: 18 June 2020 11:20
> > x86 CPUs can suffer severe performance drops if a tight loop, such as
> > the ones in __clear_user(), straddles a 16-byte instruction fetch
> > window, or worse, a 64-byte cacheline. This issue was discovered in the
> > SUSE kernel with the following commit,
> >
> > 1153933703d9 ("x86/asm/64: Micro-optimize __clear_user() - Use immediate constants")
> >
> > which increased the code object size from 10 bytes to 15 bytes and
> > caused the 8-byte copy loop in __clear_user() to be split across a
> > 64-byte cacheline.
> >
> > Aligning the start of the loop to 16-bytes makes this fit neatly inside
> > a single instruction fetch window again and restores the performance of
> > __clear_user() which is used heavily when reading from /dev/zero.
> >
> > Here are some numbers from running libmicro's read_z* and pread_z*
> > microbenchmarks which read from /dev/zero:
> >
> > Zen 1 (Naples)
> >
> > libmicro-file
> >                                5.7.0-rc6             5.7.0-rc6             5.7.0-rc6
> >                                          revert-1153933703d9+              align16+
> > Time mean95-pread_z100k     9.9195 (  0.00%)      5.9856 ( 39.66%)      5.9938 ( 39.58%)
> > Time mean95-pread_z10k      1.1378 (  0.00%)      0.7450 ( 34.52%)      0.7467 ( 34.38%)
> > Time mean95-pread_z1k       0.2623 (  0.00%)      0.2251 ( 14.18%)      0.2252 ( 14.15%)
> > Time mean95-pread_zw100k    9.9974 (  0.00%)      6.0648 ( 39.34%)      6.0756 ( 39.23%)
> > Time mean95-read_z100k      9.8940 (  0.00%)      5.9885 ( 39.47%)      5.9994 ( 39.36%)
> > Time mean95-read_z10k       1.1394 (  0.00%)      0.7483 ( 34.33%)      0.7482 ( 34.33%)
> >
> > Note that this doesn't affect Haswell or Broadwell microarchitectures
> > which seem to avoid the alignment issue by executing the loop straight
> > out of the Loop Stream Detector (verified using perf events).
>
> Which CPU was affected?
> At least one source (http://www.agner.org/optimize) implies that both Ivy
> Bridge and Sandy Bridge have uop caches which mean (if I've read it
> correctly) that the loop shouldn't be affected by the alignment.
>
> > diff --git a/arch/x86/lib/usercopy_64.c b/arch/x86/lib/usercopy_64.c
> > index fff28c6f73a2..b0dfac3d3df7 100644
> > --- a/arch/x86/lib/usercopy_64.c
> > +++ b/arch/x86/lib/usercopy_64.c
> > @@ -24,6 +24,7 @@ unsigned long __clear_user(void __user *addr, unsigned long size)
> > asm volatile(
> > " testq %[size8],%[size8]\n"
> > " jz 4f\n"
> > + " .align 16\n"
> > "0: movq $0,(%[dst])\n"
> > " addq $8,%[dst]\n"
> > " decl %%ecx ; jnz 0b\n"
>
> You can do better than that loop.
> Change 'dst' to point to the end of the buffer, negate the count
> and divide by 8 and you get:
> "0: movq $0,(%[dst],%%ecx,8)\n"
> "   add $1,%%ecx\n"
> "   jnz 0b\n"
> which might run at one iteration per clock, especially on CPUs that pair
> the add and jnz into a single uop.
> (You need to use add, not inc.)

/dev/zero should probably use REP STOSB etc just like everything else.
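
A minimal sketch of what that could look like (user-space style with an
illustrative name; a real __clear_user() variant would also need STAC/CLAC
and an exception-table entry for the faulting case, and would presumably be
gated on X86_FEATURE_ERMS):

/* Clear 'size' bytes at 'addr' with rep stosb; the "a" input zeroes al. */
#include <stddef.h>

static void clear_rep_stosb(void *addr, size_t size)
{
    asm volatile(
        "rep stosb"
        : "+D" (addr), "+c" (size)
        : "a" (0)
        : "memory");
}

On CPUs with enhanced REP MOVSB/STOSB this tends to be competitive with, or
faster than, an open-coded store loop for larger sizes.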