From: Linus Torvalds
> Sent: 12 September 2023 19:49
>
> On Mon, 11 Sept 2023 at 03:38, David Laight <[email protected]> wrote:
> >
> > The overhead of 'rep movsb' is about 36 clocks, 'rep movsq' only 16.
Interestingly, exactly the same test changed its mind for no apparent reason!
While I got repeatable clock counts (+/-1 in the low hundreds) when
looping the test (i.e. the 'hot cache' cases), I couldn't pin down
the exact overhead well enough to get an accurate bytes/clock figure.
OTOH it was between 30 and 35 - so most likely 32.
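For anyone wanting to repeat the arithmetic: two sample lengths are enough
to separate the fixed overhead from the steady-state rate - the snippet
below is just that calculation, with made-up numbers rather than real
measurements:

#include <stdio.h>

/*
 * Separate the fixed 'rep movs' startup overhead from the steady-state
 * bytes/clock by measuring two copy lengths and solving
 *     clocks(n) = overhead + n / rate
 * The two sample points are invented placeholders, not measured data.
 */
int main(void)
{
	double n1 = 256, c1 = 40;		/* bytes, clocks (hypothetical) */
	double n2 = 1024, c2 = 64;

	double rate = (n2 - n1) / (c2 - c1);	/* bytes per clock */
	double ovhd = c1 - n1 / rate;		/* fixed clock overhead */

	printf("rate = %.1f bytes/clock, overhead = %.1f clocks\n", rate, ovhd);
	return 0;
}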
> Note that the hard case for 'rep movsq' is when the stores cross a
> cacheline (or worse yet, a page) boundary.
Page faults on misaligned transfers are what makes them hard to implement.
OTOH that is a 'hardware problem' and not specifically worth optimising
for - since it is unlikely in practice.
> That is what makes 'rep movsb' fundamentally simpler in theory. The
> natural reaction is "but movsq does things 8 bytes at a time", but
> once you start doing any kind of optimizations that are actually based
> on bigger areas, the byte counts are actually simpler. You can always
> do them as masked writes up to whatever boundary you like, and just
> restart. There are never any "what about the straddling bytes" issues.
What I found seemed to imply that 'rep movsq' used the same internal
logic as 'rep movsb' (pretty easy to do in hardware) even though I
don't think the ERMS docs say anything about that.
It may well be cpu specific - but I'd expect later ones to be the same.
(AMD cpus will be different, and I don't have access to anything new.)
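For reference, the feature bit itself is easy to check from user space
(CPUID leaf 7, subleaf 0: ERMS is EBX bit 9, FSRM is EDX bit 4):

#include <cpuid.h>
#include <stdio.h>

/* Report the ERMS ("enhanced rep movsb/stosb") and FSRM ("fast short
 * rep movsb") feature bits from CPUID leaf 7, subleaf 0. */
int main(void)
{
	unsigned int eax, ebx, ecx, edx;

	if (!__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx))
		return 1;

	printf("ERMS: %s\n", (ebx & (1u << 9)) ? "yes" : "no");
	printf("FSRM: %s\n", (edx & (1u << 4)) ? "yes" : "no");
	return 0;
}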
> That's one of the dangers with benchmarking. Do you benchmark the
> unaligned cases? How much do they matter in real life? Do they even
> happen?
A microbenchmark can tell you how bad they are.
Real life will almost certainly be different.
I did some buffer offset measurements (the buffers should have been
8k aligned).
For reasonable length copies (1024 bytes) the source alignment made
almost no difference.
What did matter was the destination alignment; anything other than
a multiple of 32 halved the transfer speed (regardless of the
source alignment).
So 32n+1, 32n+8 and 32n+16 were all equally bad.
Some values reduced it further, possibly writes affecting the
read prefetches - who knows.
Anyway it seemed like there is a pipelined barrel shifter on
the read side (so it could read 32 bytes/clock) but the write
side was having to do two writes of each misaligned block.
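Roughly the shape of the test, for anyone wanting to reproduce it -
the buffer handling, iteration count and use of rdtsc below are
illustrative simplifications, not the exact harness behind the
numbers above:

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <x86intrin.h>

/* 'rep movsb' takes dst/src/count in rdi/rsi/rcx. */
static inline void rep_movsb(void *dst, const void *src, size_t len)
{
	asm volatile("rep movsb"
		     : "+D" (dst), "+S" (src), "+c" (len)
		     :
		     : "memory");
}

int main(void)
{
	enum { LEN = 1024, ITERS = 5 };
	unsigned char *src = aligned_alloc(8192, 8192);
	unsigned char *dst = aligned_alloc(8192, 8192);

	if (!src || !dst)
		return 1;
	memset(src, 0x5a, 8192);

	/* Sweep the destination offset; the interesting split is
	 * multiples of 32 against everything else. */
	for (unsigned int off = 0; off < 64; off++) {
		uint64_t best = ~0ull;

		for (int i = 0; i < ITERS; i++) {
			/* rdtsc counts reference cycles, so this only means
			 * something if the core clock is locked - see below. */
			uint64_t t0 = __rdtsc();
			rep_movsb(dst + off, src, LEN);
			uint64_t t1 = __rdtsc();

			if (t1 - t0 < best)
				best = t1 - t0;
		}
		printf("dst offset %2u: %llu cycles\n",
		       off, (unsigned long long)best);
	}
	free(src);
	free(dst);
	return 0;
}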
> And that's entirely ignoring any "cold vs hot caches" etc issues, or
> the "what is the cost of access _after_ the memcpy/memset".
I count the clocks for 5 iterations.
The first 'cold cache' is massively slower.
The rest are pretty much identical - so 5 is plenty.
For microbenchmarks you really want to assume 'hot cache'.
The cache loads/invalidates will (usually) be needed sometime,
so there is no point destroying a benchmark by including them.
Especially when looking at short code loops.
I'm definitely against running 10000 iterations and measuring wall time.
It really doesn't tell you anything useful.
I can't even use rdtsc because I can't lock the cpu clock frequency,
so TSC ticks don't correspond to core clocks.
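One way round that is to count actual core clocks with a perf_event_open()
cycles counter instead of the TSC - a minimal sketch, with no real error
handling, and not necessarily the harness behind the numbers above:

#include <linux/perf_event.h>
#include <stdint.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Open a per-thread counter of actual core clock cycles (not TSC
 * reference cycles), counting user space only. */
static int cycles_open(void)
{
	struct perf_event_attr attr;

	memset(&attr, 0, sizeof(attr));
	attr.size = sizeof(attr);
	attr.type = PERF_TYPE_HARDWARE;
	attr.config = PERF_COUNT_HW_CPU_CYCLES;
	attr.exclude_kernel = 1;
	attr.exclude_hv = 1;

	/* this thread, any cpu, no group leader, no flags */
	return syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
}

static uint64_t cycles_read(int fd)
{
	uint64_t count = 0;

	if (read(fd, &count, sizeof(count)) != sizeof(count))
		return 0;
	return count;
}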
> Or, in the case of the kernel, our issues with "function calls can now
> be surprisingly expensive, and if we can inline things it can win back
> 20 cycles from a forced mispredict".
And then glibc probably adds a pile of wrappers to convert open()
into openat(O_EMPTY_PATH,....) for no good reason.
Undoing all the good work.
And then there is the code I've written for an fpga soft-core
which has a limited number of clocks to do its work.
That required careful analysis of static branch prediction (having
found the hidden menu to disable the dynamic predictor) to ensure
that the worst case paths were fast enough - rather than making
the common paths fastest.
...
> So beware microbenchmarks. That's true in general, but it's
> _particularly_ true of memset/memcpy.
I see the problem benchmarks as being the ones that are massively
unrolled and run fast(ish) in a benchmark but just destroy the i-cache.
If I'm microbenchmarking I'm trying to see if I can get the
cpu to execute as many instructions in parallel as it should,
so that the code loop is limited by (eg) the number of reads.
If you can get a short loop to run 'as fast as possible' there
is no point doing a 1980's unroll.
If anyone had done a microbenchmark of the chain of 'adc' instructions
in the ip checksum code (eg on Intel core-2) they'd have panicked about
each one taking two clocks.
Even with more recent cpus you need to adc to alternate registers.
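For the curious, 'adc to alternate registers' means something like the
snippet below - my own illustrative unroll and fold-up, not the kernel's
csum_partial():

#include <stdint.h>

/*
 * One's-complement sum over 64 bytes using a single 'adc' carry chain
 * that alternates its destination between two registers.  Illustrative
 * only - unroll factor, register allocation and the C fold-up are
 * arbitrary choices.
 */
static inline uint64_t csum64_block(const uint64_t *p, uint64_t sum)
{
	uint64_t a = sum, b = 0, carry;

	asm("addq   0(%[p]), %[a]\n\t"	/* starts the carry chain */
	    "adcq   8(%[p]), %[b]\n\t"	/* alternate destination */
	    "adcq  16(%[p]), %[a]\n\t"
	    "adcq  24(%[p]), %[b]\n\t"
	    "adcq  32(%[p]), %[a]\n\t"
	    "adcq  40(%[p]), %[b]\n\t"
	    "adcq  48(%[p]), %[a]\n\t"
	    "adcq  56(%[p]), %[b]\n\t"
	    "sbbq  %[c], %[c]"		/* c = -CF: capture the last carry */
	    : [a] "+&r" (a), [b] "+&r" (b), [c] "=&r" (carry)
	    : [p] "r" (p)
	    : "cc", "memory");

	/* Fold the two accumulators (and final carry) back together
	 * with end-around carries. */
	__uint128_t t = (__uint128_t)a + b + (carry ? 1 : 0);
	uint64_t lo = (uint64_t)t, hi = (uint64_t)(t >> 64);

	lo += hi;
	if (lo < hi)		/* wrapped: add the end-around carry */
		lo++;
	return lo;
}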
David
On Tue, 12 Sept 2023 at 12:41, David Laight <[email protected]> wrote:
>
> What I found seemed to imply that 'rep movsq' used the same internal
> logic as 'rep movsb' (pretty easy to do in hardware)
Christ.
I told you. It's pretty easy in hardware AS LONG AS IT'S ALIGNED.
And if it's unaligned, "rep movsq" is FUNDAMENTALLY HARDER.
Really.
Linus
From: Linus Torvalds
> Sent: 12 September 2023 21:48
>
> On Tue, 12 Sept 2023 at 12:41, David Laight <[email protected]> wrote:
> >
> > What I found seemed to imply that 'rep movsq' used the same internal
> > logic as 'rep movsb' (pretty easy to do in hardware)
>
> Christ.
>
> I told you. It's pretty easy in hardware AS LONG AS IT'S ALIGNED.
>
> And if it's unaligned, "rep movsq" is FUNDAMENTALLY HARDER.
For cached memory it only has to appear to have used 8 byte
accesses.
So, in the same way that 'rep movsb' could be optimised to do
cache line sized reads and writes even if the addresses are
completely misaligned, 'rep movsq' could use exactly the same
hardware logic with a byte count that is 8 times larger.
The only subtlety is that the read length would need masking
to a multiple of 8 if there is a page fault on a misaligned
read side (so that a multiple of 8 bytes would be written).
That wouldn't really be hard.
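Put another way, for normal write-back memory the claim is only that
rep_movsq(dst, src, n) must end up indistinguishable from
rep_movsb(dst, src, n * 8); the illustrative wrappers below (my own
naming) just make that concrete:

#include <stddef.h>

/* For ordinary cached memory: rep_movsq(d, s, n) ~ rep_movsb(d, s, n * 8),
 * so the same internal fast path could serve both. */
static inline void rep_movsq(void *dst, const void *src, size_t nquads)
{
	asm volatile("rep movsq"
		     : "+D" (dst), "+S" (src), "+c" (nquads)
		     :
		     : "memory");
}

static inline void rep_movsb(void *dst, const void *src, size_t nbytes)
{
	asm volatile("rep movsb"
		     : "+D" (dst), "+S" (src), "+c" (nbytes)
		     :
		     : "memory");
}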
I definitely saw exactly the same number of bytes/clock
for 'rep movsb' and 'rep movsq' when the destination was
misaligned.
The alignment made no difference except that a multiple
of 32 ran (about) twice as fast.
I even double-checked the disassembly to make sure I was
running the right code.
So it looks like the Intel hardware engineers have solved
the 'FUNDAMENTALLY HARDER' problem.
David