Questions:
- Is there a reason that SLUB does not implement cache coloring?
It would help utilize the hardware cache. Especially in the block layer,
they are literally *squeezing* out performance now.
- In SLAB, do we really need to flush the queues every few seconds?
(the per-cpu queue and the shared queue). Flushing alien caches makes
sense, but flushing the queues seems to slow down the fastpath.
But yes, we need to reclaim memory. Can we just defer this?
Idea:
- I don't like SLAB's per-node cache coloring, because the L1 cache
isn't shared between cpus. For now, cpus in the same node share
their colour_next - but we can do better.
What about splitting some per-cpu variables into kmem_cache_cpu
like SLUB? I think cpu_cache, colour (and colour_next),
alloc{hit,miss}, and free{hit,miss} could be per-cpu variables.
On Sat, Oct 09, 2021 at 12:19:03AM +0000, Hyeonggon Yoo wrote:
> - Is there a reason that SLUB does not implement cache coloring?
> it will help utilizing hardware cache. Especially in block layer,
> they are literally *squeezing* its performance now.
Have you tried turning off cache colouring in SLAB and seeing if
performance changes? My impression is that it's useful for caches
with low associativity (direct mapped / 2-way / 4-way), but loses
its effectiveness for caches with higher associativity. For example,
my laptop:
L1 Data Cache: 48KB, 12-way associative, 64 byte line size
L1 Instruction Cache: 32KB, 8-way associative, 64 byte line size
L2 Unified Cache: 1280KB, 20-way associative, 64 byte line size
L3 Unified Cache: 12288KB, 12-way associative, 64 byte line size
I very much doubt that cache colouring is still useful for this machine.
On Sat, Oct 09, 2021 at 01:33:43AM +0100, Matthew Wilcox wrote:
> On Sat, Oct 09, 2021 at 12:19:03AM +0000, Hyeonggon Yoo wrote:
> > - Is there a reason that SLUB does not implement cache coloring?
> > it will help utilizing hardware cache. Especially in block layer,
> > they are literally *squeezing* its performance now.
>
> Have you tried turning off cache colouring in SLAB and seeing if
> performance changes? My impression is that it's useful for caches
> with low associativity (direct mapped / 2-way / 4-way), but loses
> its effectiveness for caches with higher associativity. For example,
> my laptop:
>
> L1 Data Cache: 48KB, 12-way associative, 64 byte line size
> L1 Instruction Cache: 32KB, 8-way associative, 64 byte line size
> L2 Unified Cache: 1280KB, 20-way associative, 64 byte line size
> L3 Unified Cache: 12288KB, 12-way associative, 64 byte line size
>
> I very much doubt that cache colouring is still useful for this machine.
Hello Matthew,
What benchmark did you use for the test?
-
Hyeonggon
On Sat, Oct 09, 2021 at 01:33:43AM +0100, Matthew Wilcox wrote:
> On Sat, Oct 09, 2021 at 12:19:03AM +0000, Hyeonggon Yoo wrote:
> > - Is there a reason that SLUB does not implement cache coloring?
> > it will help utilizing hardware cache. Especially in block layer,
> > they are literally *squeezing* its performance now.
>
> Have you tried turning off cache colouring in SLAB and seeing if
> performance changes? My impression is that it's useful for caches
> with low associativity (direct mapped / 2-way / 4-way), but loses
> its effectiveness for caches with higher associativity. For example,
> my laptop:
>
> L1 Data Cache: 48KB, 12-way associative, 64 byte line size
> L1 Instruction Cache: 32KB, 8-way associative, 64 byte line size
> L2 Unified Cache: 1280KB, 20-way associative, 64 byte line size
> L3 Unified Cache: 12288KB, 12-way associative, 64 byte line size
>
> I very much doubt that cache colouring is still useful for this machine.
On my machine,
L1 Data Cache: 32KB, 8-way associative, 64 byte line size
L1 Instruction Cache: 32KB, 8-way associative, 64 byte line size
L2 Unified Cache: 1MB, 16-way associative, 64 byte line size
L3 Unified Cache: 33MB, 11-way associative, 64 byte line size
I ran hackbench with per-node coloring, per-cpu coloring, and without
coloring:

hackbench -g 100 -l 200000
  without coloring:       2196.787
  with per-node coloring: 2193.607
  with per-cpu coloring:  2198.076

It seems there is almost no difference.
How much difference did you see on low-associativity processors?
Hmm... I'm going to look for a related paper.
On Sat, 9 Oct 2021, Hyeonggon Yoo wrote:
> - Is there a reason that SLUB does not implement cache coloring?
> it will help utilizing hardware cache. Especially in block layer,
> they are literally *squeezing* its performance now.
Well, as Matthew says: the high associativity of caches and the execution
of other code paths seem to make this no longer useful.
I am sure you can find a benchmark that shows some benefit. But please
realize that in real-life the OS must perform work. This means that
multiple other code paths are executed that affect cache use and placement
of data in cache lines.
> - In SLAB, do we really need to flush queues every few seconds?
> (per cpu queue and shared queue). Flushing alien caches makes
> sense, but flushing queues seems reducing it's fastpath.
> But yeah, we need to reclaim memory. can we just defer this?
The queues are designed to track cache hot objects (See the Bonwick
paper). After a while the cachelines will be used for other purposes and
no longer reflect what is in the caches. That is why they need to be
expired.
> - I don't like SLAB's per-node cache coloring, because L1 cache
> isn't shared between cpus. For now, cpus in same node are sharing
> its colour_next - but we can do better.
This differs based on the cpu architecture in use. SLAB has an ideal model
of how caches work and keeps objects cache hot based on that. In real life
the cpu architecture differs from how SLAB thinks caches operate.
> what about splitting some per-cpu variables into kmem_cache_cpu
> like SLUB? I think cpu_cache, colour (and colour_next),
> alloc{hit,miss}, and free{hit,miss} can be per-cpu variables.
That would in turn increase memory use and potentially the cache footprint
of the hot paths.
Hello Christoph, thank you for answering.
On Mon, Oct 11, 2021 at 09:13:52AM +0200, Christoph Lameter wrote:
> On Sat, 9 Oct 2021, Hyeonggon Yoo wrote:
>
> > - Is there a reason that SLUB does not implement cache coloring?
> > it will help utilizing hardware cache. Especially in block layer,
> > they are literally *squeezing* its performance now.
>
> Well as Matthew says: The high associativity of caches
It seems not useful on both of my machines (4-way / 8-way set associative) either.
> and the execution
> of other code path seems to make this not useful anymore.
>
> I am sure you can find a benchmark that shows some benefit. But please
> realize that in real-life the OS must perform work. This means that
> multiple other code paths are executed that affect cache use and placement
> of data in cache lines.
>
Cache coloring can make benchmark results better. But as slab uses more
cache lines, that leaves fewer cache lines for other code paths. Did I get that right?
>
> > - In SLAB, do we really need to flush queues every few seconds?
> > (per cpu queue and shared queue). Flushing alien caches makes
> > sense, but flushing queues seems reducing it's fastpath.
> > But yeah, we need to reclaim memory. can we just defer this?
>
> The queues are designed to track cache hot objects (See the Bonwick
> paper). After a while the cachelines will be used for other purposes and
> no longer reflect what is in the caches. That is why they need to be
> expired.
I've read the Bonwick paper but I thought expiring was needed for reclaiming
memory. Maybe I got it wrong... I should read it again.
>
>
> > - I don't like SLAB's per-node cache coloring, because L1 cache
> > isn't shared between cpus. For now, cpus in same node are sharing
> > its colour_next - but we can do better.
>
> This differs based on the cpu architecture in use. SLAB has an ideal model
> of how caches work and keeps objects cache hot based on that. In real life
> the cpu architecture differs from what SLAB things how caches operate.
>
So the point is: since the cache hierarchy differs between architectures,
assuming that cpus have both a unique per-cpu cache and a cache shared
among cpus can be a poor fit on some architectures.
> > what about splitting some per-cpu variables into kmem_cache_cpu
> > like SLUB? I think cpu_cache, colour (and colour_next),
> > alloc{hit,miss}, and free{hit,miss} can be per-cpu variables.
>
> That would in turn increase memory use and potentially the cache footprint
> of the hot paths.
>
I thought splitting the per-cpu data was needed for coloring, but coloring
itself isn't useful. So that's an unnecessary cost.
Thanks,
Hyeonggon.