Hello,
I was just browsing over the latest bk tree when I saw the following change:
--- a/include/asm-generic/tlb.h Thu Aug 29 13:27:24 2002
+++ b/include/asm-generic/tlb.h Mon Sep 9 14:58:18 2002
@@ -21,7 +21,7 @@
* and page free order so much..
*/
#ifdef CONFIG_SMP
- #define FREE_PTE_NR 507
+ #define FREE_PTE_NR 506
#define tlb_fast_mode(tlb) ((tlb)->nr == ~0U)
#else
#define FREE_PTE_NR 1
@@ -40,8 +40,6 @@
unsigned int fullmm; /* non-zero means full mm flush */
unsigned long freed;
struct page * pages[FREE_PTE_NR];
- unsigned long flushes;/* stats: count avoided flushes */
- unsigned long avoided_flushes;
} mmu_gather_t;
/* Users of the generic TLB shootdown code must declare this storage
space. */
@@ -67,17 +65,10 @@
static inline void tlb_flush_mmu(mmu_gather_t *tlb, unsigned long start, unsigned long end)
{
- unsigned long nr;
-
- if (!tlb->need_flush) {
- tlb->avoided_flushes++;
+ if (!tlb->need_flush)
return;
- }
tlb->need_flush = 0;
- tlb->flushes++;
-
tlb_flush(tlb);
- nr = tlb->nr;
if (!tlb_fast_mode(tlb)) {
free_pages_and_swap_cache(tlb->pages, tlb->nr);
tlb->nr = 0;
Why was that done? I'm actually about to conduct some tests where I
think that I need this information to check the L1 <-> L2 caching size
influence on kernel data structures. What is the problem with the
existing counters, did I miss some discussion on LKML?
Best regards,
Roberto Nibali, ratz
--
echo '[q]sa[ln0=aln256%Pln256/snlbx]sb3135071790101768542287578439snlbxq'|dc
Roberto Nibali <[email protected]> writes:
> Why was that done? I'm actually about to conduct some tests where I
> think that I need this information to check the L1 <-> L2 caching size
> influence on kernel data structures. What is the problem with the
> existing counters, did I miss some discussion on LKML?
You can easily get the same information from the CPU performance counters
(e.g. via oprofile)
-Andi
> You can easily get the same information from the CPU performance counters
> (e.g. via oprofile)
Thanks for the pointer Andi, I should have thought of oprofile before.
You wouldn't happen to know the counter event equivalent to a
tlb_flush_mmu() for a PIII by any chance, would you? :). I've checked
op_help and only found ITLB_MISS. I've also looked at the L2_*-related
CPU counters but can't find a TLB flush counter.
I'm reading through Appendix A of the IA-32 Architecture Vol 3 manual
(it's actually very interesting), but I haven't found it either so far.
Do I have to check for the INVLPG instructions?
Best regards,
Roberto Nibali, ratz
Roberto Nibali wrote:
>
> Hello,
>
> I was just browsing over the latest bk tree when I saw the following change:
>
> ...
> - unsigned long flushes;/* stats: count avoided flushes */
> - unsigned long avoided_flushes;
That was some statistical/debug code to evaluate how useful
that particular optimisation was. Answer: it saves
30-35% of the global TLB invalidations coming out of there.
But it had served its purpose.
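For reference, the removed counters amounted to something like the
following userspace sketch (not the kernel code; account_flush and
saved_percent are hypothetical names here, illustrating how the 30-35%
figure falls out of the two counters):

```c
/* Hypothetical stand-in for the removed statistics: each trip through
 * a tlb_flush_mmu()-style path either performs a flush or skips one,
 * and the ratio of the two counters gives the fraction saved. */
struct flush_stats {
	unsigned long flushes;          /* flushes actually performed */
	unsigned long avoided_flushes;  /* flushes skipped (need_flush == 0) */
};

static void account_flush(struct flush_stats *s, int need_flush)
{
	if (!need_flush)
		s->avoided_flushes++;   /* the optimisation kicked in */
	else
		s->flushes++;           /* a real TLB invalidation happened */
}

/* Percentage of potential invalidations that were avoided. */
static unsigned long saved_percent(const struct flush_stats *s)
{
	unsigned long total = s->flushes + s->avoided_flushes;

	return total ? (100 * s->avoided_flushes) / total : 0;
}
```

With numbers in the ballpark quoted above (roughly one skipped flush
for every two performed), avoided/(avoided+performed) lands in the
30-35% range.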