From: Andrea Arcangeli <aarcange@redhat.com>
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: Hillf Danton, Dan Smith, Linus Torvalds, Andrew Morton,
    Thomas Gleixner, Ingo Molnar, Paul Turner, Suresh Siddha,
    Mike Galbraith, "Paul E. McKenney", Lai Jiangshan, Bharata B Rao,
    Lee Schermerhorn, Rik van Riel, Johannes Weiner,
    Srivatsa Vaddagiri, Christoph Lameter, Alex Shi,
    Mauricio Faria de Oliveira, Konrad Rzeszutek Wilk, Don Morris,
    Benjamin Herrenschmidt
Subject: [PATCH 04/36] autonuma: pte_numa() and pmd_numa()
Date: Wed, 22 Aug 2012 16:58:48 +0200
Message-Id: <1345647560-30387-5-git-send-email-aarcange@redhat.com>
In-Reply-To: <1345647560-30387-1-git-send-email-aarcange@redhat.com>
References: <1345647560-30387-1-git-send-email-aarcange@redhat.com>

Implement pte_numa() and pmd_numa().

We must atomically set the NUMA bit and clear the present bit to
define a pte_numa or pmd_numa. Once a pte or pmd has been set as
pte_numa or pmd_numa, the next time a thread touches a virtual
address in the corresponding virtual range, a NUMA hinting page fault
will trigger. The NUMA hinting page fault will clear the NUMA bit and
set the present bit again to resolve the page fault.

NUMA hinting page faults are used:

1) to fill in the per-thread NUMA statistics stored for each thread
   in its current->task_autonuma data structure

2) to track the per-node last_nid information in the page structure,
   to detect false sharing

3) to queue the page mapped by the pte_numa or pmd_numa for
   asynchronous migration if there have been enough NUMA hinting page
   faults on the page coming from remote CPUs

NUMA hinting page faults only collect information and possibly add
pages to the migration queues. They are extremely quick, strictly
non-blocking, and they do not allocate any memory.

The generic (no-op) implementation is used when CONFIG_AUTONUMA=n.
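To make the bit protocol easy to check in isolation, here is a
minimal userspace model of the transitions the new helpers implement.
This is an illustrative sketch only: the bit positions below are
invented (the real _PAGE_NUMA_PTE bit is defined elsewhere in this
series), and these functions mirror, not replace, the x86 inlines in
the diff.

/*
 * Userspace model of the pte_numa transitions; illustration only.
 * The bit positions are made up and do not match the kernel's.
 */
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

#define _PAGE_PRESENT	(1ULL << 0)	/* invented value */
#define _PAGE_ACCESSED	(1ULL << 5)	/* invented value */
#define _PAGE_NUMA_PTE	(1ULL << 62)	/* invented value */

typedef uint64_t pte_t;

static int pte_numa(pte_t pte)
{
	return (pte & (_PAGE_NUMA_PTE | _PAGE_PRESENT)) == _PAGE_NUMA_PTE;
}

static pte_t pte_mknuma(pte_t pte)
{
	return (pte | _PAGE_NUMA_PTE) & ~_PAGE_PRESENT;
}

static pte_t pte_mknonnuma(pte_t pte)
{
	return (pte & ~_PAGE_NUMA_PTE) | _PAGE_PRESENT | _PAGE_ACCESSED;
}

int main(void)
{
	pte_t pte = _PAGE_PRESENT;	/* an ordinary mapped pte */

	pte = pte_mknuma(pte);		/* arm the NUMA hinting fault */
	assert(pte_numa(pte));		/* NUMA bit set, present clear */

	pte = pte_mknonnuma(pte);	/* what the hinting fault does */
	assert(!pte_numa(pte));
	assert(pte & _PAGE_PRESENT);
	assert(pte & _PAGE_ACCESSED);	/* saves a later TLB-miss write */

	printf("pte_numa round trip OK\n");
	return 0;
}

Note how pte_mknonnuma also sets _PAGE_ACCESSED, matching the comment
above the mknonnuma helpers in the diff below.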
Acked-by: Rik van Riel
Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
---
 arch/x86/include/asm/pgtable.h | 65 +++++++++++++++++++++++++++++++++++++++-
 include/asm-generic/pgtable.h  | 12 ++++++++
 2 files changed, 75 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index b49e70d..bfe42aa 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -405,7 +405,8 @@ static inline int pte_same(pte_t a, pte_t b)
 
 static inline int pte_present(pte_t a)
 {
-	return pte_flags(a) & (_PAGE_PRESENT | _PAGE_PROTNONE);
+	return pte_flags(a) & (_PAGE_PRESENT | _PAGE_PROTNONE |
+			       _PAGE_NUMA_PTE);
 }
 
 static inline int pte_hidden(pte_t pte)
@@ -421,7 +422,63 @@ static inline int pmd_present(pmd_t pmd)
 	 * the _PAGE_PSE flag will remain set at all times while the
 	 * _PAGE_PRESENT bit is clear).
 	 */
-	return pmd_flags(pmd) & (_PAGE_PRESENT | _PAGE_PROTNONE | _PAGE_PSE);
+	return pmd_flags(pmd) & (_PAGE_PRESENT | _PAGE_PROTNONE | _PAGE_PSE |
+				 _PAGE_NUMA_PMD);
+}
+
+#ifdef CONFIG_AUTONUMA
+/*
+ * _PAGE_NUMA_PTE and _PAGE_NUMA_PMD work identically to
+ * _PAGE_PROTNONE: they are only ever set while _PAGE_PRESENT is
+ * clear.
+ *
+ * pte/pmd_present() returns true if pte/pmd_numa returns true. A
+ * page fault triggers on those regions when pte/pmd_numa returns
+ * true (because _PAGE_PRESENT is not set).
+ */
+static inline int pte_numa(pte_t pte)
+{
+	return (pte_flags(pte) &
+		(_PAGE_NUMA_PTE|_PAGE_PRESENT)) == _PAGE_NUMA_PTE;
+}
+
+static inline int pmd_numa(pmd_t pmd)
+{
+	return (pmd_flags(pmd) &
+		(_PAGE_NUMA_PMD|_PAGE_PRESENT)) == _PAGE_NUMA_PMD;
+}
+#endif
+
+/*
+ * pte/pmd_mknonnuma set the _PAGE_ACCESSED bitflag automatically
+ * because they're called by the NUMA hinting minor page fault. If we
+ * didn't set the _PAGE_ACCESSED bitflag here, the TLB miss handler
+ * would be forced to set it later while filling the TLB after we
+ * return to userland. That would trigger a second write to memory
+ * that we optimize away by setting _PAGE_ACCESSED here.
+ */
+static inline pte_t pte_mknonnuma(pte_t pte)
+{
+	pte = pte_clear_flags(pte, _PAGE_NUMA_PTE);
+	return pte_set_flags(pte, _PAGE_PRESENT|_PAGE_ACCESSED);
+}
+
+static inline pmd_t pmd_mknonnuma(pmd_t pmd)
+{
+	pmd = pmd_clear_flags(pmd, _PAGE_NUMA_PMD);
+	return pmd_set_flags(pmd, _PAGE_PRESENT|_PAGE_ACCESSED);
+}
+
+static inline pte_t pte_mknuma(pte_t pte)
+{
+	pte = pte_set_flags(pte, _PAGE_NUMA_PTE);
+	return pte_clear_flags(pte, _PAGE_PRESENT);
+}
+
+static inline pmd_t pmd_mknuma(pmd_t pmd)
+{
+	pmd = pmd_set_flags(pmd, _PAGE_NUMA_PMD);
+	return pmd_clear_flags(pmd, _PAGE_PRESENT);
 }
 
 static inline int pmd_none(pmd_t pmd)
@@ -480,6 +537,10 @@ static inline pte_t *pte_offset_kernel(pmd_t *pmd, unsigned long address)
 
 static inline int pmd_bad(pmd_t pmd)
 {
+#ifdef CONFIG_AUTONUMA
+	if (pmd_numa(pmd))
+		return 0;
+#endif
 	return (pmd_flags(pmd) & ~_PAGE_USER) != _KERNPG_TABLE;
 }
 
diff --git a/include/asm-generic/pgtable.h b/include/asm-generic/pgtable.h
index ff4947b..0ff87ec 100644
--- a/include/asm-generic/pgtable.h
+++ b/include/asm-generic/pgtable.h
@@ -530,6 +530,18 @@ static inline int pmd_trans_unstable(pmd_t *pmd)
 #endif
 }
 
+#ifndef CONFIG_AUTONUMA
+static inline int pte_numa(pte_t pte)
+{
+	return 0;
+}
+
+static inline int pmd_numa(pmd_t pmd)
+{
+	return 0;
+}
+#endif /* CONFIG_AUTONUMA */
+
 #endif /* CONFIG_MMU */
 
 #endif /* !__ASSEMBLY__ */
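For readers following along outside the kernel tree, here is a toy
model of how a NUMA hinting fault consumes these helpers: detect
pte_numa(), account the fault for the faulting node, and make the pte
present again (the last_nid tracking, point 2 of the commit message,
is omitted for brevity). Every name, the node count, and the
migration threshold are invented for illustration; the real fault
handler, statistics, and migration queues arrive in later patches of
this series.

/*
 * Toy userspace model of the NUMA hinting fault path.  All names,
 * bit values, and the threshold are invented; illustration only.
 */
#include <stdint.h>
#include <stdio.h>

#define _PAGE_PRESENT	(1ULL << 0)	/* invented value */
#define _PAGE_NUMA_PTE	(1ULL << 62)	/* invented value */
#define TOY_MAX_NODES	4
#define TOY_MIGRATE_THRESHOLD	3	/* invented policy knob */

typedef uint64_t pte_t;

struct toy_page {
	pte_t pte;
	int home_node;			/* node currently holding the page */
	int faults[TOY_MAX_NODES];	/* hinting faults seen per node */
};

static int pte_numa(pte_t pte)
{
	return (pte & (_PAGE_NUMA_PTE | _PAGE_PRESENT)) == _PAGE_NUMA_PTE;
}

static void toy_numa_hinting_fault(struct toy_page *page, int node)
{
	if (!pte_numa(page->pte))
		return;			/* not a hinting fault */

	/* 1) account the fault (stands in for task_autonuma stats) */
	page->faults[node]++;

	/* 3) after enough remote faults, queue for async migration */
	if (node != page->home_node &&
	    page->faults[node] >= TOY_MIGRATE_THRESHOLD)
		printf("would queue page for migration to node %d\n", node);

	/* resolve the fault: clear the NUMA bit, set present again */
	page->pte = (page->pte & ~_PAGE_NUMA_PTE) | _PAGE_PRESENT;
}

int main(void)
{
	struct toy_page page = { .pte = _PAGE_NUMA_PTE, .home_node = 0 };
	int i;

	for (i = 0; i < TOY_MIGRATE_THRESHOLD; i++) {
		/* elsewhere in the series a scanner re-arms the bit */
		page.pte = (page.pte | _PAGE_NUMA_PTE) & ~_PAGE_PRESENT;
		toy_numa_hinting_fault(&page, 1);  /* access from node 1 */
	}
	return 0;
}

The point the model makes is the ordering: the fault only collects
statistics and possibly queues the page, then restores the present
bit, so the same access never faults twice.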