Date: Tue, 11 Jul 2017 22:05:54 +0300
From: "Kirill A. Shutemov"
To: Andrey Ryabinin
Cc: "Kirill A. Shutemov", Andy Lutomirski, Dmitry Vyukov, Alexander Potapenko,
 Linus Torvalds, Andrew Morton, "x86@kernel.org", Thomas Gleixner,
 Ingo Molnar, "H. Peter Anvin", Andi Kleen, Dave Hansen, linux-arch,
 "linux-mm@kvack.org", LKML, kasan-dev
Subject: Re: KASAN vs. boot-time switching between 4- and 5-level paging
Message-ID: <20170711190554.zxkpjeg2bt65wtir@black.fi.intel.com>
References: <20170710184704.realchrhzpblqqlk@node.shutemov.name>
 <20170710212403.7ycczkhhki3vrgac@node.shutemov.name>
 <20170711103548.mkv5w7dd5gpdenne@node.shutemov.name>
 <20170711170332.wlaudicepkg35dmm@node.shutemov.name>
User-Agent: NeoMutt/20161126 (1.7.0)

> > Can I use your Signed-off-by for a [cleaned up version of your] patch?
> 
> Sure.

Another KASAN-related issue: dumping page tables for the KASAN shadow memory
region takes an unreasonable amount of time because the kasan_zero_p?? tables
are mapped there. The patch below helps. Any objections?

diff --git a/arch/x86/mm/dump_pagetables.c b/arch/x86/mm/dump_pagetables.c
index b371ab68f2d4..8601153c34e7 100644
--- a/arch/x86/mm/dump_pagetables.c
+++ b/arch/x86/mm/dump_pagetables.c
@@ -17,8 +17,8 @@
 #include <linux/init.h>
 #include <linux/sched.h>
 #include <linux/seq_file.h>
+#include <linux/kasan.h>
 
-#include <asm/kasan.h>
 #include <asm/pgtable.h>
 
 /*
@@ -291,10 +291,15 @@ static void note_page(struct seq_file *m, struct pg_state *st,
 static void walk_pte_level(struct seq_file *m, struct pg_state *st, pmd_t addr,
							   unsigned long P)
 {
	int i;
+	unsigned long pte_addr;
	pte_t *start;
	pgprotval_t prot;
-	start = (pte_t *)pmd_page_vaddr(addr);
+	pte_addr = pmd_page_vaddr(addr);
+	if (__pa(pte_addr) == __pa(kasan_zero_pte))
+		return;
+
+	start = (pte_t *)pte_addr;
	for (i = 0; i < PTRS_PER_PTE; i++) {
		prot = pte_flags(*start);
		st->current_address = normalize_addr(P + i * PTE_LEVEL_MULT);
@@ -308,10 +313,15 @@ static void walk_pte_level(struct seq_file *m, struct pg_state *st, pmd_t addr,
 static void walk_pmd_level(struct seq_file *m, struct pg_state *st, pud_t addr,
							   unsigned long P)
 {
	int i;
+	unsigned long pmd_addr;
	pmd_t *start;
	pgprotval_t prot;
-	start = (pmd_t *)pud_page_vaddr(addr);
+	pmd_addr = pud_page_vaddr(addr);
+	if (__pa(pmd_addr) == __pa(kasan_zero_pmd))
+		return;
+
+	start = (pmd_t *)pmd_addr;
	for (i = 0; i < PTRS_PER_PMD; i++) {
		st->current_address = normalize_addr(P + i * PMD_LEVEL_MULT);
		if (!pmd_none(*start)) {
@@ -350,12 +360,16 @@ static bool pud_already_checked(pud_t *prev_pud, pud_t *pud, bool checkwx)
 static void walk_pud_level(struct seq_file *m, struct pg_state *st, p4d_t addr,
							   unsigned long P)
 {
	int i;
+	unsigned long pud_addr;
	pud_t *start;
	pgprotval_t prot;
	pud_t *prev_pud = NULL;
 
-	start = (pud_t *)p4d_page_vaddr(addr);
+	pud_addr = p4d_page_vaddr(addr);
+	if (__pa(pud_addr) == __pa(kasan_zero_pud))
+		return;
+	start = (pud_t *)pud_addr;
	for (i = 0; i < PTRS_PER_PUD; i++) {
		st->current_address = normalize_addr(P + i * PUD_LEVEL_MULT);
		if (!pud_none(*start) &&
@@ -386,11 +400,15 @@ static void walk_pud_level(struct seq_file *m, struct pg_state *st, p4d_t addr,
 static void walk_p4d_level(struct seq_file *m, struct pg_state *st, pgd_t addr,
							   unsigned long P)
 {
	int i;
+	unsigned long p4d_addr;
	p4d_t *start;
	pgprotval_t prot;
 
-	start = (p4d_t *)pgd_page_vaddr(addr);
+	p4d_addr = pgd_page_vaddr(addr);
+	if (__pa(p4d_addr) == __pa(kasan_zero_p4d))
+		return;
+	start = (p4d_t *)p4d_addr;
	for (i = 0; i < PTRS_PER_P4D; i++) {
		st->current_address = normalize_addr(P + i * P4D_LEVEL_MULT);
		if (!p4d_none(*start)) {
-- 
 Kirill A. Shutemov
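
For context, the slowdown comes from the KASAN shadow region mapping terabytes
of address space through the same few zero page tables, so the dumper re-walks
identical entries over and over; comparing the physical address of the table
about to be walked against the kasan_zero_p?? tables lets each level bail out
early. The four open-coded checks in the patch could plausibly be collapsed
into one guarded helper. The sketch below is illustrative only and is not part
of the patch above: kasan_shadow_page_table() is a made-up name, while
kasan_zero_{pte,pmd,pud,p4d}, __pa() and phys_addr_t are the existing kernel
symbols the patch already relies on. The !CONFIG_KASAN stub is there because
the zero tables are only declared when KASAN is enabled.

#include <linux/kasan.h>	/* kasan_zero_{pte,pmd,pud,p4d} when CONFIG_KASAN=y */
#include <asm/page.h>		/* __pa() */

#ifdef CONFIG_KASAN
/*
 * Return true if this page-table page is one of the shared KASAN zero
 * tables; the whole shadow region is mapped through them, so walking
 * them more than once only burns time.
 */
static inline bool kasan_shadow_page_table(unsigned long table_vaddr)
{
	phys_addr_t pa = __pa(table_vaddr);

	return pa == __pa(kasan_zero_pte) ||
	       pa == __pa(kasan_zero_pmd) ||
	       pa == __pa(kasan_zero_pud) ||
	       pa == __pa(kasan_zero_p4d);
}
#else
/* Without KASAN there are no shared zero tables to skip. */
static inline bool kasan_shadow_page_table(unsigned long table_vaddr)
{
	return false;
}
#endif

Each walk_*_level() would then reduce to a single
"if (kasan_shadow_page_table(pmd_addr)) return;" style check, and
non-KASAN configurations would still build, since the stub never
references the zero tables.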