From: Alexandre Ghiti <alex@ghiti.fr>
To: Paul Walmsley, Palmer Dabbelt, Albert Ou, Nylon Chen, Nick Hu,
    Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
    linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org,
    kasan-dev@googlegroups.com
Cc: Alexandre Ghiti, Palmer Dabbelt
Subject: [PATCH v3 2/2] riscv: Cleanup KASAN_VMALLOC support
Date: Sat, 13 Mar 2021 03:45:05 -0500
Message-Id: <20210313084505.16132-3-alex@ghiti.fr>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20210313084505.16132-1-alex@ghiti.fr>
References: <20210313084505.16132-1-alex@ghiti.fr>

When the KASAN vmalloc region is populated, there is no userspace
process and the page table in use is swapper_pg_dir, so there is no
need to read SATP.
We can then use the same scheme as the kasan_populate_p*d() functions
to walk the page table, which harmonizes the code.

In addition, use set_pgd(), which writes through all the unused page
table levels, contrary to the p*d_populate() functions: this makes the
function work whatever the number of page table levels.

Signed-off-by: Alexandre Ghiti <alex@ghiti.fr>
Reviewed-by: Palmer Dabbelt
---
 arch/riscv/mm/kasan_init.c | 59 ++++++++++++--------------------------
 1 file changed, 18 insertions(+), 41 deletions(-)

diff --git a/arch/riscv/mm/kasan_init.c b/arch/riscv/mm/kasan_init.c
index 57bf4ae09361..c16178918239 100644
--- a/arch/riscv/mm/kasan_init.c
+++ b/arch/riscv/mm/kasan_init.c
@@ -11,18 +11,6 @@
 #include
 #include
 
-static __init void *early_alloc(size_t size, int node)
-{
-	void *ptr = memblock_alloc_try_nid(size, size,
-		__pa(MAX_DMA_ADDRESS), MEMBLOCK_ALLOC_ACCESSIBLE, node);
-
-	if (!ptr)
-		panic("%pS: Failed to allocate %zu bytes align=%zx nid=%d from=%llx\n",
-			__func__, size, size, node, (u64)__pa(MAX_DMA_ADDRESS));
-
-	return ptr;
-}
-
 extern pgd_t early_pg_dir[PTRS_PER_PGD];
 asmlinkage void __init kasan_early_init(void)
 {
@@ -155,38 +143,27 @@ static void __init kasan_populate(void *start, void *end)
 	memset(start, KASAN_SHADOW_INIT, end - start);
 }
 
-void __init kasan_shallow_populate(void *start, void *end)
+static void __init kasan_shallow_populate_pgd(unsigned long vaddr, unsigned long end)
 {
-	unsigned long vaddr = (unsigned long)start & PAGE_MASK;
-	unsigned long vend = PAGE_ALIGN((unsigned long)end);
-	unsigned long pfn;
-	int index;
+	unsigned long next;
 	void *p;
-	pud_t *pud_dir, *pud_k;
-	pgd_t *pgd_dir, *pgd_k;
-	p4d_t *p4d_dir, *p4d_k;
-
-	while (vaddr < vend) {
-		index = pgd_index(vaddr);
-		pfn = csr_read(CSR_SATP) & SATP_PPN;
-		pgd_dir = (pgd_t *)pfn_to_virt(pfn) + index;
-		pgd_k = init_mm.pgd + index;
-		pgd_dir = pgd_offset_k(vaddr);
-		set_pgd(pgd_dir, *pgd_k);
-
-		p4d_dir = p4d_offset(pgd_dir, vaddr);
-		p4d_k = p4d_offset(pgd_k, vaddr);
-
-		vaddr = (vaddr + PUD_SIZE) & PUD_MASK;
-		pud_dir = pud_offset(p4d_dir, vaddr);
-		pud_k = pud_offset(p4d_k, vaddr);
-
-		if (pud_present(*pud_dir)) {
-			p = early_alloc(PAGE_SIZE, NUMA_NO_NODE);
-			pud_populate(&init_mm, pud_dir, p);
+	pgd_t *pgd_k = pgd_offset_k(vaddr);
+
+	do {
+		next = pgd_addr_end(vaddr, end);
+		if (pgd_page_vaddr(*pgd_k) == (unsigned long)lm_alias(kasan_early_shadow_pmd)) {
+			p = memblock_alloc(PAGE_SIZE, PAGE_SIZE);
+			set_pgd(pgd_k, pfn_pgd(PFN_DOWN(__pa(p)), PAGE_TABLE));
 		}
-		vaddr += PAGE_SIZE;
-	}
+	} while (pgd_k++, vaddr = next, vaddr != end);
+}
+
+static void __init kasan_shallow_populate(void *start, void *end)
+{
+	unsigned long vaddr = (unsigned long)start & PAGE_MASK;
+	unsigned long vend = PAGE_ALIGN((unsigned long)end);
+
+	kasan_shallow_populate_pgd(vaddr, vend);
 
 	local_flush_tlb_all();
 }
-- 
2.20.1