From: Liu Shixin <liushixin2@huawei.com>
To: Paul Walmsley, Palmer Dabbelt, Albert Ou, Alexander Potapenko, Marco Elver, Dmitry Vyukov
Subject: [PATCH RFC v2] riscv: Enable KFENCE for riscv64
Date: Fri, 14 May 2021 11:44:32 +0800
Message-ID: <20210514034432.2004082-1-liushixin2@huawei.com>
X-Mailing-List: linux-kernel@vger.kernel.org

Add architecture specific implementation details for KFENCE and enable
KFENCE for the riscv64 architecture. In particular, this implements the
required interface in <asm/kfence.h>.

KFENCE requires that attributes for pages from its memory pool can
individually be set.
Therefore, force the kfence pool to be mapped at page granularity.

I tested this patch using the testcases in kfence_test.c and all passed.

Signed-off-by: Liu Shixin <liushixin2@huawei.com>
---
v1->v2: Change kmalloc() to pte_alloc_one_kernel() for allocating the pte.

 arch/riscv/Kconfig              |  1 +
 arch/riscv/include/asm/kfence.h | 51 +++++++++++++++++++++++++++++++++
 arch/riscv/mm/fault.c           | 11 ++++++-
 3 files changed, 62 insertions(+), 1 deletion(-)
 create mode 100644 arch/riscv/include/asm/kfence.h

diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index c426e7d20907..000d8aba1030 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -64,6 +64,7 @@ config RISCV
 	select HAVE_ARCH_JUMP_LABEL_RELATIVE
 	select HAVE_ARCH_KASAN if MMU && 64BIT
 	select HAVE_ARCH_KASAN_VMALLOC if MMU && 64BIT
+	select HAVE_ARCH_KFENCE if MMU && 64BIT
 	select HAVE_ARCH_KGDB
 	select HAVE_ARCH_KGDB_QXFER_PKT
 	select HAVE_ARCH_MMAP_RND_BITS if MMU
diff --git a/arch/riscv/include/asm/kfence.h b/arch/riscv/include/asm/kfence.h
new file mode 100644
index 000000000000..c25d67e0b8ba
--- /dev/null
+++ b/arch/riscv/include/asm/kfence.h
@@ -0,0 +1,51 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef _ASM_RISCV_KFENCE_H
+#define _ASM_RISCV_KFENCE_H
+
+#include <linux/kfence.h>
+#include <linux/pfn.h>
+#include <asm-generic/pgalloc.h>
+#include <asm/pgtable.h>
+
+static inline bool arch_kfence_init_pool(void)
+{
+	int i;
+	unsigned long addr;
+	pte_t *pte;
+	pmd_t *pmd;
+
+	for (addr = (unsigned long)__kfence_pool; is_kfence_address((void *)addr);
+	     addr += PAGE_SIZE) {
+		pte = virt_to_kpte(addr);
+		pmd = pmd_off_k(addr);
+
+		if (!pmd_leaf(*pmd) && pte_present(*pte))
+			continue;
+
+		pte = pte_alloc_one_kernel(&init_mm);
+		for (i = 0; i < PTRS_PER_PTE; i++)
+			set_pte(pte + i, pfn_pte(PFN_DOWN(__pa((addr & PMD_MASK) + i * PAGE_SIZE)), PAGE_KERNEL));
+
+		set_pmd(pmd, pfn_pmd(PFN_DOWN(__pa(pte)), PAGE_TABLE));
+		flush_tlb_kernel_range(addr, addr + PMD_SIZE);
+	}
+
+	return true;
+}
+
+static inline bool kfence_protect_page(unsigned long addr, bool protect)
+{
+	pte_t *pte = virt_to_kpte(addr);
+
+	if (protect)
+		set_pte(pte, __pte(pte_val(*pte) & ~_PAGE_PRESENT));
+	else
+		set_pte(pte, __pte(pte_val(*pte) | _PAGE_PRESENT));
+
+	flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
+
+	return true;
+}
+
+#endif /* _ASM_RISCV_KFENCE_H */
diff --git a/arch/riscv/mm/fault.c b/arch/riscv/mm/fault.c
index 096463cc6fff..aa08dd2f8fae 100644
--- a/arch/riscv/mm/fault.c
+++ b/arch/riscv/mm/fault.c
@@ -14,6 +14,7 @@
 #include <linux/signal.h>
 #include <linux/uaccess.h>
 #include <linux/kprobes.h>
+#include <linux/kfence.h>
 
 #include <asm/ptrace.h>
 #include <asm/tlbflush.h>
@@ -45,7 +46,15 @@ static inline void no_context(struct pt_regs *regs, unsigned long addr)
 	 * Oops. The kernel tried to access some bad page. We'll have to
 	 * terminate things with extreme prejudice.
 	 */
-	msg = (addr < PAGE_SIZE) ? "NULL pointer dereference" : "paging request";
+	if (addr < PAGE_SIZE)
+		msg = "NULL pointer dereference";
+	else {
+		if (kfence_handle_page_fault(addr, regs->cause == EXC_STORE_PAGE_FAULT, regs))
+			return;
+
+		msg = "paging request";
+	}
+
 	die_kernel_fault(msg, addr, regs);
 }
-- 
2.18.0.huawei.25