From: Liu Shixin <liushixin2@huawei.com>
To: Paul Walmsley <paul.walmsley@sifive.com>, Palmer Dabbelt <palmer@dabbelt.com>,
	Albert Ou <aou@eecs.berkeley.edu>, Alexander Potapenko <glider@google.com>,
	Marco Elver <elver@google.com>, Dmitry Vyukov <dvyukov@google.com>
CC: <linux-riscv@lists.infradead.org>, <linux-kernel@vger.kernel.org>,
	<kasan-dev@googlegroups.com>, Liu Shixin <liushixin2@huawei.com>
Subject: [PATCH -next] riscv: Enable KFENCE for riscv64
Date: Sat, 29 May 2021 16:03:40 +0800
Message-ID: <20210529080340.2987212-1-liushixin2@huawei.com>

Add architecture specific implementation details for KFENCE and enable
KFENCE for the riscv64 architecture. In particular, this implements the
required interface in <asm/kfence.h>.

KFENCE requires that attributes for pages from its memory pool can be
set individually. Therefore, force the kfence pool to be mapped at page
granularity.

Tested this patch using the testcases in kfence_test.c; all of them
passed.

Signed-off-by: Liu Shixin <liushixin2@huawei.com>
---
1. Add a helper function split_pmd_page() which is used to split a pmd
   into ptes.
2. Check the result of pte_alloc_one_kernel() and bail out on
   allocation failure.

(Two short notes, one on the pool mapping granularity and one on the
test setup, follow the patch.)

 arch/riscv/Kconfig              |  1 +
 arch/riscv/include/asm/kfence.h | 63 +++++++++++++++++++++++++++++++++
 arch/riscv/mm/fault.c           | 11 +++++-
 3 files changed, 74 insertions(+), 1 deletion(-)
 create mode 100644 arch/riscv/include/asm/kfence.h

diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index 4982130064ef..2f4903a7730f 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -65,6 +65,7 @@ config RISCV
 	select HAVE_ARCH_JUMP_LABEL_RELATIVE
 	select HAVE_ARCH_KASAN if MMU && 64BIT
 	select HAVE_ARCH_KASAN_VMALLOC if MMU && 64BIT
+	select HAVE_ARCH_KFENCE if MMU && 64BIT
 	select HAVE_ARCH_KGDB
 	select HAVE_ARCH_KGDB_QXFER_PKT
 	select HAVE_ARCH_MMAP_RND_BITS if MMU
diff --git a/arch/riscv/include/asm/kfence.h b/arch/riscv/include/asm/kfence.h
new file mode 100644
index 000000000000..d887a54042aa
--- /dev/null
+++ b/arch/riscv/include/asm/kfence.h
@@ -0,0 +1,63 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef _ASM_RISCV_KFENCE_H
+#define _ASM_RISCV_KFENCE_H
+
+#include <linux/kfence.h>
+#include <linux/pfn.h>
+#include <asm-generic/pgalloc.h>
+#include <asm/pgtable.h>
+
+static inline int split_pmd_page(unsigned long addr)
+{
+	int i;
+	unsigned long pfn = PFN_DOWN(__pa((addr & PMD_MASK)));
+	pmd_t *pmd = pmd_off_k(addr);
+	pte_t *pte = pte_alloc_one_kernel(&init_mm);
+
+	if (!pte)
+		return -ENOMEM;
+
+	for (i = 0; i < PTRS_PER_PTE; i++)
+		set_pte(pte + i, pfn_pte(pfn + i, PAGE_KERNEL));
+	set_pmd(pmd, pfn_pmd(PFN_DOWN(__pa(pte)), PAGE_TABLE));
+
+	flush_tlb_kernel_range(addr, addr + PMD_SIZE);
+	return 0;
+}
+
+static inline bool arch_kfence_init_pool(void)
+{
+	int ret;
+	unsigned long addr;
+	pmd_t *pmd;
+
+	for (addr = (unsigned long)__kfence_pool; is_kfence_address((void *)addr);
+	     addr += PAGE_SIZE) {
+		pmd = pmd_off_k(addr);
+
+		if (pmd_leaf(*pmd)) {
+			ret = split_pmd_page(addr);
+			if (ret)
+				return false;
+		}
+	}
+
+	return true;
+}
+
+static inline bool kfence_protect_page(unsigned long addr, bool protect)
+{
+	pte_t *pte = virt_to_kpte(addr);
+
+	if (protect)
+		set_pte(pte, __pte(pte_val(*pte) & ~_PAGE_PRESENT));
+	else
+		set_pte(pte, __pte(pte_val(*pte) | _PAGE_PRESENT));
+
+	flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
+
+	return true;
+}
+
+#endif /* _ASM_RISCV_KFENCE_H */
diff --git a/arch/riscv/mm/fault.c b/arch/riscv/mm/fault.c
index 096463cc6fff..aa08dd2f8fae 100644
--- a/arch/riscv/mm/fault.c
+++ b/arch/riscv/mm/fault.c
@@ -14,6 +14,7 @@
 #include <linux/signal.h>
 #include <linux/uaccess.h>
 #include <linux/kprobes.h>
+#include <linux/kfence.h>
 
 #include <asm/ptrace.h>
 #include <asm/tlbflush.h>
@@ -45,7 +46,15 @@ static inline void no_context(struct pt_regs *regs, unsigned long addr)
 	 * Oops. The kernel tried to access some bad page. We'll have to
 	 * terminate things with extreme prejudice.
	 */
"NULL pointer dereference" : "paging request"; + if (addr < PAGE_SIZE) + msg = "NULL pointer dereference"; + else { + if (kfence_handle_page_fault(addr, regs->cause == EXC_STORE_PAGE_FAULT, regs)) + return; + + msg = "paging request"; + } + die_kernel_fault(msg, addr, regs); } -- 2.18.0.huawei.25