From: Kefeng Wang <wangkefeng.wang@huawei.com>
To: Russell King, Alexander Potapenko, Marco Elver, Dmitry Vyukov
Cc: Andrew Morton, Kefeng Wang
Subject: [PATCH 3/4] ARM: Support KFENCE for ARM
Date: Wed, 25 Aug 2021 17:21:15 +0800
Message-ID: <20210825092116.149975-4-wangkefeng.wang@huawei.com>
In-Reply-To: <20210825092116.149975-1-wangkefeng.wang@huawei.com>
References: <20210825092116.149975-1-wangkefeng.wang@huawei.com>

Add architecture-specific implementation details for KFENCE and enable
KFENCE on ARM. In particular, this implements the required interface in
<asm/kfence.h>.

KFENCE requires that attributes for pages from its memory pool can be
set individually. Therefore, force the kfence pool to be mapped at page
granularity.

Tested this patch with the test cases in kfence_test.c; all of them
pass both with and without ARM_LPAE.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 arch/arm/Kconfig              |  1 +
 arch/arm/include/asm/kfence.h | 52 +++++++++++++++++++++++++++++++++++
 arch/arm/mm/fault.c           |  9 ++++--
 3 files changed, 60 insertions(+), 2 deletions(-)
 create mode 100644 arch/arm/include/asm/kfence.h
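
Note for reviewers: to repeat the kfence_test.c run mentioned above, a
config fragment along the lines of the one below should be enough. The
KFENCE symbols and the 100 ms sample interval are taken from the generic
lib/Kconfig.kfence defaults rather than from this series, and the base
defconfig is only an example, so treat this as a sketch and not the
exact configuration that was used.

  # On top of e.g. multi_v7_defconfig; toggle CONFIG_ARM_LPAE to cover
  # both page-table formats, as mentioned in the commit message.
  CONFIG_KUNIT=y
  CONFIG_KFENCE=y
  CONFIG_KFENCE_SAMPLE_INTERVAL=100
  CONFIG_KFENCE_KUNIT_TEST=y
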
diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index 7a8059ff6bb0..3798f82a0c0d 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -73,6 +73,7 @@ config ARM
 	select HAVE_ARCH_AUDITSYSCALL if AEABI && !OABI_COMPAT
 	select HAVE_ARCH_BITREVERSE if (CPU_32v7M || CPU_32v7) && !CPU_32v6
 	select HAVE_ARCH_JUMP_LABEL if !XIP_KERNEL && !CPU_ENDIAN_BE32 && MMU
+	select HAVE_ARCH_KFENCE if MMU
 	select HAVE_ARCH_KGDB if !CPU_ENDIAN_BE32 && MMU
 	select HAVE_ARCH_KASAN if MMU && !XIP_KERNEL
 	select HAVE_ARCH_MMAP_RND_BITS if MMU
diff --git a/arch/arm/include/asm/kfence.h b/arch/arm/include/asm/kfence.h
new file mode 100644
index 000000000000..eae7a12ab2a9
--- /dev/null
+++ b/arch/arm/include/asm/kfence.h
@@ -0,0 +1,52 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef __ASM_ARM_KFENCE_H
+#define __ASM_ARM_KFENCE_H
+
+#include <linux/kfence.h>
+#include <asm/pgalloc.h>
+#include <asm/set_memory.h>
+
+static inline int split_pmd_page(pmd_t *pmd, unsigned long addr)
+{
+	int i;
+	unsigned long pfn = PFN_DOWN(__pa((addr & PMD_MASK)));
+	pte_t *pte = pte_alloc_one_kernel(&init_mm);
+
+	if (!pte)
+		return -ENOMEM;
+
+	for (i = 0; i < PTRS_PER_PTE; i++)
+		set_pte_ext(pte + i, pfn_pte(pfn + i, PAGE_KERNEL), 0);
+	pmd_populate_kernel(&init_mm, pmd, pte);
+
+	flush_tlb_kernel_range(addr, addr + PMD_SIZE);
+	return 0;
+}
+
+static inline bool arch_kfence_init_pool(void)
+{
+	unsigned long addr;
+	pmd_t *pmd;
+
+	for (addr = (unsigned long)__kfence_pool; is_kfence_address((void *)addr);
+	     addr += PAGE_SIZE) {
+		pmd = pmd_off_k(addr);
+
+		if (pmd_leaf(*pmd)) {
+			if (split_pmd_page(pmd, addr))
+				return false;
+		}
+	}
+
+	return true;
+}
+
+static inline bool kfence_protect_page(unsigned long addr, bool protect)
+{
+	set_memory_valid(addr, 1, !protect);
+
+	return true;
+}
+
+#endif /* __ASM_ARM_KFENCE_H */
diff --git a/arch/arm/mm/fault.c b/arch/arm/mm/fault.c
index f7ab6dabe89f..9fa221ffa1b9 100644
--- a/arch/arm/mm/fault.c
+++ b/arch/arm/mm/fault.c
@@ -17,6 +17,7 @@
 #include <linux/sched/debug.h>
 #include <linux/highmem.h>
 #include <linux/perf_event.h>
+#include <linux/kfence.h>
 
 #include <asm/system_misc.h>
 #include <asm/system_info.h>
@@ -131,10 +132,14 @@ __do_kernel_fault(struct mm_struct *mm, unsigned long addr, unsigned int fsr,
 	/*
 	 * No handler, we'll have to terminate things with extreme prejudice.
 	 */
-	if (addr < PAGE_SIZE)
+	if (addr < PAGE_SIZE) {
 		msg = "NULL pointer dereference";
-	else
+	} else {
+		if (kfence_handle_page_fault(addr, is_write_fault(fsr), regs))
+			return;
+
 		msg = "paging request";
+	}
 
 	die_kernel_fault(msg, mm, addr, fsr, regs);
 }
-- 
2.26.2