Subject: Re: [PATCH] arm64: remove page granularity limitation from KFENCE
To: Alexander Potapenko
References: <20210918083849.2696287-1-liushixin2@huawei.com>
CC: Marco Elver, Dmitry Vyukov, Catalin Marinas, Will Deacon, kasan-dev,
 Linux ARM, LKML, Ard Biesheuvel, Mark Rutland
From: Liu Shixin
Message-ID: <0676448f-08f9-f498-5fb3-b88fd3810c58@huawei.com>
Date: Tue, 28 Sep 2021 15:03:21 +0800

On 2021/9/18 19:50, Alexander Potapenko wrote:
> On Sat, Sep 18, 2021 at 10:10 AM Liu Shixin wrote:
>> Currently if KFENCE is enabled on arm64, the entire linear map is
>> mapped at page granularity, which seems like overkill. Actually only
>> the kfence pool needs to be mapped at page granularity. We can remove
>> this restriction from KFENCE and force the linear mapping of the
>> kfence pool to page granularity later, in arch_kfence_init_pool().
> There was a previous patch by Jisheng Zhang intended to remove this
> requirement: https://lore.kernel.org/linux-arm-kernel/20210524180656.395e45f6@xhacker.debian/
> Which of the two is preferable?
Jisheng Zhang's previous patch guarantees that the kfence pool is mapped
at page granularity by allocating the pool before paging_init() and then
mapping it at page granularity during map_mem(). That patch has a
problem: even if KFENCE is disabled on the command line, the kfence pool
is still allocated, which wastes memory. (A rough sketch of that
early-allocation flow follows below.)

I'm sorry for sending this repeatedly; I have no idea how to limit the
email format to TEXT/PLAIN. Thanks.
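
For reference, the early-allocation approach works roughly like this (a
simplified sketch of the idea described above, not the exact code from
Jisheng's patch; the helper names arm64_kfence_alloc_pool() and
arm64_kfence_map_pool() are made up for illustration, and error handling
is omitted):

#include <linux/kfence.h>
#include <linux/memblock.h>

static phys_addr_t kfence_pool_phys __initdata;

/*
 * Runs before paging_init(): the pool is reserved unconditionally,
 * even if "kfence.sample_interval=0" later keeps KFENCE disabled.
 */
static void __init arm64_kfence_alloc_pool(void)
{
	kfence_pool_phys = memblock_phys_alloc(KFENCE_POOL_SIZE, PAGE_SIZE);
}

/*
 * Called from map_mem(): only the pool region is forced to page
 * granularity, so the rest of the linear map keeps block/cont mappings.
 * __map_memblock() here stands for the static helper of that name in
 * arch/arm64/mm/mmu.c.
 */
static void __init arm64_kfence_map_pool(pgd_t *pgdp)
{
	if (!kfence_pool_phys)
		return;
	__map_memblock(pgdp, kfence_pool_phys,
		       kfence_pool_phys + KFENCE_POOL_SIZE,
		       pgprot_tagged(PAGE_KERNEL),
		       NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS);
	__kfence_pool = phys_to_virt(kfence_pool_phys);
}

Either way the pool pages end up mapped with PTEs, which is what lets
kfence_protect_page() flip single pages; the difference is just when the
memory is committed.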

>> Signed-off-by: Liu Shixin
>> ---
>>  arch/arm64/include/asm/kfence.h | 69 ++++++++++++++++++++++++++++++++-
>>  arch/arm64/mm/mmu.c             |  4 +-
>>  2 files changed, 70 insertions(+), 3 deletions(-)
>>
>> diff --git a/arch/arm64/include/asm/kfence.h b/arch/arm64/include/asm/kfence.h
>> index aa855c6a0ae6..bee101eced0b 100644
>> --- a/arch/arm64/include/asm/kfence.h
>> +++ b/arch/arm64/include/asm/kfence.h
>> @@ -8,9 +8,76 @@
>>  #ifndef __ASM_KFENCE_H
>>  #define __ASM_KFENCE_H
>>
>> +#include <linux/kfence.h>
>>  #include <asm/set_memory.h>
>> +#include <asm/pgalloc.h>
>>
>> -static inline bool arch_kfence_init_pool(void) { return true; }
>> +static inline int split_pud_page(pud_t *pud, unsigned long addr)
>> +{
>> +	int i;
>> +	pmd_t *pmd = pmd_alloc_one(&init_mm, addr);
>> +	unsigned long pfn = PFN_DOWN(__pa(addr));
>> +
>> +	if (!pmd)
>> +		return -ENOMEM;
>> +
>> +	for (i = 0; i < PTRS_PER_PMD; i++)
>> +		set_pmd(pmd + i, pmd_mkhuge(pfn_pmd(pfn + i * PTRS_PER_PTE, PAGE_KERNEL)));
>> +
>> +	smp_wmb(); /* See comment in __pte_alloc */
>> +	pud_populate(&init_mm, pud, pmd);
>> +	flush_tlb_kernel_range(addr, addr + PUD_SIZE);
>> +	return 0;
>> +}
>> +
>> +static inline int split_pmd_page(pmd_t *pmd, unsigned long addr)
>> +{
>> +	int i;
>> +	pte_t *pte = pte_alloc_one_kernel(&init_mm);
>> +	unsigned long pfn = PFN_DOWN(__pa(addr));
>> +
>> +	if (!pte)
>> +		return -ENOMEM;
>> +
>> +	for (i = 0; i < PTRS_PER_PTE; i++)
>> +		set_pte(pte + i, pfn_pte(pfn + i, PAGE_KERNEL));
>> +
>> +	smp_wmb(); /* See comment in __pte_alloc */
>> +	pmd_populate_kernel(&init_mm, pmd, pte);
>> +
>> +	flush_tlb_kernel_range(addr, addr + PMD_SIZE);
>> +	return 0;
>> +}
>> +
>> +static inline bool arch_kfence_init_pool(void)
>> +{
>> +	unsigned long addr;
>> +	pgd_t *pgd;
>> +	p4d_t *p4d;
>> +	pud_t *pud;
>> +	pmd_t *pmd;
>> +
>> +	for (addr = (unsigned long)__kfence_pool; is_kfence_address((void *)addr);
>> +	     addr += PAGE_SIZE) {
>> +		pgd = pgd_offset(&init_mm, addr);
>> +		if (pgd_leaf(*pgd))
>> +			return false;
>> +		p4d = p4d_offset(pgd, addr);
>> +		if (p4d_leaf(*p4d))
>> +			return false;
>> +		pud = pud_offset(p4d, addr);
>> +		if (pud_leaf(*pud)) {
>> +			if (split_pud_page(pud, addr & PUD_MASK))
>> +				return false;
>> +		}
>> +		pmd = pmd_offset(pud, addr);
>> +		if (pmd_leaf(*pmd)) {
>> +			if (split_pmd_page(pmd, addr & PMD_MASK))
>> +				return false;
>> +		}
>> +	}
>> +	return true;
>> +}
>>
>>  static inline bool kfence_protect_page(unsigned long addr, bool protect)
>>  {
>> diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
>> index cfd9deb347c3..b2c79ccfb1c5 100644
>> --- a/arch/arm64/mm/mmu.c
>> +++ b/arch/arm64/mm/mmu.c
>> @@ -516,7 +516,7 @@ static void __init map_mem(pgd_t *pgdp)
>>  	 */
>>  	BUILD_BUG_ON(pgd_index(direct_map_end - 1) == pgd_index(direct_map_end));
>>
>> -	if (can_set_direct_map() || crash_mem_map || IS_ENABLED(CONFIG_KFENCE))
>> +	if (can_set_direct_map() || crash_mem_map)
>>  		flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
>>
>>  	/*
>> @@ -1485,7 +1485,7 @@ int arch_add_memory(int nid, u64 start, u64 size,
>>  	 * KFENCE requires linear map to be mapped at page granularity, so that
>>  	 * it is possible to protect/unprotect single pages in the KFENCE pool.
>>  	 */
>> -	if (can_set_direct_map() || IS_ENABLED(CONFIG_KFENCE))
>> +	if (can_set_direct_map())
>>  		flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
>>
>>  	__create_pgd_mapping(swapper_pg_dir, start, __phys_to_virt(start),
>> --
>> 2.18.0.huawei.25