From: Tianchen Ding <dtcccc@linux.alibaba.com>
To: Alexander Potapenko, Marco Elver, Dmitry Vyukov, Andrew Morton
Cc: kasan-dev@googlegroups.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: [RFC PATCH 2/2] kfence: Alloc kfence_pool after system startup
Date: Thu, 3 Mar 2022 11:15:05 +0800
Message-Id: <20220303031505.28495-3-dtcccc@linux.alibaba.com>
In-Reply-To: <20220303031505.28495-1-dtcccc@linux.alibaba.com>
References: <20220303031505.28495-1-dtcccc@linux.alibaba.com>
X-Mailer: git-send-email 2.27.0

KFENCE is aimed at production environments, but it cannot be enabled after
system startup because kfence_pool can only be allocated from memblock at
boot time.

Consider the following production scenario: at first, for performance
reasons, production machines do not enable KFENCE. After running for a
while, the kernel is suspected of having memory errors (e.g., a sibling
machine crashed), so the other production machines need to enable KFENCE,
but rebooting them is impractical.

Allow enabling KFENCE after system startup by allocating its pool with the
page allocator, even if KFENCE was not enabled during booting.

Signed-off-by: Tianchen Ding <dtcccc@linux.alibaba.com>
---
This patch is similar to what the early version of KFENCE did on arm64.
Instead of alloc_pages(), we prefer alloc_contig_pages() so that we get
exactly the number of pages we need. I'm not sure about the impact of
dropping __ro_after_init; I've tested with hackbench and see no
performance regression, but are there any security concerns?
---
 mm/kfence/core.c | 96 ++++++++++++++++++++++++++++++++++++++----------
 1 file changed, 76 insertions(+), 20 deletions(-)

diff --git a/mm/kfence/core.c b/mm/kfence/core.c
index 19eb123c0bba..ae69b2a113a4 100644
--- a/mm/kfence/core.c
+++ b/mm/kfence/core.c
@@ -93,7 +93,7 @@ static unsigned long kfence_skip_covered_thresh __read_mostly = 75;
 module_param_named(skip_covered_thresh, kfence_skip_covered_thresh, ulong, 0644);
 
 /* The pool of pages used for guard pages and objects. */
-char *__kfence_pool __ro_after_init;
+char *__kfence_pool __read_mostly;
 EXPORT_SYMBOL(__kfence_pool); /* Export for test modules. */
 
 /*
@@ -534,17 +534,18 @@ static void rcu_guarded_free(struct rcu_head *h)
 	kfence_guarded_free((void *)meta->addr, meta, false);
 }
 
-static bool __init kfence_init_pool(void)
+/*
+ * The main part of initializing the kfence pool.
+ * Return 0 on success, otherwise the address where an error occurred.
+ */
+static unsigned long __kfence_init_pool(void)
 {
 	unsigned long addr = (unsigned long)__kfence_pool;
 	struct page *pages;
 	int i;
 
-	if (!__kfence_pool)
-		return false;
-
 	if (!arch_kfence_init_pool())
-		goto err;
+		return addr;
 
 	pages = virt_to_page(addr);
 
@@ -562,7 +563,7 @@ static bool __init kfence_init_pool(void)
 		/* Verify we do not have a compound head page. */
 		if (WARN_ON(compound_head(&pages[i]) != &pages[i]))
-			goto err;
+			return addr;
 
 		__SetPageSlab(&pages[i]);
 	}
@@ -575,7 +576,7 @@
 	 */
 	for (i = 0; i < 2; i++) {
 		if (unlikely(!kfence_protect(addr)))
-			goto err;
+			return addr;
 
 		addr += PAGE_SIZE;
 	}
@@ -592,7 +593,7 @@
 		/* Protect the right redzone. */
 		if (unlikely(!kfence_protect(addr + PAGE_SIZE)))
-			goto err;
+			return addr;
 
 		addr += 2 * PAGE_SIZE;
 	}
@@ -605,9 +606,21 @@
 	 */
 	kmemleak_free(__kfence_pool);
 
-	return true;
+	return 0;
+}
+
+static bool __init kfence_init_pool(void)
+{
+	unsigned long addr;
+
+	if (!__kfence_pool)
+		return false;
+
+	addr = __kfence_init_pool();
+
+	if (!addr)
+		return true;
 
-err:
 	/*
 	 * Only release unprotected pages, and do not try to go back and change
 	 * page attributes due to risk of failing to do so as well. If changing
@@ -620,6 +633,22 @@ static bool __init kfence_init_pool(void)
 	return false;
 }
 
+static bool kfence_init_pool_late(void)
+{
+	unsigned long addr, free_pages;
+
+	addr = __kfence_init_pool();
+
+	if (!addr)
+		return true;
+
+	/* Same as above. */
+	free_pages = (KFENCE_POOL_SIZE - (addr - (unsigned long)__kfence_pool)) / PAGE_SIZE;
+	free_contig_range(page_to_pfn(virt_to_page(addr)), free_pages);
+	__kfence_pool = NULL;
+	return false;
+}
+
 /* === DebugFS Interface ==================================================== */
 
 static int stats_show(struct seq_file *seq, void *v)
@@ -768,31 +797,58 @@ void __init kfence_alloc_pool(void)
 		pr_err("failed to allocate pool\n");
 }
 
+static inline void __kfence_init(void)
+{
+	if (!IS_ENABLED(CONFIG_KFENCE_STATIC_KEYS))
+		static_branch_enable(&kfence_allocation_key);
+	WRITE_ONCE(kfence_enabled, true);
+	queue_delayed_work(system_unbound_wq, &kfence_timer, 0);
+	pr_info("initialized - using %lu bytes for %d objects at 0x%p-0x%p\n", KFENCE_POOL_SIZE,
+		CONFIG_KFENCE_NUM_OBJECTS, (void *)__kfence_pool,
+		(void *)(__kfence_pool + KFENCE_POOL_SIZE));
+}
+
 void __init kfence_init(void)
 {
+	stack_hash_seed = (u32)random_get_entropy();
+
 	/* Setting kfence_sample_interval to 0 on boot disables KFENCE. */
 	if (!kfence_sample_interval)
 		return;
 
-	stack_hash_seed = (u32)random_get_entropy();
 	if (!kfence_init_pool()) {
 		pr_err("%s failed\n", __func__);
 		return;
 	}
 
-	if (!IS_ENABLED(CONFIG_KFENCE_STATIC_KEYS))
-		static_branch_enable(&kfence_allocation_key);
-	WRITE_ONCE(kfence_enabled, true);
-	queue_delayed_work(system_unbound_wq, &kfence_timer, 0);
-	pr_info("initialized - using %lu bytes for %d objects at 0x%p-0x%p\n", KFENCE_POOL_SIZE,
-		CONFIG_KFENCE_NUM_OBJECTS, (void *)__kfence_pool,
-		(void *)(__kfence_pool + KFENCE_POOL_SIZE));
+	__kfence_init();
+}
+
+static int kfence_init_late(void)
+{
+	struct page *pages;
+	const unsigned long nr_pages = KFENCE_POOL_SIZE / PAGE_SIZE;
+
+	pages = alloc_contig_pages(nr_pages, GFP_KERNEL, first_online_node, NULL);
+
+	if (!pages)
+		return -ENOMEM;
+
+	__kfence_pool = page_to_virt(pages);
+
+	if (!kfence_init_pool_late()) {
+		pr_err("%s failed\n", __func__);
+		return -EBUSY;
+	}
+
+	__kfence_init();
+	return 0;
 }
 
 static int kfence_enable_late(void)
 {
 	if (!__kfence_pool)
-		return -EINVAL;
+		return kfence_init_late();
 
 	WRITE_ONCE(kfence_enabled, true);
 	queue_delayed_work(system_unbound_wq, &kfence_timer, 0);
-- 
2.27.0