Date: Thu, 8 Oct 2020 11:45:01 +0100
From: Mark Rutland
To: Marco Elver
Cc: Alexander Potapenko, Will Deacon, Andrew Morton, "H. Peter Anvin",
	"Paul E. McKenney", Andrey Konovalov, Andrey Ryabinin,
	Andy Lutomirski, Borislav Petkov, Catalin Marinas,
	Christoph Lameter, Dave Hansen, David Rientjes, Dmitriy Vyukov,
	Eric Dumazet, Greg Kroah-Hartman, Hillf Danton, Ingo Molnar,
	Jann Horn, Jonathan Cameron, Jonathan Corbet, Joonsoo Kim,
	Kees Cook, Pekka Enberg, Peter Zijlstra, SeongJae Park,
	Thomas Gleixner, Vlastimil Babka, the arch/x86 maintainers,
	"open list:DOCUMENTATION", LKML, kasan-dev, Linux ARM,
	Linux Memory Management List
Subject: Re: [PATCH v3 03/10] arm64, kfence: enable KFENCE for ARM64
Message-ID: <20201008104501.GB72325@C02TD0UTHF1T.local>
References: <20200921132611.1700350-1-elver@google.com>
	<20200921132611.1700350-4-elver@google.com>
	<20200921143059.GO2139@willie-the-truck>
	<20200929140226.GB53442@C02TD0UTHF1T.local>
	<20201001175716.GA89689@C02TD0UTHF1T.local>

On Thu, Oct 08, 2020 at 11:40:52AM +0200, Marco Elver wrote:
> On Thu, 1 Oct 2020 at 19:58, Mark Rutland wrote:
> [...]
> > > > If you need virt_to_page() to work, the address has to be part of the
> > > > linear/direct map.
> [...]
> > What's the underlying requirement here? Is this a performance concern,
> > codegen/codesize, or something else?
>
> It used to be performance, since is_kfence_address() is used in the
> fast path. However, with some further tweaks we just did to
> is_kfence_address(), our benchmarks show a pointer load can be
> tolerated.

Great! I reckon that this is something we can optimize in future if
necessary (e.g. with some form of code-patching for immediate values),
but it's good to have a starting point that works everywhere!

[...]
> > I'm not too worried about allocating this dynamically, but:
> >
> > * The arch code needs to set up the translation tables for this, as we
> >   cannot safely change the mapping granularity live.
> >
> > * As above I'm fairly certain x86 needs to use a carveout from the
> >   linear map to function correctly anyhow, so we should follow the same
> >   approach for both arm64 and x86. That might be a static carveout that
> >   we figure out the aliasing for, or something entirely dynamic.
>
> We're going with dynamically allocating the pool (for both x86 and
> arm64), since any benefits we used to measure from the static pool are
> no longer measurable (after removing a branch from
> is_kfence_address()). It should hopefully simplify a lot of things,
> given all the caveats that you pointed out.
>
> For arm64, the only thing left then is to fix up the case if the
> linear map is not forced to page granularity.

The simplest way to do this is to modify arm64's arch_add_memory() to
force the entire linear map to be mapped at page granularity when KFENCE
is enabled, something like:

| diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
| index 936c4762dadff..f6eba0642a4a3 100644
| --- a/arch/arm64/mm/mmu.c
| +++ b/arch/arm64/mm/mmu.c
| @@ -1454,7 +1454,8 @@ int arch_add_memory(int nid, u64 start, u64 size,
|  {
|  	int ret, flags = 0;
|  
| -	if (rodata_full || debug_pagealloc_enabled())
| +	if (rodata_full || debug_pagealloc_enabled() ||
| +	    IS_ENABLED(CONFIG_KFENCE))
|  		flags = NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
|  
|  	__create_pgd_mapping(swapper_pg_dir, start, __phys_to_virt(start),

... and given that RODATA_FULL_DEFAULT_ENABLED is the default, I suspect
it's not worth trying to do that only for the KFENCE region unless
someone complains.

Thanks,
Mark.