From: Marco Elver
Date: Wed, 7 Oct 2020 16:41:25 +0200
Subject: Re: [PATCH v4 02/11] x86, kfence: enable KFENCE for x86
To: Jann Horn
Cc: Andrew Morton, Alexander Potapenko, "H. Peter Anvin", "Paul E. McKenney",
    Andrey Konovalov, Andrey Ryabinin, Andy Lutomirski, Borislav Petkov,
    Catalin Marinas, Christoph Lameter, Dave Hansen, David Rientjes,
    Dmitry Vyukov, Eric Dumazet, Greg Kroah-Hartman, Hillf Danton,
    Ingo Molnar, Jonathan Cameron, Jonathan Corbet, Joonsoo Kim, Kees Cook,
    Mark Rutland, Pekka Enberg, Peter Zijlstra, SeongJae Park,
    Thomas Gleixner, Vlastimil Babka, Will Deacon,
    the arch/x86 maintainers, "open list:DOCUMENTATION", kernel list,
    kasan-dev, Linux ARM, Linux-MM
References: <20200929133814.2834621-1-elver@google.com> <20200929133814.2834621-3-elver@google.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On Wed, 7 Oct 2020 at 16:15, Jann Horn wrote:
> On Wed, Oct 7, 2020 at 3:09 PM Marco Elver wrote:
> > On Fri, 2 Oct 2020 at 07:45, Jann Horn wrote:
> > > On Tue, Sep 29, 2020 at 3:38 PM Marco Elver wrote:
> > > > Add architecture specific implementation details for KFENCE and enable
> > > > KFENCE for the x86 architecture.
> > > > In particular, this implements the
> > > > required interface in <asm/kfence.h> for setting up the pool and
> > > > providing helper functions for protecting and unprotecting pages.
> > > >
> > > > For x86, we need to ensure that the pool uses 4K pages, which is done
> > > > using the set_memory_4k() helper function.
> > > [...]
> > > > diff --git a/arch/x86/include/asm/kfence.h b/arch/x86/include/asm/kfence.h
> > > [...]
> > > > +/* Protect the given page and flush TLBs. */
> > > > +static inline bool kfence_protect_page(unsigned long addr, bool protect)
> > > > +{
> > > > +	unsigned int level;
> > > > +	pte_t *pte = lookup_address(addr, &level);
> > > > +
> > > > +	if (!pte || level != PG_LEVEL_4K)
> > >
> > > Do we actually expect this to happen, or is this just a "robustness"
> > > check? If we don't expect this to happen, there should be a WARN_ON()
> > > around the condition.
> >
> > It's not obvious here, but we already have this covered with a WARN:
> > the core.c code has a KFENCE_WARN_ON, which disables KFENCE on a
> > warning.
>
> So for this specific branch: Can it ever happen? If not, please either
> remove it or add WARN_ON(). That serves two functions: It ensures that
> if something unexpected happens, we see a warning, and it hints to
> people reading the code "this isn't actually expected to happen, you
> don't have to wrack your brain trying to figure out for which scenario
> this branch is intended".

Perhaps I could have been clearer: we already have this returning false
covered by a WARN and disabling of KFENCE in core.c. We'll add another
WARN_ON right here, as it doesn't hurt, and hopefully improves
readability.

> > > > +		return false;
> > > > +
> > > > +	if (protect)
> > > > +		set_pte(pte, __pte(pte_val(*pte) & ~_PAGE_PRESENT));
> > > > +	else
> > > > +		set_pte(pte, __pte(pte_val(*pte) | _PAGE_PRESENT));
> > >
> > > Hmm...
> > > do we have this helper (instead of using the existing helpers
> > > for modifying memory permissions) to work around the allocation out of
> > > the data section?
> >
> > I just played around with using the set_memory.c functions, to remind
> > myself why this didn't work. I experimented with using the
> > set_memory_{np,p}() functions; set_memory_p() isn't implemented, but
> > is easily added (which I did for the below experiment). However, this
> > didn't quite work:
> [...]
> > For one, smp_call_function_many_cond() doesn't want to be called with
> > interrupts disabled, and we may very well get a KFENCE allocation or
> > page fault with interrupts disabled / within interrupts.
> >
> > Therefore, to be safe, we should avoid IPIs.
>
> set_direct_map_invalid_noflush() does that, too, I think? And that's
> already implemented for both arm64 and x86.

Sure, that works. We still want the flush_tlb_one_kernel(), at least so
the local CPU's TLB is flushed.

> > It follows that setting
> > the page attribute is best-effort, and we can tolerate some
> > inaccuracy. Lazy fault handling should take care of faults after we
> > set the page as PRESENT.
> [...]
> > > Shouldn't kfence_handle_page_fault() happen after prefetch handling,
> > > at least? Maybe directly above the "oops" label?
> >
> > Good question. AFAIK it doesn't matter, as is_kfence_address() should
> > never apply for any of those that follow, right? In any case, it
> > shouldn't hurt to move it down.
>
> is_prefetch() ignores any #PF not caused by instruction fetch if it
> comes from kernel mode and the faulting instruction is one of the
> PREFETCH* instructions. (Which is not supposed to happen - the
> processor should just be ignoring the fault for PREFETCH instead of
> generating an exception AFAIK. But the comments say that this is about
> CPU bugs and stuff.)
> While this is probably not a big deal anymore,
> partly because the kernel doesn't use software prefetching in many
> places anymore, it seems to me like, in principle, this could also
> cause page faults that should be ignored in KFENCE regions if someone
> tries to do PREFETCH on an out-of-bounds array element or a dangling
> pointer or something.

Thanks for the clarification.