From: Jann Horn
Date: Fri, 2 Oct 2020 07:45:20 +0200
Subject: Re: [PATCH v4 02/11] x86, kfence: enable KFENCE for x86
To: Marco Elver
Cc: Andrew Morton, Alexander Potapenko, "H. Peter Anvin",
	"Paul E. McKenney", Andrey Konovalov, Andrey Ryabinin,
	Andy Lutomirski, Borislav Petkov, Catalin Marinas,
	Christoph Lameter, Dave Hansen, David Rientjes, Dmitry Vyukov,
	Eric Dumazet, Greg Kroah-Hartman, Hillf Danton, Ingo Molnar,
	Jonathan.Cameron@huawei.com, Jonathan Corbet, Joonsoo Kim,
	Kees Cook, Mark Rutland, Pekka Enberg, Peter Zijlstra,
	sjpark@amazon.com, Thomas Gleixner, Vlastimil Babka,
	Will Deacon, the arch/x86 maintainers, linux-doc@vger.kernel.org,
	kernel list, kasan-dev, Linux ARM, Linux-MM
In-Reply-To: <20200929133814.2834621-3-elver@google.com>

On Tue, Sep 29, 2020 at 3:38 PM Marco Elver <elver@google.com> wrote:
> Add architecture specific implementation details for KFENCE and enable
> KFENCE for the x86 architecture. In particular, this implements the
> required interface in <asm/kfence.h> for setting up the pool and
> providing helper functions for protecting and unprotecting pages.
>
> For x86, we need to ensure that the pool uses 4K pages, which is done
> using the set_memory_4k() helper function.
[...]
> diff --git a/arch/x86/include/asm/kfence.h b/arch/x86/include/asm/kfence.h
[...]
> +/* Protect the given page and flush TLBs. */
> +static inline bool kfence_protect_page(unsigned long addr, bool protect)
> +{
> +	unsigned int level;
> +	pte_t *pte = lookup_address(addr, &level);
> +
> +	if (!pte || level != PG_LEVEL_4K)

Do we actually expect this to happen, or is this just a "robustness"
check? If we don't expect this to happen, there should be a WARN_ON()
around the condition (rough sketch below, after the quoted hunk).

> +		return false;
> +
> +	if (protect)
> +		set_pte(pte, __pte(pte_val(*pte) & ~_PAGE_PRESENT));
> +	else
> +		set_pte(pte, __pte(pte_val(*pte) | _PAGE_PRESENT));

Hmm... do we have this helper (instead of using the existing helpers
for modifying memory permissions) to work around the allocation out
of the data section? (See the second sketch below for the kind of
existing helpers I mean.)

> +	flush_tlb_one_kernel(addr);
> +	return true;
> +}
> +
> +#endif /* _ASM_X86_KFENCE_H */
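To make the WARN_ON() suggestion concrete, something like this is what
I have in mind (untested sketch):

	/* Warn instead of failing silently if the lookup goes wrong. */
	if (WARN_ON(!pte || level != PG_LEVEL_4K))
		return false;

WARN_ON() returns the value of the condition, so this keeps the early
bail-out while leaving a stack trace if the KFENCE pool ever ends up
covered by a non-4K mapping.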
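And to clarify the second question: by "existing helpers" I'm thinking
of something like the generic direct-map API (purely illustrative and
untested; whether those helpers are actually usable for a pool that
lives in the kernel's data section, and from the contexts KFENCE calls
this in, is exactly what I'm asking about):

	/* Toggle the mapping via the common direct-map interface ... */
	if (protect)
		set_direct_map_invalid_noflush(virt_to_page(addr));
	else
		set_direct_map_default_noflush(virt_to_page(addr));
	/* ... the _noflush variants leave the TLB flush to the caller. */
	flush_tlb_one_kernel(addr);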
> diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
[...]
> @@ -701,6 +702,9 @@ no_context(struct pt_regs *regs, unsigned long error_code,
> 	}
> #endif
>
> +	if (kfence_handle_page_fault(address))
> +		return;
> +
> 	/*
> 	 * 32-bit:
> 	 *

The standard 5 lines of diff context don't really make it obvious
what's going on here. Here's a diff with more context:

	/*
	 * Stack overflow? During boot, we can fault near the initial
	 * stack in the direct map, but that's not an overflow -- check
	 * that we're in vmalloc space to avoid this.
	 */
	if (is_vmalloc_addr((void *)address) &&
	    (((unsigned long)tsk->stack - 1 - address < PAGE_SIZE) ||
	     address - ((unsigned long)tsk->stack + THREAD_SIZE) < PAGE_SIZE)) {
		unsigned long stack = __this_cpu_ist_top_va(DF) - sizeof(void *);

		/*
		 * We're likely to be running with very little stack space
		 * left. It's plausible that we'd hit this condition but
		 * double-fault even before we get this far, in which case
		 * we're fine: the double-fault handler will deal with it.
		 *
		 * We don't want to make it all the way into the oops code
		 * and then double-fault, though, because we're likely to
		 * break the console driver and lose most of the stack dump.
		 */
		asm volatile ("movq %[stack], %%rsp\n\t"
			      "call handle_stack_overflow\n\t"
			      "1: jmp 1b"
			      : ASM_CALL_CONSTRAINT
			      : "D" ("kernel stack overflow (page fault)"),
				"S" (regs), "d" (address),
				[stack] "rm" (stack));
		unreachable();
	}
#endif

+	if (kfence_handle_page_fault(address))
+		return;
+
	/*
	 * 32-bit:
	 *
	 * Valid to do another page fault here, because if this fault
	 * had been triggered by is_prefetch fixup_exception would have
	 * handled it.
	 *
	 * 64-bit:
	 *
	 * Hall of shame of CPU/BIOS bugs.
	 */
	if (is_prefetch(regs, error_code, address))
		return;

	if (is_errata93(regs, address))
		return;

	/*
	 * Buggy firmware could access regions which might page fault, try to
	 * recover from such faults.
	 */
	if (IS_ENABLED(CONFIG_EFI))
		efi_recover_from_page_fault(address);

oops:
	/*
	 * Oops. The kernel tried to access some bad page. We'll have to
	 * terminate things with extreme prejudice:
	 */
	flags = oops_begin();

Shouldn't kfence_handle_page_fault() happen after prefetch handling,
at least? Maybe directly above the "oops" label? Sketched below.
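To illustrate the placement I'm suggesting (untested, just moving the
hunk from the quoted patch further down):

	if (IS_ENABLED(CONFIG_EFI))
		efi_recover_from_page_fault(address);

+	if (kfence_handle_page_fault(address))
+		return;
+
 oops:
	/*
	 * Oops. The kernel tried to access some bad page. We'll have to
	 * terminate things with extreme prejudice:
	 */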