Date: Tue, 29 Sep 2020 15:24:11 +0100
From: Mark Rutland
To: Marco Elver
Cc: akpm@linux-foundation.org, glider@google.com, hpa@zytor.com,
 paulmck@kernel.org, andreyknvl@google.com, aryabinin@virtuozzo.com,
 luto@kernel.org, bp@alien8.de, catalin.marinas@arm.com, cl@linux.com,
 dave.hansen@linux.intel.com, rientjes@google.com, dvyukov@google.com,
 edumazet@google.com, gregkh@linuxfoundation.org, hdanton@sina.com,
 mingo@redhat.com, jannh@google.com, Jonathan.Cameron@huawei.com,
 corbet@lwn.net, iamjoonsoo.kim@lge.com, keescook@chromium.org,
 penberg@kernel.org, peterz@infradead.org, sjpark@amazon.com,
 tglx@linutronix.de, vbabka@suse.cz, will@kernel.org, x86@kernel.org,
 linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
 kasan-dev@googlegroups.com, linux-arm-kernel@lists.infradead.org,
 linux-mm@kvack.org
Subject: Re: [PATCH v3 01/10]
 mm: add Kernel Electric-Fence infrastructure
Message-ID: <20200929142411.GC53442@C02TD0UTHF1T.local>
References: <20200921132611.1700350-1-elver@google.com>
 <20200921132611.1700350-2-elver@google.com>
In-Reply-To: <20200921132611.1700350-2-elver@google.com>
List-ID: <linux-kernel.vger.kernel.org>

On Mon, Sep 21, 2020 at 03:26:02PM +0200, Marco Elver wrote:
> From: Alexander Potapenko
>
> This adds the Kernel Electric-Fence (KFENCE) infrastructure. KFENCE is a
> low-overhead sampling-based memory safety error detector of heap
> use-after-free, invalid-free, and out-of-bounds access errors.
>
> KFENCE is designed to be enabled in production kernels, and has near
> zero performance overhead. Compared to KASAN, KFENCE trades performance
> for precision. The main motivation behind KFENCE's design is that with
> enough total uptime KFENCE will detect bugs in code paths not typically
> exercised by non-production test workloads. One way to quickly achieve a
> large enough total uptime is when the tool is deployed across a large
> fleet of machines.
>
> KFENCE objects each reside on a dedicated page, at either the left or
> right page boundary. The pages to the left and right of the object
> page are "guard pages", whose attributes are changed to a protected
> state, and cause page faults on any attempted access to them. Such page
> faults are then intercepted by KFENCE, which handles the fault
> gracefully by reporting a memory access error. To detect out-of-bounds
> writes to memory within the object's page itself, KFENCE also uses
> pattern-based redzones.
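[Editorial aside: the pattern-based redzone technique quoted above can be illustrated with a standalone userspace sketch. The pattern function and helper names below are invented for the example and are not KFENCE's actual implementation; KFENCE's real checks live in kernel code.]

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical pattern: a simple function of the byte's offset, so a
 * corrupted byte also reveals *where* an out-of-bounds write landed. */
static uint8_t redzone_byte(size_t offset)
{
	return (uint8_t)(0xaa ^ offset);
}

/* Fill the redzone bytes adjacent to an object with the pattern. */
static void redzone_fill(uint8_t *zone, size_t len)
{
	for (size_t i = 0; i < len; i++)
		zone[i] = redzone_byte(i);
}

/* On free (or on a fault), verify the pattern; return the offset of the
 * first corrupted byte, or -1 if the redzone is intact. */
static long redzone_check(const uint8_t *zone, size_t len)
{
	for (size_t i = 0; i < len; i++)
		if (zone[i] != redzone_byte(i))
			return (long)i;
	return -1;
}
```

Unlike the guard pages, which trap on access immediately, a pattern redzone only detects the corruption after the fact (at check time), but it works for small overflows that stay within the object's own page.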
> The following figure illustrates the page layout:
>
> ---+-----------+-----------+-----------+-----------+-----------+---
>    | xxxxxxxxx | O :       | xxxxxxxxx |       : O | xxxxxxxxx |
>    | xxxxxxxxx | B :       | xxxxxxxxx |       : B | xxxxxxxxx |
>    | x GUARD x | J : RED-  | x GUARD x |  RED- : J | x GUARD x |
>    | xxxxxxxxx | E :  ZONE | xxxxxxxxx | ZONE  : E | xxxxxxxxx |
>    | xxxxxxxxx | C :       | xxxxxxxxx |       : C | xxxxxxxxx |
>    | xxxxxxxxx | T :       | xxxxxxxxx |       : T | xxxxxxxxx |
> ---+-----------+-----------+-----------+-----------+-----------+---
>
> Guarded allocations are set up based on a sample interval (can be set
> via kfence.sample_interval). After expiration of the sample interval, a
> guarded allocation from the KFENCE object pool is returned to the main
> allocator (SLAB or SLUB). At this point, the timer is reset, and the
> next allocation is set up after the expiration of the interval.

From other sub-threads it sounds like these addresses are not part of
the linear/direct map. Having kmalloc return addresses outside of the
linear map is going to break anything that relies on virt<->phys
conversions, and is liable to make DMA corrupt memory. There were
problems of that sort with VMAP_STACK, and this is why kvmalloc() is
separate from kmalloc().

Have you tested with CONFIG_DEBUG_VIRTUAL? I'd expect that to scream.

I strongly suspect this isn't going to be safe unless you always use an
in-place carveout from the linear map (which could be the linear alias
of a static carveout).

[...]

> +static __always_inline void *kfence_alloc(struct kmem_cache *s, size_t size, gfp_t flags)
> +{
> +	return static_branch_unlikely(&kfence_allocation_key) ? __kfence_alloc(s, size, flags) :
> +			NULL;
> +}

Minor (unrelated) nit, but this would be easier to read as:

	static __always_inline void *kfence_alloc(struct kmem_cache *s, size_t size, gfp_t flags)
	{
		if (static_branch_unlikely(&kfence_allocation_key))
			return __kfence_alloc(s, size, flags);
		return NULL;
	}

Thanks,
Mark.
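[Editorial aside: the linear-map concern above can be made concrete with a toy userspace model. The constants and function names below are illustrative only; the real kernel's `virt_to_phys()`, `PAGE_OFFSET`, and CONFIG_DEBUG_VIRTUAL checks are architecture-specific.]

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative layout, not real kernel values: the "linear map" is the
 * region [TOY_PAGE_OFFSET, TOY_PAGE_OFFSET + TOY_LINEAR_SIZE), mapping
 * physical memory at a fixed offset. */
#define TOY_PAGE_OFFSET 0xffff800000000000UL
#define TOY_LINEAR_SIZE 0x0000400000000000UL
#define TOY_VMALLOC_VA  0xffffc00000000000UL /* outside the linear map */

/* A linear-map-only conversion assumes va = pa + TOY_PAGE_OFFSET, so it
 * blindly subtracts the offset. For any address outside the linear map
 * the result is garbage, which is what a driver would then hand to a
 * DMA engine. */
static uint64_t toy_virt_to_phys(uint64_t va)
{
	return va - TOY_PAGE_OFFSET;
}

/* Roughly what a CONFIG_DEBUG_VIRTUAL-style sanity check verifies
 * before trusting the conversion. */
static bool toy_virt_addr_valid(uint64_t va)
{
	return va >= TOY_PAGE_OFFSET && va < TOY_PAGE_OFFSET + TOY_LINEAR_SIZE;
}
```

In this model a linear-map address round-trips exactly, while `TOY_VMALLOC_VA` fails the validity check and its "converted" physical address is meaningless; this is the failure mode that bit VMAP_STACK and that keeps kvmalloc() separate from kmalloc().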