Subject: Re: [kernel-hardening] Re: [PATCH v3] mm: Add SLUB free list pointer obfuscation
From: Daniel Micay
To: Christoph Lameter, Kees Cook
Cc: Andrew Morton, Pekka Enberg, David Rientjes, Joonsoo Kim, "Paul E. McKenney", Ingo Molnar, Josh Triplett, Andy Lutomirski, Nicolas Pitre, Tejun Heo, Daniel Mack, Sebastian Andrzej Siewior, Sergey Senozhatsky, Helge Deller, Rik van Riel, Linux-MM, Tycho Andersen, LKML, "kernel-hardening@lists.openwall.com"
Date: Thu, 06 Jul 2017 12:16:36 -0400
Message-ID: <1499357796.1428.2.camel@gmail.com>
References: <20170706002718.GA102852@beast>

On Thu, 2017-07-06 at 10:55 -0500, Christoph Lameter wrote:
> On Thu, 6 Jul 2017, Kees Cook wrote:
>
> > On Thu, Jul 6, 2017 at 6:43 AM, Christoph Lameter wrote:
> > > On Wed, 5 Jul 2017, Kees Cook wrote:
> > >
> > > > @@ -3536,6 +3565,9 @@ static int kmem_cache_open(struct kmem_cache *s, unsigned long flags)
> > > >  {
> > > >  	s->flags = kmem_cache_flags(s->size, flags, s->name, s->ctor);
> > > >  	s->reserved = 0;
> > > > +#ifdef CONFIG_SLAB_FREELIST_HARDENED
> > > > +	s->random = get_random_long();
> > > > +#endif
> > > >
> > > >  	if (need_reserve_slab_rcu && (s->flags & SLAB_TYPESAFE_BY_RCU))
> > > >  		s->reserved = sizeof(struct rcu_head);
> > >
> > > So if an attacker knows the internal structure of the data then he can simply
> > > dereference page->kmem_cache->random to decode the freepointer.
> >
> > That requires a series of arbitrary reads. This is protecting against
> > attacks that use an adjacent slab object write overflow to write the
> > freelist pointer. This internal structure is very reliable, and has
> > been the basis of freelist attacks against the kernel for a decade.
>
> These reads are not arbitrary. You can usually calculate the page
> struct address easily from the address and then do a couple of loads
> to get there.

You're describing an arbitrary read vulnerability: an attacker able to
read the value at an address of their choosing. Requiring a powerful
additional primitive, rather than only a small fixed-size overflow or a
weak use-after-free vulnerability, to use a common exploit vector is
useful.

A deterministic mitigation would be better, but I don't think an extra
slab allocator for hardened kernels would be welcomed. Since there isn't
a separate allocator for that niche, SLAB or SLUB is used. The ideal
would be bitmaps in `struct page`, but that implies another allocator,
using single pages for the smallest size classes, and potentially
needing to bloat `struct page` even then.

There's definitely a limit to the hardening that can be done for SLUB,
but unless forking it into a different allocator is welcome, this kind
of change is what will be suggested. Similarly, the slab freelist
randomization feature is a much weaker mitigation than it could be
without these constraints placed on it. This change is much lower
complexity than that and higher value, though.

> Ok so you get rid of the old attacks because we did not have that
> hardening in effect when they designed their approaches?
>
> > It is a probabilistic defense, but then so is the stack protector.
> > This is a similar defense; while not perfect it makes the class of
> > attack much more difficult to mount.
>
> Na I am not convinced of the "much more difficult".
> Maybe they will just have to upgrade their approaches to fetch the
> proper values to decode.

To fetch the values, they would need an arbitrary read vulnerability, or
the ability to dump them via uninitialized slab allocations, as an extra
requirement. An attacker can similarly bypass the stack canary by
reading it from stack frames via a stack buffer over-read or an
uninitialized variable leaking stack data. On non-x86, at least with
SMP, the stack canary is just a global variable that remains the same
after initialization, too. That doesn't make it useless, although the
kernel doesn't have many linear overflows on the stack, which is the
real issue with it as a mitigation. Despite that, most people are using
kernels with stack canaries, and those have a significant performance
cost, unlike these kinds of changes.