Subject: Re: [PATCH] mm: Add SLUB free list pointer obfuscation
From: Laura Abbott
To: Kees Cook, Christoph Lameter
Cc: Daniel Micay, Pekka Enberg, David Rientjes, Joonsoo Kim,
 Andrew Morton, "Paul E. McKenney", Ingo Molnar, Andy Lutomirski,
 Nicolas Pitre, Tejun Heo, Daniel Mack, Sebastian Andrzej Siewior,
 Sergey Senozhatsky, Helge Deller, Rik van Riel, linux-mm@kvack.org,
 linux-kernel@vger.kernel.org, kernel-hardening@lists.openwall.com
Date: Tue, 20 Jun 2017 11:05:17 -0700

On 06/19/2017 08:01 PM, Kees Cook wrote:
> This SLUB free list pointer obfuscation code is modified from Brad
> Spengler/PaX Team's code in the last public patch of grsecurity/PaX based
> on my understanding of the code. Changes or omissions from the original
> code are mine and don't reflect the original grsecurity/PaX code.
> 
> This adds a per-cache random value to SLUB caches that is XORed with
> their freelist pointers. This adds nearly zero overhead and frustrates the
> very common heap overflow exploitation method of overwriting freelist
> pointers. A recent example of the attack is written up here:
> http://cyseclabs.com/blog/cve-2016-6187-heap-off-by-one-exploit
> 
> This is based on patches by Daniel Micay, and refactored to avoid lots
> of #ifdef code.
> 
> Suggested-by: Daniel Micay
> Signed-off-by: Kees Cook
> ---
>  include/linux/slub_def.h |  4 ++++
>  init/Kconfig             | 10 ++++++++++
>  mm/slub.c                | 32 +++++++++++++++++++++++++++-----
>  3 files changed, 41 insertions(+), 5 deletions(-)
> 
> diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
> index 07ef550c6627..0258d6d74e9c 100644
> --- a/include/linux/slub_def.h
> +++ b/include/linux/slub_def.h
> @@ -93,6 +93,10 @@ struct kmem_cache {
>  #endif
>  #endif
>  
> +#ifdef CONFIG_SLAB_HARDENED
> +	unsigned long random;
> +#endif
> +
>  #ifdef CONFIG_NUMA
>  	/*
>  	 * Defragmentation by allocating from a remote node.
> diff --git a/init/Kconfig b/init/Kconfig
> index 1d3475fc9496..eb91082546bf 100644
> --- a/init/Kconfig
> +++ b/init/Kconfig
> @@ -1900,6 +1900,16 @@ config SLAB_FREELIST_RANDOM
>  	  security feature reduces the predictability of the kernel slab
>  	  allocator against heap overflows.
>  
> +config SLAB_HARDENED
> +	bool "Harden slab cache infrastructure"
> +	default y
> +	depends on SLAB_FREELIST_RANDOM && SLUB
> +	help
> +	  Many kernel heap attacks try to target slab cache metadata and
> +	  other infrastructure. This option makes minor performance
> +	  sacrifices to harden the kernel slab allocator against common
> +	  exploit methods.
> +

Going to bikeshed on SLAB_HARDENED unless this is intended to be used
for more things. Perhaps SLAB_FREELIST_HARDENED?

What's the reason for the dependency on SLAB_FREELIST_RANDOM?
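As a side note for anyone following along: the core of the scheme is
just that XOR with a per-cache secret is cheap and self-inverse. Here's
a minimal userspace sketch of the idea (all names and the constant
secret are invented for illustration, and it assumes a 64-bit build;
the kernel seeds s->random from get_random_long()). The actual patch
also XORs in the pointer's own storage address; see the note after the
rest of the diff below.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Stand-in for the per-cache secret (s->random in the patch). */
static const uintptr_t cache_random = 0xdeadbeefcafe1234UL;

/* XOR on store, XOR again on load: XOR is its own inverse. */
static void *fp_encode(void *ptr)
{
	return (void *)((uintptr_t)ptr ^ cache_random);
}

static void *fp_decode(void *stored)
{
	return (void *)((uintptr_t)stored ^ cache_random);
}

int main(void)
{
	void *real = malloc(32);
	void *stored = fp_encode(real);

	/* A legitimate load recovers the original pointer... */
	printf("round trip ok: %d\n", fp_decode(stored) == real);

	/*
	 * ...but an attacker who overwrites the stored value with a
	 * chosen address (the classic freelist-overwrite technique)
	 * has it XORed with a secret they don't know, so the victim
	 * allocation lands on a wild pointer instead of their target.
	 */
	void *overwrite = (void *)0x4141414141414141UL;
	printf("attacker value decodes to %p\n", fp_decode(overwrite));

	free(real);
	return 0;
}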
>  config SLUB_CPU_PARTIAL
>  	default y
>  	depends on SLUB && SMP
> diff --git a/mm/slub.c b/mm/slub.c
> index 57e5156f02be..ffede2e0c5c1 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -34,6 +34,7 @@
>  #include <linux/stacktrace.h>
>  #include <linux/prefetch.h>
>  #include <linux/memcontrol.h>
> +#include <linux/random.h>
>  
>  #include <trace/events/kmem.h>
>  
> @@ -238,30 +239,50 @@ static inline void stat(const struct kmem_cache *s, enum stat_item si)
>   * Core slab cache functions
>   *******************************************************************/
>  
> +#ifdef CONFIG_SLAB_HARDENED
> +# define initialize_random(s)				\
> +	do {						\
> +		s->random = get_random_long();		\
> +	} while (0)
> +# define FREEPTR_VAL(ptr, ptr_addr, s)			\
> +	(void *)((unsigned long)(ptr) ^ s->random ^ (ptr_addr))
> +#else
> +# define initialize_random(s) do { } while (0)
> +# define FREEPTR_VAL(ptr, addr, s) ((void *)(ptr))
> +#endif
> +#define FREELIST_ENTRY(ptr_addr, s)			\
> +	FREEPTR_VAL(*(unsigned long *)(ptr_addr),	\
> +		    (unsigned long)ptr_addr, s)
> +
>  static inline void *get_freepointer(struct kmem_cache *s, void *object)
>  {
> -	return *(void **)(object + s->offset);
> +	return FREELIST_ENTRY(object + s->offset, s);
>  }
>  
>  static void prefetch_freepointer(const struct kmem_cache *s, void *object)
>  {
> -	prefetch(object + s->offset);
> +	if (object)
> +		prefetch(FREELIST_ENTRY(object + s->offset, s));
>  }
>  
>  static inline void *get_freepointer_safe(struct kmem_cache *s, void *object)
>  {
> +	unsigned long freepointer_addr;
>  	void *p;
>  
>  	if (!debug_pagealloc_enabled())
>  		return get_freepointer(s, object);
>  
> -	probe_kernel_read(&p, (void **)(object + s->offset), sizeof(p));
> -	return p;
> +	freepointer_addr = (unsigned long)object + s->offset;
> +	probe_kernel_read(&p, (void **)freepointer_addr, sizeof(p));
> +	return FREEPTR_VAL(p, freepointer_addr, s);
>  }
>  
>  static inline void set_freepointer(struct kmem_cache *s, void *object, void *fp)
>  {
> -	*(void **)(object + s->offset) = fp;
> +	unsigned long freeptr_addr = (unsigned long)object + s->offset;
> +
> +	*(void **)freeptr_addr = FREEPTR_VAL(fp, freeptr_addr, s);
>  }
>  
>  /* Loop over all objects in a slab */
> @@ -3536,6 +3557,7 @@ static int kmem_cache_open(struct kmem_cache *s, unsigned long flags)
>  {
>  	s->flags = kmem_cache_flags(s->size, flags, s->name, s->ctor);
>  	s->reserved = 0;
> +	initialize_random(s);
>  
>  	if (need_reserve_slab_rcu && (s->flags & SLAB_TYPESAFE_BY_RCU))
>  		s->reserved = sizeof(struct rcu_head);
> 
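One detail that's easy to miss in FREEPTR_VAL() above: the stored value
is ptr ^ s->random ^ ptr_addr, i.e. the storage address is mixed in as
well, so the same next-pointer encodes differently at every freelist
location. A small userspace sketch of that property (again with an
invented secret; encode()/decode() are illustrative names, not kernel
API):

#include <stdint.h>
#include <stdio.h>

/* Invented stand-in for the per-cache secret (s->random). */
static const uintptr_t secret = 0x5a5aa5a55a5aa5a5UL;

/* Mirror of the patch's FREEPTR_VAL(): ptr ^ secret ^ ptr_addr. */
static uintptr_t encode(uintptr_t ptr, uintptr_t ptr_addr)
{
	return ptr ^ secret ^ ptr_addr;
}

/* XOR is self-inverse, so decoding applies the same transform. */
static uintptr_t decode(uintptr_t stored, uintptr_t ptr_addr)
{
	return stored ^ secret ^ ptr_addr;
}

int main(void)
{
	uintptr_t slots[2];
	uintptr_t next = 0x1000;	/* pretend next-free object */

	/* Store the same pointer at two different addresses... */
	slots[0] = encode(next, (uintptr_t)&slots[0]);
	slots[1] = encode(next, (uintptr_t)&slots[1]);

	/* ...the in-memory representations differ... */
	printf("representations differ: %d\n", slots[0] != slots[1]);

	/* ...yet both decode back to the real pointer. */
	printf("decode ok: %d %d\n",
	       decode(slots[0], (uintptr_t)&slots[0]) == next,
	       decode(slots[1], (uintptr_t)&slots[1]) == next);
	return 0;
}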