Date: Fri, 28 Apr 2017 08:16:38 +0200
From: Michal Hocko
To: Kees Cook
Cc: Christoph Lameter, Andrew Morton, Pekka Enberg, David Rientjes, Joonsoo Kim, Linux-MM, LKML
Subject: Re: [PATCH] mm: Add additional consistency check
Message-ID: <20170428061637.GB8143@dhcp22.suse.cz>
References: <20170404151600.GN15132@dhcp22.suse.cz> <20170404194220.GT15132@dhcp22.suse.cz> <20170404201334.GV15132@dhcp22.suse.cz> <20170411134618.GN6729@dhcp22.suse.cz> <20170411141956.GP6729@dhcp22.suse.cz>

On Thu 27-04-17 18:11:28, Kees Cook wrote:
> On Tue, Apr 11, 2017 at 7:19 AM, Michal Hocko wrote:
> > I would do something like...
> > ---
> > diff --git a/mm/slab.c b/mm/slab.c
> > index bd63450a9b16..87c99a5e9e18 100644
> > --- a/mm/slab.c
> > +++ b/mm/slab.c
> > @@ -393,10 +393,15 @@ static inline void set_store_user_dirty(struct kmem_cache *cachep) {}
> >  static int slab_max_order = SLAB_MAX_ORDER_LO;
> >  static bool slab_max_order_set __initdata;
> >
> > +static inline struct kmem_cache *page_to_cache(struct page *page)
> > +{
> > +	return page->slab_cache;
> > +}
> > +
> >  static inline struct kmem_cache *virt_to_cache(const void *obj)
> >  {
> >  	struct page *page = virt_to_head_page(obj);
> > -	return page->slab_cache;
> > +	return page_to_cache(page);
> >  }
> >
> >  static inline void *index_to_obj(struct kmem_cache *cache, struct page *page,
> > @@ -3813,14 +3818,18 @@ void kfree(const void *objp)
> >  {
> >  	struct kmem_cache *c;
> >  	unsigned long flags;
> > +	struct page *page;
> >
> >  	trace_kfree(_RET_IP_, objp);
> >
> >  	if (unlikely(ZERO_OR_NULL_PTR(objp)))
> >  		return;
> > +	page = virt_to_head_page(objp);
> > +	if (CHECK_DATA_CORRUPTION(!PageSlab(page)))
> > +		return;
> >  	local_irq_save(flags);
> >  	kfree_debugcheck(objp);
> > -	c = virt_to_cache(objp);
> > +	c = page_to_cache(page);
> >  	debug_check_no_locks_freed(objp, c->object_size);
> >
> >  	debug_check_no_obj_freed(objp, c->object_size);
>
> Sorry for the delay, I've finally had time to look at this again.
>
> So, this only handles the kfree() case, not the kmem_cache_free() nor
> kmem_cache_free_bulk() cases, so it misses all the non-kmalloc
> allocations (and kfree() ultimately calls down to kmem_cache_free()).
> Similarly, my proposed patch missed the kfree() path. :P

yes

> As I work on a replacement, is the goal to avoid the checks while
> under local_irq_save()? (i.e. I can't just put the check in
> virt_to_cache(), etc.)

You would have to check all callers of virt_to_cache. I would simply
replace BUG_ON(!PageSlab()) in cache_from_obj. kmem_cache_free already
handles a NULL cache.
kmem_cache_free_bulk and build_detached_freelist can be made to do so.
--
Michal Hocko
SUSE Labs