From: Kees Cook
To: linux-kernel@vger.kernel.org
Cc: Kees Cook, David Windsor, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, Laura Abbott,
	Ingo Molnar, Mark Rutland, linux-mm@kvack.org,
	linux-xfs@vger.kernel.org, Linus Torvalds, Alexander Viro,
	Andy Lutomirski, Christoph Hellwig, "David S. Miller",
	"Martin K. Petersen", Paolo Bonzini, Christian Borntraeger,
	Christoffer Dall, Dave Kleikamp, Jan Kara, Luis de Bethencourt,
	Marc Zyngier, Rik van Riel, Matthew Garrett,
	linux-fsdevel@vger.kernel.org, linux-arch@vger.kernel.org,
	netdev@vger.kernel.org, kernel-hardening@lists.openwall.com
Subject: [PATCH 05/36] usercopy: WARN() on slab cache usercopy region violations
Date: Tue, 9 Jan 2018 12:55:34 -0800
Message-Id: <1515531365-37423-6-git-send-email-keescook@chromium.org>
In-Reply-To: <1515531365-37423-1-git-send-email-keescook@chromium.org>
References: <1515531365-37423-1-git-send-email-keescook@chromium.org>
X-Mailing-List: linux-kernel@vger.kernel.org

From: David Windsor

This patch adds checking of usercopy cache whitelisting. It is modified
from Brad Spengler/PaX Team's PAX_USERCOPY whitelisting code in the last
public patch of grsecurity/PaX, based on my understanding of the code.
Changes or omissions from the original code are mine and don't reflect
the original grsecurity/PaX code.
The SLAB and SLUB allocators are modified to WARN() on all copy operations
in which the kernel heap memory being copied falls outside of the cache's
defined usercopy region.

Signed-off-by: David Windsor
[kees: adjust commit log and comments, switch to WARN-by-default]
Cc: Christoph Lameter
Cc: Pekka Enberg
Cc: David Rientjes
Cc: Joonsoo Kim
Cc: Andrew Morton
Cc: Laura Abbott
Cc: Ingo Molnar
Cc: Mark Rutland
Cc: linux-mm@kvack.org
Cc: linux-xfs@vger.kernel.org
Signed-off-by: Kees Cook
---
 mm/slab.c     | 30 +++++++++++++++++++++++++-----
 mm/slub.c     | 34 +++++++++++++++++++++++++++-------
 mm/usercopy.c | 12 ++++++++++++
 3 files changed, 64 insertions(+), 12 deletions(-)

diff --git a/mm/slab.c b/mm/slab.c
index f1ead7b7909d..d9939828f8e4 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -4392,7 +4392,9 @@ module_init(slab_proc_init);

 #ifdef CONFIG_HARDENED_USERCOPY
 /*
- * Rejects objects that are incorrectly sized.
+ * Rejects incorrectly sized objects and objects that are to be copied
+ * to/from userspace but do not fall entirely within the containing slab
+ * cache's usercopy region.
  *
  * Returns NULL if check passes, otherwise const char * to name of cache
  * to indicate an error.
@@ -4412,11 +4414,29 @@ int __check_heap_object(const void *ptr, unsigned long n, struct page *page,
	/* Find offset within object. */
	offset = ptr - index_to_obj(cachep, page, objnr) - obj_offset(cachep);

-	/* Allow address range falling entirely within object size. */
-	if (offset <= cachep->object_size && n <= cachep->object_size - offset)
-		return 0;
+	/* Make sure object falls entirely within cache's usercopy region. */
+	if (offset < cachep->useroffset ||
+	    offset - cachep->useroffset > cachep->usersize ||
+	    n > cachep->useroffset - offset + cachep->usersize) {
+		/*
+		 * If the copy is still within the allocated object, produce
+		 * a warning instead of rejecting the copy. This is intended
+		 * to be a temporary method to find any missing usercopy
+		 * whitelists.
+		 */
+		if (offset <= cachep->object_size &&
+		    n <= cachep->object_size - offset) {
+			WARN_ONCE(1, "unexpected usercopy %s with bad or missing whitelist with SLAB object '%s' (offset %lu, size %lu)",
+				  to_user ? "exposure" : "overwrite",
+				  cachep->name, offset, n);
+			return 0;
+		}

-	return report_usercopy("SLAB object", cachep->name, to_user, offset, n);
+		return report_usercopy("SLAB object", cachep->name, to_user,
+				       offset, n);
+	}
+
+	return 0;
 }
 #endif /* CONFIG_HARDENED_USERCOPY */

diff --git a/mm/slub.c b/mm/slub.c
index 8738a8d8bf8e..2aa4972a2058 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3813,7 +3813,9 @@ EXPORT_SYMBOL(__kmalloc_node);

 #ifdef CONFIG_HARDENED_USERCOPY
 /*
- * Rejects objects that are incorrectly sized.
+ * Rejects incorrectly sized objects and objects that are to be copied
+ * to/from userspace but do not fall entirely within the containing slab
+ * cache's usercopy region.
  *
  * Returns NULL if check passes, otherwise const char * to name of cache
  * to indicate an error.
@@ -3823,11 +3825,9 @@ int __check_heap_object(const void *ptr, unsigned long n, struct page *page,
 {
	struct kmem_cache *s;
	unsigned long offset;
-	size_t object_size;

	/* Find object and usable object size. */
	s = page->slab_cache;
-	object_size = slab_ksize(s);

	/* Reject impossible pointers. */
	if (ptr < page_address(page))
@@ -3845,11 +3845,31 @@ int __check_heap_object(const void *ptr, unsigned long n, struct page *page,
		offset -= s->red_left_pad;
	}

-	/* Allow address range falling entirely within object size. */
-	if (offset <= object_size && n <= object_size - offset)
-		return 0;
+	/* Make sure object falls entirely within cache's usercopy region. */
+	if (offset < s->useroffset ||
+	    offset - s->useroffset > s->usersize ||
+	    n > s->useroffset - offset + s->usersize) {
+		size_t object_size;

-	return report_usercopy("SLUB object", s->name, to_user, offset, n);
+		/*
+		 * If the copy is still within the allocated object, produce
+		 * a warning instead of rejecting the copy.
+		 * This is intended to be a temporary method to find any
+		 * missing usercopy whitelists.
+		 */
+		object_size = slab_ksize(s);
+		if (offset <= object_size && n <= object_size - offset) {
+			WARN_ONCE(1, "unexpected usercopy %s with bad or missing whitelist with SLUB object '%s' (offset %lu size %lu)",
+				  to_user ? "exposure" : "overwrite",
+				  s->name, offset, n);
+			return 0;
+		}
+
+		return report_usercopy("SLUB object", s->name, to_user,
+				       offset, n);
+	}
+
+	return 0;
 }
 #endif /* CONFIG_HARDENED_USERCOPY */

diff --git a/mm/usercopy.c b/mm/usercopy.c
index a8426a502136..4ed615d4efc8 100644
--- a/mm/usercopy.c
+++ b/mm/usercopy.c
@@ -58,6 +58,18 @@ static noinline int check_stack_object(const void *obj, unsigned long len)
	return GOOD_STACK;
 }

+/*
+ * If this function is reached, then CONFIG_HARDENED_USERCOPY has found an
+ * unexpected state during a copy_from_user() or copy_to_user() call.
+ * There are several checks being performed on the buffer by the
+ * __check_object_size() function. Normal stack buffer usage should never
+ * trip the checks, and kernel text addressing will always trip the check.
+ * For cache objects, it is checking that only the whitelisted range of
+ * bytes for a given cache is being accessed (via the cache's usersize and
+ * useroffset fields). To adjust a cache whitelist, use the usercopy-aware
+ * kmem_cache_create_usercopy() function to create the cache (and
+ * carefully audit the whitelist range).
+ */
 int report_usercopy(const char *name, const char *detail, bool to_user,
		    unsigned long offset, unsigned long len)
 {
-- 
2.7.4