Date: Wed, 26 Jun 2019 16:20:13 +0200
In-Reply-To: <20190626142014.141844-1-elver@google.com>
Message-Id: <20190626142014.141844-5-elver@google.com>
References: <20190626142014.141844-1-elver@google.com>
Subject: [PATCH v3 4/5] mm/slab: Refactor common ksize KASAN logic into slab_common.c
From: Marco Elver
To: elver@google.com
Cc: linux-kernel@vger.kernel.org, Andrey Ryabinin, Dmitry Vyukov,
	Alexander Potapenko, Andrey Konovalov, Christoph Lameter,
	Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton,
	Mark Rutland, kasan-dev@googlegroups.com, linux-mm@kvack.org

This refactors the ksize() code that is common to the various allocators
into slab_common.c: __ksize() is the allocator-specific implementation
without instrumentation, whereas ksize() includes the required KASAN logic.
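For illustration only (this sketch is not part of the patch, and the
function name is made up), the ksize() contract that motivates the KASAN
unpoisoning looks roughly like this from a caller's point of view:
kmalloc() may round the allocation up, and the caller may legitimately use
the extra bytes reported by ksize(), which is only safe under KASAN because
ksize() unpoisons the whole area; the uninstrumented __ksize() deliberately
skips that step.

#include <linux/errno.h>
#include <linux/slab.h>
#include <linux/string.h>

/* Hypothetical caller sketch, not part of this patch. */
static int use_full_allocation(size_t len)
{
	void *buf = kmalloc(len, GFP_KERNEL);
	size_t cap;

	if (!buf)
		return -ENOMEM;

	/* May be larger than 'len' due to allocator rounding. */
	cap = ksize(buf);

	/*
	 * Using all 'cap' bytes is legitimate; under KASAN this relies on
	 * ksize() having unpoisoned the whole area, which __ksize() does
	 * not do.
	 */
	memset(buf, 0, cap);

	kfree(buf);
	return 0;
}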
Signed-off-by: Marco Elver
Cc: Andrey Ryabinin
Cc: Dmitry Vyukov
Cc: Alexander Potapenko
Cc: Andrey Konovalov
Cc: Christoph Lameter
Cc: Pekka Enberg
Cc: David Rientjes
Cc: Joonsoo Kim
Cc: Andrew Morton
Cc: Mark Rutland
Cc: kasan-dev@googlegroups.com
Cc: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org
---
 include/linux/slab.h |  1 +
 mm/slab.c            | 28 ++++++----------------------
 mm/slab_common.c     | 26 ++++++++++++++++++++++++++
 mm/slob.c            |  4 ++--
 mm/slub.c            | 14 ++------------
 5 files changed, 37 insertions(+), 36 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index 9449b19c5f10..98c3d12b7275 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -184,6 +184,7 @@ void * __must_check __krealloc(const void *, size_t, gfp_t);
 void * __must_check krealloc(const void *, size_t, gfp_t);
 void kfree(const void *);
 void kzfree(const void *);
+size_t __ksize(const void *);
 size_t ksize(const void *);
 
 #ifdef CONFIG_HAVE_HARDENED_USERCOPY_ALLOCATOR
diff --git a/mm/slab.c b/mm/slab.c
index f7117ad9b3a3..394e7c7a285e 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -4204,33 +4204,17 @@ void __check_heap_object(const void *ptr, unsigned long n, struct page *page,
 #endif /* CONFIG_HARDENED_USERCOPY */
 
 /**
- * ksize - get the actual amount of memory allocated for a given object
- * @objp: Pointer to the object
+ * __ksize -- Uninstrumented ksize.
  *
- * kmalloc may internally round up allocations and return more memory
- * than requested. ksize() can be used to determine the actual amount of
- * memory allocated. The caller may use this additional memory, even though
- * a smaller amount of memory was initially specified with the kmalloc call.
- * The caller must guarantee that objp points to a valid object previously
- * allocated with either kmalloc() or kmem_cache_alloc(). The object
- * must not be freed during the duration of the call.
- *
- * Return: size of the actual memory used by @objp in bytes
+ * Unlike ksize(), __ksize() is uninstrumented, and does not provide the same
+ * safety checks as ksize() with KASAN instrumentation enabled.
  */
-size_t ksize(const void *objp)
+size_t __ksize(const void *objp)
 {
-	size_t size;
-
 	BUG_ON(!objp);
 	if (unlikely(objp == ZERO_SIZE_PTR))
 		return 0;
 
-	size = virt_to_cache(objp)->object_size;
-	/* We assume that ksize callers could use the whole allocated area,
-	 * so we need to unpoison this area.
-	 */
-	kasan_unpoison_shadow(objp, size);
-
-	return size;
+	return virt_to_cache(objp)->object_size;
 }
-EXPORT_SYMBOL(ksize);
+EXPORT_SYMBOL(__ksize);
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 58251ba63e4a..b7c6a40e436a 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -1597,6 +1597,32 @@ void kzfree(const void *p)
 }
 EXPORT_SYMBOL(kzfree);
 
+/**
+ * ksize - get the actual amount of memory allocated for a given object
+ * @objp: Pointer to the object
+ *
+ * kmalloc may internally round up allocations and return more memory
+ * than requested. ksize() can be used to determine the actual amount of
+ * memory allocated. The caller may use this additional memory, even though
+ * a smaller amount of memory was initially specified with the kmalloc call.
+ * The caller must guarantee that objp points to a valid object previously
+ * allocated with either kmalloc() or kmem_cache_alloc(). The object
+ * must not be freed during the duration of the call.
+ *
+ * Return: size of the actual memory used by @objp in bytes
+ */
+size_t ksize(const void *objp)
+{
+	size_t size = __ksize(objp);
+	/*
+	 * We assume that ksize callers could use whole allocated area,
+	 * so we need to unpoison this area.
+	 */
+	kasan_unpoison_shadow(objp, size);
+	return size;
+}
+EXPORT_SYMBOL(ksize);
+
 /* Tracepoints definitions. */
 EXPORT_TRACEPOINT_SYMBOL(kmalloc);
 EXPORT_TRACEPOINT_SYMBOL(kmem_cache_alloc);
diff --git a/mm/slob.c b/mm/slob.c
index 84aefd9b91ee..7f421d0ca9ab 100644
--- a/mm/slob.c
+++ b/mm/slob.c
@@ -527,7 +527,7 @@ void kfree(const void *block)
 EXPORT_SYMBOL(kfree);
 
 /* can't use ksize for kmem_cache_alloc memory, only kmalloc */
-size_t ksize(const void *block)
+size_t __ksize(const void *block)
 {
 	struct page *sp;
 	int align;
@@ -545,7 +545,7 @@ size_t ksize(const void *block)
 	m = (unsigned int *)(block - align);
 	return SLOB_UNITS(*m) * SLOB_UNIT;
 }
-EXPORT_SYMBOL(ksize);
+EXPORT_SYMBOL(__ksize);
 
 int __kmem_cache_create(struct kmem_cache *c, slab_flags_t flags)
 {
diff --git a/mm/slub.c b/mm/slub.c
index cd04dbd2b5d0..05a8d17dd9b2 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3901,7 +3901,7 @@ void __check_heap_object(const void *ptr, unsigned long n, struct page *page,
 }
 #endif /* CONFIG_HARDENED_USERCOPY */
 
-static size_t __ksize(const void *object)
+size_t __ksize(const void *object)
 {
 	struct page *page;
 
@@ -3917,17 +3917,7 @@ static size_t __ksize(const void *object)
 
 	return slab_ksize(page->slab_cache);
 }
-
-size_t ksize(const void *object)
-{
-	size_t size = __ksize(object);
-	/* We assume that ksize callers could use whole allocated area,
-	 * so we need to unpoison this area.
-	 */
-	kasan_unpoison_shadow(object, size);
-	return size;
-}
-EXPORT_SYMBOL(ksize);
+EXPORT_SYMBOL(__ksize);
 
 void kfree(const void *x)
 {
-- 
2.22.0.410.gd8fdbe21b5-goog