Date: Tue, 29 Mar 2022 14:39:47 +0200
In-Reply-To: <20220329124017.737571-1-glider@google.com>
Message-Id: <20220329124017.737571-19-glider@google.com>
Mime-Version: 1.0
References: <20220329124017.737571-1-glider@google.com>
X-Mailer: git-send-email 2.35.1.1021.g381101b075-goog
Subject: [PATCH v2 18/48] kmsan: mm: call KMSAN hooks from SLUB code
From: Alexander Potapenko
To: glider@google.com
Cc: Alexander Viro, Andrew Morton, Andrey Konovalov, Andy Lutomirski,
    Arnd Bergmann, Borislav Petkov, Christoph Hellwig, Christoph Lameter,
    David Rientjes, Dmitry Vyukov, Eric Dumazet, Greg Kroah-Hartman,
    Herbert Xu, Ilya Leoshkevich, Ingo Molnar, Jens Axboe, Joonsoo Kim,
    Kees Cook, Marco Elver, Mark Rutland, Matthew Wilcox,
    "Michael S. Tsirkin", Pekka Enberg, Peter Zijlstra, Petr Mladek,
    Steven Rostedt, Thomas Gleixner, Vasily Gorbik, Vegard Nossum,
    Vlastimil Babka, linux-mm@kvack.org, linux-arch@vger.kernel.org,
    linux-kernel@vger.kernel.org
Content-Type: text/plain; charset="UTF-8"

In order to report uninitialized memory coming from heap allocations,
KMSAN has to poison them unless they are created with __GFP_ZERO.
Conveniently, KMSAN hooks are needed in exactly the places where
init_on_alloc/init_on_free initialization is performed.

Signed-off-by: Alexander Potapenko

---
v2:
 -- move the implementation of SLUB hooks here

Link: https://linux-review.googlesource.com/id/I6954b386c5c5d7f99f48bb6cbcc74b75136ce86e
---
 include/linux/kmsan.h | 57 ++++++++++++++++++++++++++++++
 mm/kmsan/hooks.c      | 80 +++++++++++++++++++++++++++++++++++++++++++
 mm/slab.h             |  1 +
 mm/slub.c             | 21 ++++++++++--
 4 files changed, 157 insertions(+), 2 deletions(-)

diff --git a/include/linux/kmsan.h b/include/linux/kmsan.h
index da41850b46cbd..ed3630068e2ef 100644
--- a/include/linux/kmsan.h
+++ b/include/linux/kmsan.h
@@ -16,6 +16,7 @@
 #include

 struct page;
+struct kmem_cache;

 #ifdef CONFIG_KMSAN
@@ -73,6 +74,44 @@ void kmsan_free_page(struct page *page, unsigned int order);
  */
 void kmsan_copy_page_meta(struct page *dst, struct page *src);

+/**
+ * kmsan_slab_alloc() - Notify KMSAN about a slab allocation.
+ * @s: slab cache the object belongs to.
+ * @object: object pointer.
+ * @flags: GFP flags passed to the allocator.
+ *
+ * Depending on cache flags and GFP flags, KMSAN sets up the metadata of the
+ * newly created object, marking it as initialized or uninitialized.
+ */
+void kmsan_slab_alloc(struct kmem_cache *s, void *object, gfp_t flags);
+
+/**
+ * kmsan_slab_free() - Notify KMSAN about a slab deallocation.
+ * @s: slab cache the object belongs to.
+ * @object: object pointer.
+ *
+ * KMSAN marks the freed object as uninitialized.
+ */
+void kmsan_slab_free(struct kmem_cache *s, void *object);
+
+/**
+ * kmsan_kmalloc_large() - Notify KMSAN about a large slab allocation.
+ * @ptr: object pointer.
+ * @size: object size.
+ * @flags: GFP flags passed to the allocator.
+ *
+ * Similar to kmsan_slab_alloc(), but for large allocations.
+ */
+void kmsan_kmalloc_large(const void *ptr, size_t size, gfp_t flags);
+
+/**
+ * kmsan_kfree_large() - Notify KMSAN about a large slab deallocation.
+ * @ptr: object pointer.
+ *
+ * Similar to kmsan_slab_free(), but for large allocations.
+ */
+void kmsan_kfree_large(const void *ptr);
+
 /**
  * kmsan_map_kernel_range_noflush() - Notify KMSAN about a vmap.
  * @start: start of vmapped range.
@@ -139,6 +178,24 @@ static inline void kmsan_copy_page_meta(struct page *dst, struct page *src)
 {
 }

+static inline void kmsan_slab_alloc(struct kmem_cache *s, void *object,
+				    gfp_t flags)
+{
+}
+
+static inline void kmsan_slab_free(struct kmem_cache *s, void *object)
+{
+}
+
+static inline void kmsan_kmalloc_large(const void *ptr, size_t size,
+				       gfp_t flags)
+{
+}
+
+static inline void kmsan_kfree_large(const void *ptr)
+{
+}
+
 static inline void kmsan_vmap_pages_range_noflush(unsigned long start,
 						  unsigned long end,
 						  pgprot_t prot,
diff --git a/mm/kmsan/hooks.c b/mm/kmsan/hooks.c
index 5d886df57adca..e7c3ff48ed5cd 100644
--- a/mm/kmsan/hooks.c
+++ b/mm/kmsan/hooks.c
@@ -26,6 +26,86 @@
  * skipping effects of functions like memset() inside instrumented code.
  */

+void kmsan_slab_alloc(struct kmem_cache *s, void *object, gfp_t flags)
+{
+	if (unlikely(object == NULL))
+		return;
+	if (!kmsan_enabled || kmsan_in_runtime())
+		return;
+	/*
+	 * There's a ctor or this is an RCU cache - do nothing. The memory
+	 * status hasn't changed since last use.
+	 */
+	if (s->ctor || (s->flags & SLAB_TYPESAFE_BY_RCU))
+		return;
+
+	kmsan_enter_runtime();
+	if (flags & __GFP_ZERO)
+		kmsan_internal_unpoison_memory(object, s->object_size,
+					       KMSAN_POISON_CHECK);
+	else
+		kmsan_internal_poison_memory(object, s->object_size, flags,
+					     KMSAN_POISON_CHECK);
+	kmsan_leave_runtime();
+}
+EXPORT_SYMBOL(kmsan_slab_alloc);
+
+void kmsan_slab_free(struct kmem_cache *s, void *object)
+{
+	if (!kmsan_enabled || kmsan_in_runtime())
+		return;
+
+	/* RCU slabs could be legally used after free within the RCU period */
+	if (unlikely(s->flags & (SLAB_TYPESAFE_BY_RCU | SLAB_POISON)))
+		return;
+	/*
+	 * If there's a constructor, freed memory must remain in the same state
+	 * until the next allocation. We cannot save its state to detect
+	 * use-after-free bugs, instead we just keep it unpoisoned.
+	 */
+	if (s->ctor)
+		return;
+	kmsan_enter_runtime();
+	kmsan_internal_poison_memory(object, s->object_size, GFP_KERNEL,
+				     KMSAN_POISON_CHECK | KMSAN_POISON_FREE);
+	kmsan_leave_runtime();
+}
+EXPORT_SYMBOL(kmsan_slab_free);
+
+void kmsan_kmalloc_large(const void *ptr, size_t size, gfp_t flags)
+{
+	if (unlikely(ptr == NULL))
+		return;
+	if (!kmsan_enabled || kmsan_in_runtime())
+		return;
+	kmsan_enter_runtime();
+	if (flags & __GFP_ZERO)
+		kmsan_internal_unpoison_memory((void *)ptr, size,
+					       /*checked*/ true);
+	else
+		kmsan_internal_poison_memory((void *)ptr, size, flags,
+					     KMSAN_POISON_CHECK);
+	kmsan_leave_runtime();
+}
+EXPORT_SYMBOL(kmsan_kmalloc_large);
+
+void kmsan_kfree_large(const void *ptr)
+{
+	struct page *page;
+
+	if (!kmsan_enabled || kmsan_in_runtime())
+		return;
+	kmsan_enter_runtime();
+	page = virt_to_head_page((void *)ptr);
+	KMSAN_WARN_ON(ptr != page_address(page));
+	kmsan_internal_poison_memory((void *)ptr,
+				     PAGE_SIZE << compound_order(page),
+				     GFP_KERNEL,
+				     KMSAN_POISON_CHECK | KMSAN_POISON_FREE);
+	kmsan_leave_runtime();
+}
+EXPORT_SYMBOL(kmsan_kfree_large);
+
 static unsigned long vmalloc_shadow(unsigned long addr)
 {
 	return (unsigned long)kmsan_get_metadata((void *)addr,
diff --git a/mm/slab.h b/mm/slab.h
index c7f2abc2b154c..c2538d856ec45 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -734,6 +734,7 @@ static inline void slab_post_alloc_hook(struct kmem_cache *s,
 			memset(p[i], 0, s->object_size);
 		kmemleak_alloc_recursive(p[i], s->object_size, 1,
 					 s->flags, flags);
+		kmsan_slab_alloc(s, p[i], flags);
 	}

 	memcg_slab_post_alloc_hook(s, objcg, flags, size, p);
diff --git a/mm/slub.c b/mm/slub.c
index 261474092e43e..9b266f6b384b9 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -22,6 +22,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -357,18 +358,28 @@ static void prefetch_freepointer(const struct kmem_cache *s, void *object)
 		prefetchw(object + s->offset);
 }

+/*
+ * When running under KMSAN, get_freepointer_safe() may return an uninitialized
+ * pointer value in the case the current thread loses the race for the next
+ * memory chunk in the freelist. In that case this_cpu_cmpxchg_double() in
+ * slab_alloc_node() will fail, so the uninitialized value won't be used, but
+ * KMSAN will still check all arguments of cmpxchg because of imperfect
+ * handling of inline assembly.
+ * To work around this problem, use kmsan_init() to force initialize the
+ * return value of get_freepointer_safe().
+ */
 static inline void *get_freepointer_safe(struct kmem_cache *s, void *object)
 {
 	unsigned long freepointer_addr;
 	void *p;

 	if (!debug_pagealloc_enabled_static())
-		return get_freepointer(s, object);
+		return kmsan_init(get_freepointer(s, object));

 	object = kasan_reset_tag(object);
 	freepointer_addr = (unsigned long)object + s->offset;
 	copy_from_kernel_nofault(&p, (void **)freepointer_addr, sizeof(p));
-	return freelist_ptr(s, p, freepointer_addr);
+	return kmsan_init(freelist_ptr(s, p, freepointer_addr));
 }

 static inline void set_freepointer(struct kmem_cache *s, void *object, void *fp)
@@ -1683,6 +1694,7 @@ static inline void *kmalloc_large_node_hook(void *ptr, size_t size, gfp_t flags)
 	ptr = kasan_kmalloc_large(ptr, size, flags);
 	/* As ptr might get tagged, call kmemleak hook after KASAN. */
 	kmemleak_alloc(ptr, size, 1, flags);
+	kmsan_kmalloc_large(ptr, size, flags);
 	return ptr;
 }

@@ -1690,12 +1702,14 @@ static __always_inline void kfree_hook(void *x)
 {
 	kmemleak_free(x);
 	kasan_kfree_large(x);
+	kmsan_kfree_large(x);
 }

 static __always_inline bool slab_free_hook(struct kmem_cache *s,
 					   void *x, bool init)
 {
 	kmemleak_free_recursive(x, s->flags);
+	kmsan_slab_free(s, x);

 	debug_check_no_locks_freed(x, s->object_size);

@@ -3729,6 +3743,7 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
 	 */
 	slab_post_alloc_hook(s, objcg, flags, size, p,
 			     slab_want_init_on_alloc(flags, s));
+
 	return i;
 error:
 	slub_put_cpu_ptr(s->cpu_slab);
@@ -5910,6 +5925,7 @@ static char *create_unique_id(struct kmem_cache *s)
 	p += sprintf(p, "%07u", s->size);

 	BUG_ON(p > name + ID_STR_LENGTH - 1);
+	kmsan_unpoison_memory(name, p - name);
 	return name;
 }

@@ -6011,6 +6027,7 @@ static int sysfs_slab_alias(struct kmem_cache *s, const char *name)
 	al->name = name;
 	al->next = alias_list;
 	alias_list = al;
+	kmsan_unpoison_memory(al, sizeof(struct saved_alias));
 	return 0;
 }
-- 
2.35.1.1021.g381101b075-goog
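
[Editor's note: the allocation-time decision that kmsan_slab_alloc() makes in the
patch above can be sketched as a small userspace model. This is only an
illustration: the MODEL_* flag values, struct model_cache, and
model_slab_alloc() below are invented stand-ins for the kernel's __GFP_ZERO,
SLAB_TYPESAFE_BY_RCU, struct kmem_cache, and the real hook, and are not part
of the patch.]

```c
#include <assert.h>
#include <stdbool.h>

/* Invented stand-ins for kernel flags; the values are arbitrary. */
#define MODEL_GFP_ZERO     (1u << 0) /* models __GFP_ZERO */
#define MODEL_TYPESAFE_RCU (1u << 1) /* models SLAB_TYPESAFE_BY_RCU */

/* A toy stand-in for struct kmem_cache. */
struct model_cache {
	unsigned int flags;
	bool has_ctor;
};

enum model_action { ACT_SKIP, ACT_UNPOISON, ACT_POISON };

/*
 * Mirrors the branch structure of kmsan_slab_alloc(): caches with a
 * constructor or RCU semantics are left untouched (their metadata is still
 * valid from the previous use of the object); otherwise the object is marked
 * initialized only if the allocator zeroed it, and poisoned otherwise.
 */
static enum model_action model_slab_alloc(const struct model_cache *s,
					  unsigned int gfp_flags)
{
	if (s->has_ctor || (s->flags & MODEL_TYPESAFE_RCU))
		return ACT_SKIP;
	if (gfp_flags & MODEL_GFP_ZERO)
		return ACT_UNPOISON;
	return ACT_POISON;
}
```

In other words, a plain kmalloc()-backed object comes out poisoned (reads
before a write are reportable), a zeroed one comes out fully initialized, and
ctor/RCU caches keep whatever metadata their memory already carried.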