From: andrey.konovalov@linux.dev
To: Marco Elver, Mark Rutland
Cc: Andrey Konovalov, Alexander Potapenko, Dmitry Vyukov, Andrey Ryabinin,
	Vincenzo Frascino, kasan-dev@googlegroups.com, Andrew Morton,
	linux-mm@kvack.org, Catalin Marinas, Peter Collingbourne, Feng Tang,
	stable@vger.kernel.org, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Vlastimil Babka, Roman Gushchin,
	Hyeonggon Yoo <42.hyeyoo@gmail.com>, linux-kernel@vger.kernel.org,
	Andrey Konovalov
Subject: [PATCH] kasan, slub: fix HW_TAGS zeroing with slub_debug
Date: Wed, 5 Jul 2023 14:44:02 +0200
Message-Id: <678ac92ab790dba9198f9ca14f405651b97c8502.1688561016.git.andreyknvl@google.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Andrey Konovalov

Commit 946fa0dbf2d8 ("mm/slub: extend redzone check to extra allocated
kmalloc space than requested") added precise kmalloc redzone poisoning
to the slub_debug functionality.

However, this commit didn't account for HW_TAGS KASAN fully
initializing the object via its built-in memory initialization feature.
Even though HW_TAGS KASAN memory initialization contains special
handling for the case when slub_debug is enabled, it does not account
for in-object slub_debug redzones. As a result, HW_TAGS KASAN can
overwrite these redzones and cause false-positive slub_debug reports.

To fix the issue, avoid HW_TAGS KASAN memory initialization altogether
when slub_debug is enabled. Implement this by moving the
__slub_debug_enabled check to slab_post_alloc_hook. Common slab code
seems like a more appropriate place for a slub_debug check anyway.

Fixes: 946fa0dbf2d8 ("mm/slub: extend redzone check to extra allocated kmalloc space than requested")
Cc: <stable@vger.kernel.org>
Reported-by: Mark Rutland
Signed-off-by: Andrey Konovalov
---
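Note (illustration only, not part of the patch): the standalone
userspace sketch below shows why zeroing the full object_size clobbers
an in-object redzone placed in the unused tail of a kmalloc allocation,
while zeroing only the requested size leaves it intact. The sizes and
the 0xcc fill pattern are made-up illustrative values, not the kernel's
actual metadata.

	/*
	 * Illustrative userspace sketch, not kernel code: slub_debug fills the
	 * unused tail of a kmalloc object, i.e. [orig_size, object_size), with
	 * a redzone pattern. Zeroing the whole object_size wipes that pattern,
	 * so a later redzone check sees 0x00 instead of the expected byte and
	 * reports corruption. Zeroing only orig_size keeps the redzone intact.
	 */
	#include <stdio.h>
	#include <string.h>

	#define OBJECT_SIZE 32	/* cache's object_size (illustrative) */
	#define ORIG_SIZE   20	/* size the caller asked for (illustrative) */
	#define REDZONE     0xcc	/* illustrative redzone byte */

	static int redzone_intact(const unsigned char *obj)
	{
		for (size_t i = ORIG_SIZE; i < OBJECT_SIZE; i++)
			if (obj[i] != REDZONE)
				return 0;
		return 1;
	}

	int main(void)
	{
		unsigned char obj[OBJECT_SIZE];

		/* "Allocation": debug code redzones the unused in-object tail. */
		memset(obj, 0, ORIG_SIZE);
		memset(obj + ORIG_SIZE, REDZONE, OBJECT_SIZE - ORIG_SIZE);

		/* Zeroing the full object wipes the redzone. */
		memset(obj, 0, OBJECT_SIZE);
		printf("zero object_size: redzone %s\n",
		       redzone_intact(obj) ? "intact" : "CLOBBERED");

		/* Re-redzone, then zero only the requested size: redzone kept. */
		memset(obj + ORIG_SIZE, REDZONE, OBJECT_SIZE - ORIG_SIZE);
		memset(obj, 0, ORIG_SIZE);
		printf("zero orig_size:   redzone %s\n",
		       redzone_intact(obj) ? "intact" : "CLOBBERED");

		return 0;
	}

Compiled with any C compiler, the first check reports the redzone as
clobbered and the second as intact.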
 mm/kasan/kasan.h | 12 ------------
 mm/slab.h        | 16 ++++++++++++++--
 2 files changed, 14 insertions(+), 14 deletions(-)

diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index b799f11e45dc..2e973b36fe07 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -466,18 +466,6 @@ static inline void kasan_unpoison(const void *addr, size_t size, bool init)
 
 	if (WARN_ON((unsigned long)addr & KASAN_GRANULE_MASK))
 		return;
-	/*
-	 * Explicitly initialize the memory with the precise object size to
-	 * avoid overwriting the slab redzone. This disables initialization in
-	 * the arch code and may thus lead to performance penalty. This penalty
-	 * does not affect production builds, as slab redzones are not enabled
-	 * there.
-	 */
-	if (__slub_debug_enabled() &&
-	    init && ((unsigned long)size & KASAN_GRANULE_MASK)) {
-		init = false;
-		memzero_explicit((void *)addr, size);
-	}
 	size = round_up(size, KASAN_GRANULE_SIZE);
 
 	hw_set_mem_tag_range((void *)addr, size, tag, init);
diff --git a/mm/slab.h b/mm/slab.h
index 6a5633b25eb5..9c0e09d0f81f 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -723,6 +723,7 @@ static inline void slab_post_alloc_hook(struct kmem_cache *s,
 					unsigned int orig_size)
 {
 	unsigned int zero_size = s->object_size;
+	bool kasan_init = init;
 	size_t i;
 
 	flags &= gfp_allowed_mask;
@@ -739,6 +740,17 @@ static inline void slab_post_alloc_hook(struct kmem_cache *s,
 	    (s->flags & SLAB_KMALLOC))
 		zero_size = orig_size;
 
+	/*
+	 * When slub_debug is enabled, avoid memory initialization integrated
+	 * into KASAN and instead zero out the memory via the memset below with
+	 * the proper size. Otherwise, KASAN might overwrite SLUB redzones and
+	 * cause false-positive reports. This does not lead to a performance
+	 * penalty on production builds, as slub_debug is not intended to be
+	 * enabled there.
+	 */
+	if (__slub_debug_enabled())
+		kasan_init = false;
+
 	/*
 	 * As memory initialization might be integrated into KASAN,
 	 * kasan_slab_alloc and initialization memset must be
@@ -747,8 +759,8 @@ static inline void slab_post_alloc_hook(struct kmem_cache *s,
 	 * As p[i] might get tagged, memset and kmemleak hook come after KASAN.
 	 */
 	for (i = 0; i < size; i++) {
-		p[i] = kasan_slab_alloc(s, p[i], flags, init);
-		if (p[i] && init && !kasan_has_integrated_init())
+		p[i] = kasan_slab_alloc(s, p[i], flags, kasan_init);
+		if (p[i] && init && (!kasan_init || !kasan_has_integrated_init()))
 			memset(p[i], 0, zero_size);
 		kmemleak_alloc_recursive(p[i], s->object_size, 1,
 					 s->flags, flags);
-- 
2.25.1
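
As an aside (illustration only, not part of the patch): a
self-contained sketch of the zeroing decision that the mm/slab.h hunks
above end up making. The two booleans are stand-ins for the kernel's
__slub_debug_enabled() and kasan_has_integrated_init(); the function
shape and the sizes are made up for illustration.

	/*
	 * Standalone userspace sketch of the post-patch zeroing decision in
	 * slab_post_alloc_hook(), not the actual kernel code. With slub_debug
	 * enabled, KASAN-integrated init is skipped and the explicit memset
	 * zeroes only zero_size bytes (for kmalloc caches whose unused tail is
	 * redzoned, that is the requested size), leaving the redzone alone.
	 */
	#include <stdbool.h>
	#include <stdio.h>

	static bool slub_debug_enabled = true;    /* stand-in for __slub_debug_enabled() */
	static bool kasan_integrated_init = true; /* stand-in for kasan_has_integrated_init() */

	static void zeroing_decision(bool init, unsigned int zero_size,
				     unsigned int object_size)
	{
		bool kasan_init = init;

		/* The fix: never let KASAN-integrated init run under slub_debug. */
		if (slub_debug_enabled)
			kasan_init = false;

		printf("KASAN-integrated init zeroes the object: %s\n",
		       kasan_init && kasan_integrated_init ? "yes (all bytes)" : "no");

		/* Mirrors: if (p[i] && init && (!kasan_init || !kasan_has_integrated_init())) */
		if (init && (!kasan_init || !kasan_integrated_init))
			printf("explicit memset zeroes %u of %u bytes, redzone preserved\n",
			       zero_size, object_size);
	}

	int main(void)
	{
		/* e.g. a 20-byte request from a 32-byte cache with slub_debug redzoning */
		zeroing_decision(true, 20, 32);
		return 0;
	}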