From: Feng Tang
To: Andrew Morton, Vlastimil Babka, Christoph Lameter, Pekka Enberg,
    David Rientjes, Joonsoo Kim, Roman Gushchin,
    Hyeonggon Yoo <42.hyeyoo@gmail.com>, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org
Cc: Dave Hansen, Robin Murphy, John Garry, Kefeng Wang, Feng Tang
Subject: [PATCH v3 3/3] mm/slub: extend redzone check to cover extra allocated kmalloc space than requested
Date: Wed, 27 Jul 2022 15:10:42 +0800
Message-Id: <20220727071042.8796-4-feng.tang@intel.com>
In-Reply-To: <20220727071042.8796-1-feng.tang@intel.com>
References: <20220727071042.8796-1-feng.tang@intel.com>

kmalloc will round up the request size to a fixed size (mostly a power
of 2), so there can be extra space beyond what was requested, whose size
is the actual buffer size minus the original request size. To better
detect out-of-bounds access or abuse of this space, add a redzone sanity
check for it.

In the current kernel, some kmalloc users already know of the existence
of this space and utilize it after calling ksize() to learn the real
size of the allocated buffer. So skip the sanity check for objects on
which ksize() has been called, treating them as legitimate users.
Suggested-by: Vlastimil Babka
Signed-off-by: Feng Tang
---
 mm/slub.c | 52 +++++++++++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 49 insertions(+), 3 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 946919066a4b..added2653bb0 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -836,6 +836,11 @@ static inline void set_orig_size(struct kmem_cache *s,
 	*(unsigned int *)p = orig_size;
 }
 
+static inline void skip_orig_size_check(struct kmem_cache *s, const void *object)
+{
+	set_orig_size(s, (void *)object, s->object_size);
+}
+
 static unsigned int get_orig_size(struct kmem_cache *s, void *object)
 {
 	void *p = kasan_reset_tag(object);
@@ -967,13 +972,35 @@ static __printf(3, 4) void slab_err(struct kmem_cache *s, struct slab *slab,
 static void init_object(struct kmem_cache *s, void *object, u8 val)
 {
 	u8 *p = kasan_reset_tag(object);
+	unsigned int orig_size = s->object_size;
 
-	if (s->flags & SLAB_RED_ZONE)
+	if (s->flags & SLAB_RED_ZONE) {
 		memset(p - s->red_left_pad, val, s->red_left_pad);
 
+		if (slub_debug_orig_size(s) && val == SLUB_RED_ACTIVE) {
+			unsigned int zone_start;
+
+			orig_size = get_orig_size(s, object);
+			zone_start = orig_size;
+
+			if (!freeptr_outside_object(s))
+				zone_start = max_t(unsigned int, orig_size,
+						s->offset + sizeof(void *));
+
+			/*
+			 * Redzone the extra space kmalloc allocated
+			 * beyond the requested size.
+			 */
+			if (zone_start < s->object_size)
+				memset(p + zone_start, val,
+					s->object_size - zone_start);
+		}
+	}
+
 	if (s->flags & __OBJECT_POISON) {
-		memset(p, POISON_FREE, s->object_size - 1);
-		p[s->object_size - 1] = POISON_END;
+		memset(p, POISON_FREE, orig_size - 1);
+		p[orig_size - 1] = POISON_END;
 	}
 
 	if (s->flags & SLAB_RED_ZONE)
@@ -1120,6 +1147,7 @@ static int check_object(struct kmem_cache *s, struct slab *slab,
 {
 	u8 *p = object;
 	u8 *endobject = object + s->object_size;
+	unsigned int orig_size;
 
 	if (s->flags & SLAB_RED_ZONE) {
 		if (!check_bytes_and_report(s, slab, object, "Left Redzone",
@@ -1129,6 +1157,20 @@ static int check_object(struct kmem_cache *s, struct slab *slab,
 		if (!check_bytes_and_report(s, slab, object, "Right Redzone",
 			endobject, val, s->inuse - s->object_size))
 			return 0;
+
+		if (slub_debug_orig_size(s) && val == SLUB_RED_ACTIVE) {
+			orig_size = get_orig_size(s, object);
+
+			if (!freeptr_outside_object(s))
+				orig_size = max_t(unsigned int, orig_size,
+						s->offset + sizeof(void *));
+			if (s->object_size > orig_size &&
+			    !check_bytes_and_report(s, slab, object,
+				"kmalloc Redzone", p + orig_size,
+				val, s->object_size - orig_size)) {
+				return 0;
+			}
+		}
 	} else {
 		if ((s->flags & SLAB_POISON) && s->object_size < s->inuse) {
 			check_bytes_and_report(s, slab, p, "Alignment padding",
@@ -4588,6 +4630,10 @@ size_t __ksize(const void *object)
 	if (unlikely(!folio_test_slab(folio)))
 		return folio_size(folio);
 
+#ifdef CONFIG_SLUB_DEBUG
+	skip_orig_size_check(folio_slab(folio)->slab_cache, object);
+#endif
+
 	return slab_ksize(folio_slab(folio)->slab_cache);
 }
 EXPORT_SYMBOL(__ksize);
-- 
2.27.0