From: Marco Elver
Date: Thu, 21 Dec 2023 21:08:04 +0100
Subject: Re: [PATCH mm 1/4] kasan: clean up kasan_cache_create
To: andrey.konovalov@linux.dev
Cc: Andrew Morton, Juntong Deng, Andrey Konovalov, Alexander Potapenko,
    Dmitry Vyukov, Andrey Ryabinin, kasan-dev@googlegroups.com,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrey Konovalov
In-Reply-To: <20231221183540.168428-1-andrey.konovalov@linux.dev>
References: <20231221183540.168428-1-andrey.konovalov@linux.dev>

On Thu, 21 Dec 2023 at 19:35, wrote:
>
> From: Andrey Konovalov
>
> Reorganize the code to avoid nested if/else checks to improve the
> readability.
>
> Also drop the confusing comments about KMALLOC_MAX_SIZE checks: they
> are relevant for both SLUB and SLAB (originally, the comments likely
> confused KMALLOC_MAX_SIZE with KMALLOC_MAX_CACHE_SIZE).
>
> Fixes: a5989d4ed40c ("kasan: improve free meta storage in Generic KASAN")
> Signed-off-by: Andrey Konovalov

Reviewed-by: Marco Elver

> ---
>  mm/kasan/generic.c | 67 +++++++++++++++++++++++++++-------------------
>  1 file changed, 39 insertions(+), 28 deletions(-)
>
> diff --git a/mm/kasan/generic.c b/mm/kasan/generic.c
> index 54e20b2bc3e1..769e43e05d0b 100644
> --- a/mm/kasan/generic.c
> +++ b/mm/kasan/generic.c
> @@ -381,16 +381,11 @@ void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
>
>          ok_size = *size;
>
> -        /* Add alloc meta into redzone. */
> +        /* Add alloc meta into the redzone. */
>          cache->kasan_info.alloc_meta_offset = *size;
>          *size += sizeof(struct kasan_alloc_meta);
>
> -        /*
> -         * If alloc meta doesn't fit, don't add it.
> -         * This can only happen with SLAB, as it has KMALLOC_MAX_SIZE equal
> -         * to KMALLOC_MAX_CACHE_SIZE and doesn't fall back to page_alloc for
> -         * larger sizes.
> -         */
> +        /* If alloc meta doesn't fit, don't add it. */
>          if (*size > KMALLOC_MAX_SIZE) {
>                  cache->kasan_info.alloc_meta_offset = 0;
>                  *size = ok_size;
> @@ -401,36 +396,52 @@ void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
>          orig_alloc_meta_offset = cache->kasan_info.alloc_meta_offset;
>
>          /*
> -         * Add free meta into redzone when it's not possible to store
> +         * Store free meta in the redzone when it's not possible to store
>           * it in the object. This is the case when:
>           * 1. Object is SLAB_TYPESAFE_BY_RCU, which means that it can
>           *    be touched after it was freed, or
>           * 2. Object has a constructor, which means it's expected to
> -         *    retain its content until the next allocation, or
> -         * 3. Object is too small and SLUB DEBUG is enabled. Avoid
> -         *    free meta that exceeds the object size corrupts the
> -         *    SLUB DEBUG metadata.
> -         * Otherwise cache->kasan_info.free_meta_offset = 0 is implied.
> -         * If the object is smaller than the free meta and SLUB DEBUG
> -         * is not enabled, it is still possible to store part of the
> -         * free meta in the object.
> +         *    retain its content until the next allocation.
>           */
>          if ((cache->flags & SLAB_TYPESAFE_BY_RCU) || cache->ctor) {
>                  cache->kasan_info.free_meta_offset = *size;
>                  *size += sizeof(struct kasan_free_meta);
> -        } else if (cache->object_size < sizeof(struct kasan_free_meta)) {
> -                if (__slub_debug_enabled()) {
> -                        cache->kasan_info.free_meta_offset = *size;
> -                        *size += sizeof(struct kasan_free_meta);
> -                } else {
> -                        rem_free_meta_size = sizeof(struct kasan_free_meta) -
> -                                        cache->object_size;
> -                        *size += rem_free_meta_size;
> -                        if (cache->kasan_info.alloc_meta_offset != 0)
> -                                cache->kasan_info.alloc_meta_offset += rem_free_meta_size;
> -                }
> +                goto free_meta_added;
> +        }
> +
> +        /*
> +         * Otherwise, if the object is large enough to contain free meta,
> +         * store it within the object.
> +         */
> +        if (sizeof(struct kasan_free_meta) <= cache->object_size) {
> +                /* cache->kasan_info.free_meta_offset = 0 is implied. */
> +                goto free_meta_added;
>          }
>
> +        /*
> +         * For smaller objects, store the beginning of free meta within the
> +         * object and the end in the redzone. And thus shift the location of
> +         * alloc meta to free up space for free meta.
> +         * This is only possible when slub_debug is disabled, as otherwise
> +         * the end of free meta will overlap with slub_debug metadata.
> +         */
> +        if (!__slub_debug_enabled()) {
> +                rem_free_meta_size = sizeof(struct kasan_free_meta) -
> +                                cache->object_size;
> +                *size += rem_free_meta_size;
> +                if (cache->kasan_info.alloc_meta_offset != 0)
> +                        cache->kasan_info.alloc_meta_offset += rem_free_meta_size;
> +                goto free_meta_added;
> +        }
> +
> +        /*
> +         * If the object is small and slub_debug is enabled, store free meta
> +         * in the redzone after alloc meta.
> +         */
> +        cache->kasan_info.free_meta_offset = *size;
> +        *size += sizeof(struct kasan_free_meta);
> +
> +free_meta_added:
>          /* If free meta doesn't fit, don't add it. */
>          if (*size > KMALLOC_MAX_SIZE) {
>                  cache->kasan_info.free_meta_offset = KASAN_NO_FREE_META;
> @@ -440,7 +451,7 @@ void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
>
>          /* Calculate size with optimal redzone. */
>          optimal_size = cache->object_size + optimal_redzone(cache->object_size);
> -        /* Limit it with KMALLOC_MAX_SIZE (relevant for SLAB only). */
> +        /* Limit it with KMALLOC_MAX_SIZE. */
>          if (optimal_size > KMALLOC_MAX_SIZE)
>                  optimal_size = KMALLOC_MAX_SIZE;
>          /* Use optimal size if the size with added metas is not large enough. */
> --
> 2.25.1
>
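For readers who prefer the resulting placement logic at a glance rather than as a diff, below is a condensed, self-contained C sketch of the four free-meta cases the patch lays out. The struct and field names (cache_info, rcu_or_ctor, slub_debug) are stand-ins so the sketch compiles outside the kernel tree, not the real kasan_info layout; the actual code is kasan_cache_create() in mm/kasan/generic.c.

/*
 * Sketch of free meta placement after this patch; types are stand-ins.
 */
#include <stdbool.h>
#include <stddef.h>

struct kasan_free_meta { void *links[2]; };  /* stand-in for the real struct */

struct cache_info {
        bool rcu_or_ctor;          /* SLAB_TYPESAFE_BY_RCU || cache->ctor */
        bool slub_debug;           /* __slub_debug_enabled() */
        size_t object_size;
        size_t alloc_meta_offset;  /* already set by the alloc meta step */
        size_t free_meta_offset;   /* 0 means "stored inside the object" */
};

static void place_free_meta(struct cache_info *c, size_t *size)
{
        /* 1. RCU-typesafe or constructed caches: free meta goes into the redzone. */
        if (c->rcu_or_ctor) {
                c->free_meta_offset = *size;
                *size += sizeof(struct kasan_free_meta);
                return;
        }

        /* 2. Object large enough: keep free meta inside the object (offset 0). */
        if (sizeof(struct kasan_free_meta) <= c->object_size)
                return;

        /* 3. Small object, slub_debug off: split free meta between object and redzone. */
        if (!c->slub_debug) {
                size_t rem = sizeof(struct kasan_free_meta) - c->object_size;
                *size += rem;
                if (c->alloc_meta_offset != 0)
                        c->alloc_meta_offset += rem;  /* shift alloc meta past the spill-over */
                return;
        }

        /* 4. Small object with slub_debug: free meta goes fully into the redzone. */
        c->free_meta_offset = *size;
        *size += sizeof(struct kasan_free_meta);
}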