Message-ID: <0ea1fc165a6c6117f982f4f135093e69cb884930.camel@redhat.com>
Subject: Re: [PATCH v3][next] skbuff: Proactively round up to kmalloc bucket size
From: Paolo Abeni
To: Kees Cook, "David S. Miller"
Cc: Eric Dumazet, Jakub Kicinski, netdev@vger.kernel.org,
    Greg Kroah-Hartman, Nick Desaulniers, David Rientjes,
    Vlastimil Babka, Pavel Begunkov, Menglong Dong,
    linux-kernel@vger.kernel.org, linux-hardening@vger.kernel.org
Date: Thu, 20 Oct 2022 10:42:47 +0200
In-Reply-To: <20221018093005.give.246-kees@kernel.org>
References: <20221018093005.give.246-kees@kernel.org>

Hello,

On Tue, 2022-10-18 at 02:33 -0700, Kees Cook wrote:
> Instead of discovering the kmalloc bucket size _after_ allocation, round
> up proactively so the allocation is explicitly made for the full size,
> allowing the compiler to correctly reason about the resulting size of
> the buffer through the existing __alloc_size() hint.
>
> This will allow for kernels built with CONFIG_UBSAN_BOUNDS or the
> coming dynamic bounds checking under CONFIG_FORTIFY_SOURCE to gain
> back the __alloc_size() hints that were temporarily reverted in commit
> 93dd04ab0b2b ("slab: remove __alloc_size attribute from __kmalloc_track_caller")
>
> Cc: "David S. Miller"
> Cc: Eric Dumazet
> Cc: Jakub Kicinski
> Cc: Paolo Abeni
> Cc: netdev@vger.kernel.org
> Cc: Greg Kroah-Hartman
> Cc: Nick Desaulniers
> Cc: David Rientjes
> Cc: Vlastimil Babka
> Signed-off-by: Kees Cook
> ---
> v3: refactor again to pass allocation size more cleanly to callers
> v2: https://lore.kernel.org/lkml/20220923202822.2667581-4-keescook@chromium.org/
> ---
>  net/core/skbuff.c | 41 ++++++++++++++++++++++-------------------
>  1 file changed, 22 insertions(+), 19 deletions(-)
>
> diff --git a/net/core/skbuff.c b/net/core/skbuff.c
> index 1d9719e72f9d..3ea1032d03ec 100644
> --- a/net/core/skbuff.c
> +++ b/net/core/skbuff.c
> @@ -425,11 +425,12 @@ EXPORT_SYMBOL(napi_build_skb);
>   *	memory is free
>   */
>  static void *kmalloc_reserve(size_t size, gfp_t flags, int node,
> -			     bool *pfmemalloc)
> +			     bool *pfmemalloc, size_t *alloc_size)
>  {
>  	void *obj;
>  	bool ret_pfmemalloc = false;
>
> +	size = kmalloc_size_roundup(size);
>  	/*
>  	 * Try a regular allocation, when that fails and we're not entitled
>  	 * to the reserves, fail.
> @@ -448,6 +449,7 @@ static void *kmalloc_reserve(size_t size, gfp_t flags, int node,
>  	if (pfmemalloc)
>  		*pfmemalloc = ret_pfmemalloc;
>
> +	*alloc_size = size;
>  	return obj;
>  }
>
> @@ -479,7 +481,7 @@ struct sk_buff *__alloc_skb(unsigned int size, gfp_t gfp_mask,
>  {
>  	struct kmem_cache *cache;
>  	struct sk_buff *skb;
> -	unsigned int osize;
> +	size_t alloc_size;
>  	bool pfmemalloc;
>  	u8 *data;
>
> @@ -506,15 +508,15 @@ struct sk_buff *__alloc_skb(unsigned int size, gfp_t gfp_mask,
>  	 */
>  	size = SKB_DATA_ALIGN(size);
>  	size += SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
> -	data = kmalloc_reserve(size, gfp_mask, node, &pfmemalloc);
> -	if (unlikely(!data))
> -		goto nodata;

I'm sorry for not noticing the above in the previous iteration, but I
think this revision will produce worse code than v1, as
kmalloc_reserve() now pollutes an additional register.

Why did you prefer adding an extra parameter to kmalloc_reserve()? I
think computing alloc_size in the caller is even more readable.

Additionally, as a matter of personal preference, I would not
introduce an additional variable for alloc_size, just:

	// ...
	size = kmalloc_size_roundup(size);
	data = kmalloc_reserve(size, gfp_mask, node, &pfmemalloc);

The rationale is a smaller diff and a style consistent with the
existing code, where 'size' is already adjusted incrementally several
times.

Cheers,

Paolo
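P.S. For illustration only, a rough sketch of the caller-side variant
suggested above (not a tested patch; just the relevant lines of
__alloc_skb() from the quoted hunk, with the later users of the
rounded-up size omitted):

	size = SKB_DATA_ALIGN(size);
	size += SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
	/* Round up in the caller: kmalloc_reserve() keeps its current
	 * signature and 'size' already holds the full kmalloc bucket
	 * size for the rest of the function.
	 */
	size = kmalloc_size_roundup(size);
	data = kmalloc_reserve(size, gfp_mask, node, &pfmemalloc);
	if (unlikely(!data))
		goto nodata;

The later uses of the allocation size (osize/alloc_size in the patch)
would then presumably keep using 'size' directly.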