Date: Tue, 18 Oct 2022 13:19:29 -0700
In-Reply-To: <20221018090205.never.090-kees@kernel.org>
References: <20221018090205.never.090-kees@kernel.org>
Subject: Re: [PATCH] bpf, test_run: Track allocation size of data
From: sdf@google.com
To: Kees Cook
Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
    Martin KaFai Lau, Song Liu, Yonghong Song, John Fastabend,
    KP Singh, Hao Luo, Jiri Olsa, "David S. Miller",
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Jesper Dangaard Brouer , bpf@vger.kernel.org, netdev@vger.kernel.org, linux-kernel@vger.kernel.org, linux-hardening@vger.kernel.org Content-Type: text/plain; charset="UTF-8"; format=flowed; delsp=yes X-Spam-Status: No, score=-9.6 required=5.0 tests=BAYES_00,DKIMWL_WL_MED, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_NONE, SPF_HELO_NONE,SPF_PASS,USER_IN_DEF_DKIM_WL autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-kernel@vger.kernel.org On 10/18, Kees Cook wrote: > In preparation for requiring that build_skb() have a non-zero size > argument, track the data allocation size explicitly and pass it into > build_skb(). To retain the original result of using the ksize() > side-effect on the skb size, explicitly round up the size during > allocation. Can you share more on the end goal? Is the plan to remove ksize(data) from build_skb and pass it via size argument? > Cc: Alexei Starovoitov > Cc: Daniel Borkmann > Cc: Andrii Nakryiko > Cc: Martin KaFai Lau > Cc: Song Liu > Cc: Yonghong Song > Cc: John Fastabend > Cc: KP Singh > Cc: Stanislav Fomichev > Cc: Hao Luo > Cc: Jiri Olsa > Cc: "David S. Miller" > Cc: Eric Dumazet > Cc: Jakub Kicinski > Cc: Paolo Abeni > Cc: Jesper Dangaard Brouer > Cc: bpf@vger.kernel.org > Cc: netdev@vger.kernel.org > Signed-off-by: Kees Cook > --- > net/bpf/test_run.c | 84 +++++++++++++++++++++++++--------------------- > 1 file changed, 46 insertions(+), 38 deletions(-) > diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c > index 13d578ce2a09..299ff102f516 100644 > --- a/net/bpf/test_run.c > +++ b/net/bpf/test_run.c > @@ -762,28 +762,38 @@ BTF_ID_FLAGS(func, bpf_kfunc_call_test_ref, > KF_TRUSTED_ARGS) > BTF_ID_FLAGS(func, bpf_kfunc_call_test_destructive, KF_DESTRUCTIVE) > BTF_SET8_END(test_sk_check_kfunc_ids) > -static void *bpf_test_init(const union bpf_attr *kattr, u32 user_size, > - u32 size, u32 headroom, u32 tailroom) > +struct bpfalloc { > + size_t len; > + void *data; > +}; > + > +static int bpf_test_init(struct bpfalloc *alloc, > + const union bpf_attr *kattr, u32 user_size, > + u32 size, u32 headroom, u32 tailroom) > { > void __user *data_in = u64_to_user_ptr(kattr->test.data_in); > - void *data; > if (size < ETH_HLEN || size > PAGE_SIZE - headroom - tailroom) > - return ERR_PTR(-EINVAL); > + return -EINVAL; > if (user_size > size) > - return ERR_PTR(-EMSGSIZE); > + return -EMSGSIZE; > - data = kzalloc(size + headroom + tailroom, GFP_USER); > - if (!data) > - return ERR_PTR(-ENOMEM); [..] > + alloc->len = kmalloc_size_roundup(size + headroom + tailroom); > + alloc->data = kzalloc(alloc->len, GFP_USER); I still probably miss something. Here, why do we need to do kmalloc_size_roundup+kzalloc vs doing kzalloc+ksize? 
> +        if (!alloc->data) {
> +                alloc->len = 0;
> +                return -ENOMEM;
> +        }
>
> -        if (copy_from_user(data + headroom, data_in, user_size)) {
> -                kfree(data);
> -                return ERR_PTR(-EFAULT);
> +        if (copy_from_user(alloc->data + headroom, data_in, user_size)) {
> +                kfree(alloc->data);
> +                alloc->data = NULL;
> +                alloc->len = 0;
> +                return -EFAULT;
>          }
>
> -        return data;
> +        return 0;
>  }
>
>  int bpf_prog_test_run_tracing(struct bpf_prog *prog,
> @@ -1086,25 +1096,25 @@ int bpf_prog_test_run_skb(struct bpf_prog *prog, const union bpf_attr *kattr,
>          u32 size = kattr->test.data_size_in;
>          u32 repeat = kattr->test.repeat;
>          struct __sk_buff *ctx = NULL;
> +        struct bpfalloc alloc = { };
>          u32 retval, duration;
>          int hh_len = ETH_HLEN;
>          struct sk_buff *skb;
>          struct sock *sk;
> -        void *data;
>          int ret;
>
>          if (kattr->test.flags || kattr->test.cpu || kattr->test.batch_size)
>                  return -EINVAL;
>
> -        data = bpf_test_init(kattr, kattr->test.data_size_in,
> -                             size, NET_SKB_PAD + NET_IP_ALIGN,
> -                             SKB_DATA_ALIGN(sizeof(struct skb_shared_info)));
> -        if (IS_ERR(data))
> -                return PTR_ERR(data);
> +        ret = bpf_test_init(&alloc, kattr, kattr->test.data_size_in,
> +                            size, NET_SKB_PAD + NET_IP_ALIGN,
> +                            SKB_DATA_ALIGN(sizeof(struct skb_shared_info)));
> +        if (ret)
> +                return ret;
>
>          ctx = bpf_ctx_init(kattr, sizeof(struct __sk_buff));
>          if (IS_ERR(ctx)) {
> -                kfree(data);
> +                kfree(alloc.data);
>                  return PTR_ERR(ctx);
>          }
> @@ -1124,15 +1134,15 @@ int bpf_prog_test_run_skb(struct bpf_prog *prog, const union bpf_attr *kattr,
>          sk = sk_alloc(net, AF_UNSPEC, GFP_USER, &bpf_dummy_proto, 1);
>          if (!sk) {
> -                kfree(data);
> +                kfree(alloc.data);
>                  kfree(ctx);
>                  return -ENOMEM;
>          }
>          sock_init_data(NULL, sk);
>
> -        skb = build_skb(data, 0);
> +        skb = build_skb(alloc.data, alloc.len);
>          if (!skb) {
> -                kfree(data);
> +                kfree(alloc.data);
>                  kfree(ctx);
>                  sk_free(sk);
>                  return -ENOMEM;
> @@ -1283,10 +1293,10 @@ int bpf_prog_test_run_xdp(struct bpf_prog *prog, const union bpf_attr *kattr,
>          u32 repeat = kattr->test.repeat;
>          struct netdev_rx_queue *rxqueue;
>          struct skb_shared_info *sinfo;
> +        struct bpfalloc alloc = {};
>          struct xdp_buff xdp = {};
>          int i, ret = -EINVAL;
>          struct xdp_md *ctx;
> -        void *data;
>
>          if (prog->expected_attach_type == BPF_XDP_DEVMAP ||
>              prog->expected_attach_type == BPF_XDP_CPUMAP)
> @@ -1329,16 +1339,14 @@ int bpf_prog_test_run_xdp(struct bpf_prog *prog, const union bpf_attr *kattr,
>                  size = max_data_sz;
>          }
>
> -        data = bpf_test_init(kattr, size, max_data_sz, headroom, tailroom);
> -        if (IS_ERR(data)) {
> -                ret = PTR_ERR(data);
> +        ret = bpf_test_init(&alloc, kattr, size, max_data_sz, headroom, tailroom);
> +        if (ret)
>                  goto free_ctx;
> -        }
>
>          rxqueue = __netif_get_rx_queue(current->nsproxy->net_ns->loopback_dev, 0);
>          rxqueue->xdp_rxq.frag_size = headroom + max_data_sz + tailroom;
>          xdp_init_buff(&xdp, rxqueue->xdp_rxq.frag_size, &rxqueue->xdp_rxq);
> -        xdp_prepare_buff(&xdp, data, headroom, size, true);
> +        xdp_prepare_buff(&xdp, alloc.data, headroom, size, true);
>          sinfo = xdp_get_shared_info_from_buff(&xdp);
>
>          ret = xdp_convert_md_to_buff(ctx, &xdp);
> @@ -1410,7 +1418,7 @@ int bpf_prog_test_run_xdp(struct bpf_prog *prog, const union bpf_attr *kattr,
>  free_data:
>          for (i = 0; i < sinfo->nr_frags; i++)
>                  __free_page(skb_frag_page(&sinfo->frags[i]));
> -        kfree(data);
> +        kfree(alloc.data);
>  free_ctx:
>          kfree(ctx);
>          return ret;
> @@ -1441,10 +1449,10 @@ int bpf_prog_test_run_flow_dissector(struct bpf_prog *prog,
>          u32 repeat = kattr->test.repeat;
>          struct bpf_flow_keys *user_ctx;
>          struct bpf_flow_keys flow_keys;
> +        struct bpfalloc alloc = {};
>          const struct ethhdr *eth;
>          unsigned int flags = 0;
>          u32 retval, duration;
> -        void *data;
>          int ret;
>
>          if (kattr->test.flags || kattr->test.cpu || kattr->test.batch_size)
> @@ -1453,18 +1461,18 @@ int bpf_prog_test_run_flow_dissector(struct bpf_prog *prog,
>          if (size < ETH_HLEN)
>                  return -EINVAL;
>
> -        data = bpf_test_init(kattr, kattr->test.data_size_in, size, 0, 0);
> -        if (IS_ERR(data))
> -                return PTR_ERR(data);
> +        ret = bpf_test_init(&alloc, kattr, kattr->test.data_size_in, size, 0, 0);
> +        if (ret)
> +                return ret;
>
> -        eth = (struct ethhdr *)data;
> +        eth = (struct ethhdr *)alloc.data;
>
>          if (!repeat)
>                  repeat = 1;
>
>          user_ctx = bpf_ctx_init(kattr, sizeof(struct bpf_flow_keys));
>          if (IS_ERR(user_ctx)) {
> -                kfree(data);
> +                kfree(alloc.data);
>                  return PTR_ERR(user_ctx);
>          }
>          if (user_ctx) {
> @@ -1475,8 +1483,8 @@ int bpf_prog_test_run_flow_dissector(struct bpf_prog *prog,
>          }
>
>          ctx.flow_keys = &flow_keys;
> -        ctx.data = data;
> -        ctx.data_end = (__u8 *)data + size;
> +        ctx.data = alloc.data;
> +        ctx.data_end = (__u8 *)alloc.data + size;
>
>          bpf_test_timer_enter(&t);
>          do {
> @@ -1496,7 +1504,7 @@ int bpf_prog_test_run_flow_dissector(struct bpf_prog *prog,
>  out:
>          kfree(user_ctx);
> -        kfree(data);
> +        kfree(alloc.data);
>          return ret;
>  }
> --
> 2.34.1