Subject: Re: [PATCH v3 bpf-next 04/14] bpf: allocate cgroup storage entries on attaching bpf programs
To: Roman Gushchin <guro@fb.com>, netdev@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, kernel-team@fb.com, Alexei Starovoitov
References: <20180720174558.5829-1-guro@fb.com> <20180720174558.5829-5-guro@fb.com>
From: Daniel Borkmann <daniel@iogearbox.net>
Message-ID: <60f1e04e-eff9-543f-9f0e-b0cd50bbe71e@iogearbox.net>
Date: Fri, 27 Jul 2018 06:21:38 +0200
In-Reply-To: <20180720174558.5829-5-guro@fb.com>

On 07/20/2018 07:45 PM, Roman Gushchin wrote:
> If a bpf program is using cgroup local storage, allocate
> a bpf_cgroup_storage structure automatically on attaching the program
> to a cgroup and save the pointer into the corresponding bpf_prog_list
> entry.
> Analogously, release the cgroup local storage when the bpf program
> is detached.
>
> Signed-off-by: Roman Gushchin <guro@fb.com>
> Cc: Alexei Starovoitov
> Cc: Daniel Borkmann
> Acked-by: Martin KaFai Lau
> ---
>  include/linux/bpf-cgroup.h |  1 +
>  kernel/bpf/cgroup.c        | 28 ++++++++++++++++++++++++++--
>  2 files changed, 27 insertions(+), 2 deletions(-)
>
> diff --git a/include/linux/bpf-cgroup.h b/include/linux/bpf-cgroup.h
> index 1b1b4e94d77d..f37347331fdb 100644
> --- a/include/linux/bpf-cgroup.h
> +++ b/include/linux/bpf-cgroup.h
> @@ -42,6 +42,7 @@ struct bpf_cgroup_storage {
>  struct bpf_prog_list {
>          struct list_head node;
>          struct bpf_prog *prog;
> +        struct bpf_cgroup_storage *storage;
>  };
>
>  struct bpf_prog_array;
> diff --git a/kernel/bpf/cgroup.c b/kernel/bpf/cgroup.c
> index badabb0b435c..986ff18ef92e 100644
> --- a/kernel/bpf/cgroup.c
> +++ b/kernel/bpf/cgroup.c
> @@ -34,6 +34,8 @@ void cgroup_bpf_put(struct cgroup *cgrp)
>          list_for_each_entry_safe(pl, tmp, progs, node) {
>                  list_del(&pl->node);
>                  bpf_prog_put(pl->prog);
> +                bpf_cgroup_storage_unlink(pl->storage);
> +                bpf_cgroup_storage_free(pl->storage);
>                  kfree(pl);
>                  static_branch_dec(&cgroup_bpf_enabled_key);
>          }
> @@ -188,6 +190,7 @@ int __cgroup_bpf_attach(struct cgroup *cgrp, struct bpf_prog *prog,
>  {
>          struct list_head *progs = &cgrp->bpf.progs[type];
>          struct bpf_prog *old_prog = NULL;
> +        struct bpf_cgroup_storage *storage, *old_storage = NULL;
>          struct cgroup_subsys_state *css;
>          struct bpf_prog_list *pl;
>          bool pl_was_allocated;
> @@ -210,6 +213,10 @@ int __cgroup_bpf_attach(struct cgroup *cgrp, struct bpf_prog *prog,
>          if (prog_list_length(progs) >= BPF_CGROUP_MAX_PROGS)
>                  return -E2BIG;
>
> +        storage = bpf_cgroup_storage_alloc(prog);
> +        if (IS_ERR(storage))
> +                return -ENOMEM;
> +
>          if (flags & BPF_F_ALLOW_MULTI) {
>                  list_for_each_entry(pl, progs, node)
>                          if (pl->prog == prog)
> @@ -217,24 +224,33 @@ int __cgroup_bpf_attach(struct cgroup *cgrp, struct bpf_prog *prog,
>                                  return -EINVAL;
>
>                  pl = kmalloc(sizeof(*pl), GFP_KERNEL);
> -                if (!pl)
> +                if (!pl) {
> +                        bpf_cgroup_storage_free(storage);
>                          return -ENOMEM;
> +                }

Above code is:

        storage = bpf_cgroup_storage_alloc(prog);
        if (IS_ERR(storage))
                return -ENOMEM;

        if (flags & BPF_F_ALLOW_MULTI) {
                list_for_each_entry(pl, progs, node)
                        if (pl->prog == prog)
                                /* disallow attaching the same prog twice */
                                return -EINVAL;

                pl = kmalloc(sizeof(*pl), GFP_KERNEL);
                if (!pl) {
                        bpf_cgroup_storage_free(storage);
                        return -ENOMEM;
                }

Given bpf_cgroup_storage_alloc() only charges the mem and prepares (kmallocs)
the local storage buffer, we would leak it above where we attach the same
prog twice, no?
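
If so, one way out would be to release the storage on that error path (or,
alternatively, to move the duplicate check in front of the allocation).
Rough sketch of the former, untested:

        if (flags & BPF_F_ALLOW_MULTI) {
                list_for_each_entry(pl, progs, node) {
                        if (pl->prog == prog) {
                                /* same prog is already attached: drop the
                                 * storage preallocated above so this error
                                 * path does not leak it
                                 */
                                bpf_cgroup_storage_free(storage);
                                return -EINVAL;
                        }
                }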
> +
>                  pl_was_allocated = true;
>                  pl->prog = prog;
> +                pl->storage = storage;
>                  list_add_tail(&pl->node, progs);
>          } else {
>                  if (list_empty(progs)) {
>                          pl = kmalloc(sizeof(*pl), GFP_KERNEL);
> -                        if (!pl)
> +                        if (!pl) {
> +                                bpf_cgroup_storage_free(storage);
>                                  return -ENOMEM;
> +                        }
>                          pl_was_allocated = true;
>                          list_add_tail(&pl->node, progs);
>                  } else {
>                          pl = list_first_entry(progs, typeof(*pl), node);
>                          old_prog = pl->prog;
> +                        old_storage = pl->storage;
> +                        bpf_cgroup_storage_unlink(old_storage);
>                          pl_was_allocated = false;
>                  }
>                  pl->prog = prog;
> +                pl->storage = storage;
>          }
>
>          cgrp->bpf.flags[type] = flags;
> @@ -257,10 +273,13 @@ int __cgroup_bpf_attach(struct cgroup *cgrp, struct bpf_prog *prog,
>          }
>
>          static_branch_inc(&cgroup_bpf_enabled_key);
> +        if (old_storage)
> +                bpf_cgroup_storage_free(old_storage);
>          if (old_prog) {
>                  bpf_prog_put(old_prog);
>                  static_branch_dec(&cgroup_bpf_enabled_key);
>          }
> +        bpf_cgroup_storage_link(storage, cgrp, type);
>          return 0;
>
> cleanup:
> @@ -276,6 +295,9 @@ int __cgroup_bpf_attach(struct cgroup *cgrp, struct bpf_prog *prog,
>
>          /* and cleanup the prog list */
>          pl->prog = old_prog;
> +        bpf_cgroup_storage_free(pl->storage);
> +        pl->storage = old_storage;
> +        bpf_cgroup_storage_link(old_storage, cgrp, type);
>          if (pl_was_allocated) {
>                  list_del(&pl->node);
>                  kfree(pl);
> @@ -356,6 +378,8 @@ int __cgroup_bpf_detach(struct cgroup *cgrp, struct bpf_prog *prog,
>
>          /* now can actually delete it from this cgroup list */
>          list_del(&pl->node);
> +        bpf_cgroup_storage_unlink(pl->storage);
> +        bpf_cgroup_storage_free(pl->storage);
>          kfree(pl);
>          if (list_empty(progs))
>                  /* last program was detached, reset flags to zero */
>
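
For reference, the lifecycle this patch establishes for the per-attachment
storage, as I read it (assuming the helpers do what their names suggest:
alloc charges mem and kmallocs the buffer, link/unlink publish and unpublish
it, free uncharges and kfrees), pairs up as:

        /* attach, success path */
        storage = bpf_cgroup_storage_alloc(prog);
        pl->storage = storage;
        bpf_cgroup_storage_link(storage, cgrp, type);

        /* detach, or cgroup_bpf_put() */
        bpf_cgroup_storage_unlink(pl->storage);
        bpf_cgroup_storage_free(pl->storage);

So every successful alloc needs exactly one matching free on each exit path
out of __cgroup_bpf_attach(), which is what the -EINVAL path above is
missing.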