From: Alban Crequy
Date: Mon, 21 May 2018 15:52:17 +0200
Subject: Re: [PATCH] [RFC] bpf: tracing: new helper bpf_get_current_cgroup_ino
To: Y Song
Cc: Alban Crequy, netdev, LKML, Linux Containers, cgroups@vger.kernel.org
References: <20180513173318.21680-1-alban@kinvolk.io>

On Mon, May 14, 2018 at 9:38 PM, Y Song wrote:
>
> On Sun, May 13, 2018 at 10:33 AM, Alban Crequy wrote:
> > From: Alban Crequy
> >
> > bpf_get_current_cgroup_ino() allows BPF trace programs to get the inode
> > of the cgroup where the current process resides.
> >
> > My use case is to get statistics about syscalls done by a specific
> > Kubernetes container. I have a tracepoint on raw_syscalls/sys_enter and
> > a BPF map containing the cgroup inode that I want to trace. I use
> > bpf_get_current_cgroup_ino() and I quickly return from the tracepoint if
> > the inode is not in the BPF hash map.
>
> Alternatively, the kernel already has the bpf_current_task_under_cgroup
> helper, which uses a cgroup array to store cgroup fds. If the current task
> is in the hierarchy of a particular cgroup, the helper will return true.
>
> One difference between your helper and bpf_current_task_under_cgroup() is
> that your helper tests against a particular cgroup, not including its
> children, but bpf_current_task_under_cgroup() will return true even when
> the task is in a nested cgroup.
>
> Maybe this will work for you?

I like the behaviour that it checks for children cgroups. But with the
cgroup array, I can test only one cgroup at a time. I would like to be
able to enable my tracer for a few Kubernetes containers, or for all of
them, by adding the inodes of a few cgroups to a hash map, so that I
could keep separate stats for each. With bpf_current_task_under_cgroup(),
I would need to iterate over the list of cgroups, which is difficult in
BPF.

Also, Kubernetes is cgroup-v1 only, whereas bpf_current_task_under_cgroup()
is cgroup-v2 only: in Kubernetes, the processes remain in the root of the
v2 hierarchy. I'd like to be able to select the cgroup hierarchy in my
helper so it would work for both v1 and v2.

> > Without this BPF helper, I would need to keep track of all pids in the
> > container. The Netlink proc connector can be used to follow process
> > creation and destruction, but it is racy.
> >
> > This patch only looks at the memory cgroup, which was enough for me
> > since each Kubernetes container is placed in a different mem cgroup.
> > For a generic implementation, I'm not sure how to proceed: it seems I
> > would need to use 'for_each_root(root)' (see the example in
> > proc_cgroup_show() from kernel/cgroup/cgroup.c) but I don't know if
> > taking the cgroup mutex is possible in the BPF helper function. It
> > might be ok in the tracepoint raw_syscalls/sys_enter, but could the
> > mutex already be taken in some other tracepoints?
>
> A mutex is not allowed in a helper, since it can block.

Ok. I don't know how to implement my helper properly then. Maybe I could
just use the first cgroup-v1 hierarchy (the name=systemd one) so I don't
have to iterate over the hierarchies. But would that be acceptable?
Cheers,
Alban

> > Signed-off-by: Alban Crequy
> > ---
> >  include/uapi/linux/bpf.h | 11 ++++++++++-
> >  kernel/trace/bpf_trace.c | 25 +++++++++++++++++++++++++
> >  2 files changed, 35 insertions(+), 1 deletion(-)
> >
> > diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
> > index c5ec89732a8d..38ac3959cdf3 100644
> > --- a/include/uapi/linux/bpf.h
> > +++ b/include/uapi/linux/bpf.h
> > @@ -755,6 +755,14 @@ union bpf_attr {
> >   *     @addr: pointer to struct sockaddr to bind socket to
> >   *     @addr_len: length of sockaddr structure
> >   *     Return: 0 on success or negative error code
> > + *
> > + * u64 bpf_get_current_cgroup_ino(hierarchy, flags)
> > + *     Get the cgroup{1,2} inode of current task under the specified hierarchy.
> > + *     @hierarchy: cgroup hierarchy
>
> Not sure what is the value to specify hierarchy here.
> A cgroup directory fd?
>
> > + *     @flags: reserved for future use
> > + *     Return:
> > + *     == 0 error
>
> Looks like < 0 means error.
>
> > + *     > 0 inode of the cgroup
>
> >= 0 means good?
> > + */
> >  #define __BPF_FUNC_MAPPER(FN)          \
> >         FN(unspec),                     \
> > @@ -821,7 +829,8 @@ union bpf_attr {
> >         FN(msg_apply_bytes),            \
> >         FN(msg_cork_bytes),             \
> >         FN(msg_pull_data),              \
> > -       FN(bind),
> > +       FN(bind),                       \
> > +       FN(get_current_cgroup_ino),
> >
> >  /* integer value in 'imm' field of BPF_CALL instruction selects which helper
> >   * function eBPF program intends to call
> >
> > diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
> > index 56ba0f2a01db..9bf92a786639 100644
> > --- a/kernel/trace/bpf_trace.c
> > +++ b/kernel/trace/bpf_trace.c
> > @@ -524,6 +524,29 @@ static const struct bpf_func_proto bpf_probe_read_str_proto = {
> >         .arg3_type      = ARG_ANYTHING,
> >  };
> >
> > +BPF_CALL_2(bpf_get_current_cgroup_ino, u32, hierarchy, u64, flags)
> > +{
> > +       // TODO: pick the correct hierarchy instead of the mem controller
> > +       struct cgroup *cgrp = task_cgroup(current, memory_cgrp_id);
> > +
> > +       if (unlikely(!cgrp))
> > +               return -EINVAL;
> > +       if (unlikely(hierarchy))
> > +               return -EINVAL;
> > +       if (unlikely(flags))
> > +               return -EINVAL;
> > +
> > +       return cgrp->kn->id.ino;
> > +}
> > +
> > +static const struct bpf_func_proto bpf_get_current_cgroup_ino_proto = {
> > +       .func           = bpf_get_current_cgroup_ino,
> > +       .gpl_only       = false,
> > +       .ret_type       = RET_INTEGER,
> > +       .arg1_type      = ARG_DONTCARE,
> > +       .arg2_type      = ARG_DONTCARE,
> > +};
> > +
> >  static const struct bpf_func_proto *
> >  tracing_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
> >  {
> > @@ -564,6 +587,8 @@ tracing_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
> >                 return &bpf_get_prandom_u32_proto;
> >         case BPF_FUNC_probe_read_str:
> >                 return &bpf_probe_read_str_proto;
> > +       case BPF_FUNC_get_current_cgroup_ino:
> > +               return &bpf_get_current_cgroup_ino_proto;
> >         default:
> >                 return NULL;
> >         }
> > --
> > 2.14.3
> >
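For reference, the BPF-side usage described at the top of the thread could be sketched roughly as below. This is hypothetical: the helper only exists with this RFC applied, so the fragment does not compile against a mainline kernel, and the map name, section name and counter semantics are illustrative, not taken from the patch.

```c
/* Sketch of a tracer using the proposed helper (requires this RFC).
 * Userspace pre-populates 'traced_cgroups' with the inodes of the
 * container cgroups to trace; all other tasks return early. */
#include <linux/bpf.h>
#include "bpf_helpers.h"

struct bpf_map_def SEC("maps") traced_cgroups = {
	.type        = BPF_MAP_TYPE_HASH,
	.key_size    = sizeof(__u64),	/* cgroup inode */
	.value_size  = sizeof(__u64),	/* syscall count */
	.max_entries = 128,
};

SEC("tracepoint/raw_syscalls/sys_enter")
int trace_sys_enter(void *ctx)
{
	__u64 ino = bpf_get_current_cgroup_ino(0, 0);
	__u64 *count = bpf_map_lookup_elem(&traced_cgroups, &ino);

	if (!count)
		return 0;	/* not a cgroup we are tracing */
	__sync_fetch_and_add(count, 1);
	return 0;
}

char _license[] SEC("license") = "GPL";
```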