2021-05-12 10:00:29

by xufeng zhang

Subject: [RFC] [PATCH bpf-next 0/1] Implement getting cgroup path bpf helper

In order to protect running application containers with bpf LSM, we
need to upload security rules into bpf maps at container granularity.
However, there is no effective bpf helper to identify the container,
especially for cgroup v1. Generally, the only identifier available on
the user side is the container ID, and the cgroup path for a running
container is fixed once its container ID is known. Therefore, bpf
programs also need the cgroup path of the running task in order to
apply the security rules stored in bpf maps.

This patch adds a bpf helper, bpf_get_current_cpuset_cgroup_path(),
which returns the cpuset cgroup path of the current task. Since
cgroup_path_ns() can sleep, this helper is only allowed for sleepable
LSM hooks.
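
[Editor's note: a sketch of how a sleepable LSM program might consume
the proposed helper. Only the helper's signature comes from the RFC;
its declaration and id, the map, the policy bit, and the hooked
bprm_check_security function are assumptions for illustration.]

#include "vmlinux.h"
#include <errno.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

char LICENSE[] SEC("license") = "GPL";

/* The proposed helper is not in released headers; its declaration would
 * be generated from the patched tree. The id 166 is implied by its
 * position right after bpf_snprintf in the RFC's __BPF_FUNC_MAPPER
 * list; verify against the patched headers. */
static long (*bpf_get_current_cpuset_cgroup_path)(char *buf, __u32 buf_len) =
	(void *)166;

struct {
	__uint(type, BPF_MAP_TYPE_HASH);
	__uint(max_entries, 1024);
	__uint(key_size, 128);	/* zero-padded cpuset cgroup path */
	__type(value, __u32);	/* per-container policy flags */
} container_rules SEC(".maps");

SEC("lsm.s/bprm_check_security")	/* sleepable hook, as the helper may sleep */
int BPF_PROG(check_exec, struct linux_binprm *bprm)
{
	char path[128] = {};
	__u32 *policy;

	if (bpf_get_current_cpuset_cgroup_path(path, sizeof(path)) < 0)
		return 0;	/* fail open on lookup errors in this sketch */

	policy = bpf_map_lookup_elem(&container_rules, path);
	if (policy && (*policy & 0x1))	/* hypothetical "deny exec" bit */
		return -EPERM;
	return 0;
}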

Concern:
Since cgroup_path_ns() takes the 'cgroup_mutex' and 'css_set_lock'
locks, I'm not sure whether any code path in LSM hooks could deadlock.

Xufeng Zhang (1):
bpf: Add a BPF helper for getting the cgroup path of current task
---
include/uapi/linux/bpf.h | 13 +++++++++++++
kernel/bpf/bpf_lsm.c | 28 ++++++++++++++++++++++++++++
tools/include/uapi/linux/bpf.h | 13 +++++++++++++
3 files changed, 54 insertions(+)


2021-05-12 10:01:21

by xufeng zhang

Subject: [RFC] [PATCH bpf-next 1/1] bpf: Add a BPF helper for getting the cgroup path of current task

To implement security rules for application containers with bpf LSM,
the container to which the currently running task belongs needs to be
known in bpf context. Consider this scenario: kubernetes schedules a
pod onto a host; before the application container can run, the
security rules for this application need to be loaded into bpf maps
first, so that LSM bpf programs can make decisions based on these
rule maps.

However, there is no effective bpf helper to achieve this goal,
especially for cgroup v1. In the above case, the only information
available from the user side is the container ID, and the cgroup path
for this container is determined by that ID, so to bridge user space
and bpf programs, bpf programs also need to know the cgroup path of
the currently running task.

This change adds a new bpf helper, bpf_get_current_cpuset_cgroup_path().
Since cgroup_path_ns() can sleep, this helper is only allowed for
sleepable LSM hooks.

Signed-off-by: Xufeng Zhang <[email protected]>
---
include/uapi/linux/bpf.h | 13 +++++++++++++
kernel/bpf/bpf_lsm.c | 28 ++++++++++++++++++++++++++++
tools/include/uapi/linux/bpf.h | 13 +++++++++++++
3 files changed, 54 insertions(+)

diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index ec6d85a81744..e8295101b865 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -4735,6 +4735,18 @@ union bpf_attr {
* be zero-terminated except when **str_size** is 0.
*
* Or **-EBUSY** if the per-CPU memory copy buffer is busy.
+ *
+ * int bpf_get_current_cpuset_cgroup_path(char *buf, u32 buf_len)
+ * Description
+ * Get the cpuset cgroup path of the current task from kernel
+ * memory; this path can be used to identify the container in
+ * which the current task is running.
+ * The *buf* memory is pre-allocated, and *buf_len* indicates
+ * the size of this memory.
+ *
+ * Return
+ * The cpuset cgroup path is copied into *buf* on success,
+ * or a negative error code in case of failure.
*/
#define __BPF_FUNC_MAPPER(FN) \
FN(unspec), \
@@ -4903,6 +4915,7 @@ union bpf_attr {
FN(check_mtu), \
FN(for_each_map_elem), \
FN(snprintf), \
+ FN(get_current_cpuset_cgroup_path), \
/* */

/* integer value in 'imm' field of BPF_CALL instruction selects which helper
diff --git a/kernel/bpf/bpf_lsm.c b/kernel/bpf/bpf_lsm.c
index 5efb2b24012c..5e62e3875df1 100644
--- a/kernel/bpf/bpf_lsm.c
+++ b/kernel/bpf/bpf_lsm.c
@@ -99,6 +99,30 @@ static const struct bpf_func_proto bpf_ima_inode_hash_proto = {
.allowed = bpf_ima_inode_hash_allowed,
};

+#ifdef CONFIG_CGROUPS
+BPF_CALL_2(bpf_get_current_cpuset_cgroup_path, char *, buf, u32, buf_len)
+{
+ struct cgroup_subsys_state *css;
+ int retval;
+
+ css = task_get_css(current, cpuset_cgrp_id);
+ retval = cgroup_path_ns(css->cgroup, buf, buf_len, &init_cgroup_ns);
+ css_put(css);
+ if (retval >= buf_len)
+ retval = -ENAMETOOLONG;
+ return retval;
+}
+
+static const struct bpf_func_proto bpf_get_current_cpuset_cgroup_path_proto = {
+ .func = bpf_get_current_cpuset_cgroup_path,
+ .gpl_only = false,
+ .ret_type = RET_INTEGER,
+ .arg1_type = ARG_PTR_TO_UNINIT_MEM,
+ .arg2_type = ARG_CONST_SIZE,
+ .allowed = bpf_ima_inode_hash_allowed,
+};
+#endif
+
static const struct bpf_func_proto *
bpf_lsm_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
{
@@ -119,6 +143,10 @@ bpf_lsm_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
return &bpf_bprm_opts_set_proto;
case BPF_FUNC_ima_inode_hash:
return prog->aux->sleepable ? &bpf_ima_inode_hash_proto : NULL;
+#ifdef CONFIG_CGROUPS
+ case BPF_FUNC_get_current_cpuset_cgroup_path:
+ return prog->aux->sleepable ? &bpf_get_current_cpuset_cgroup_path_proto : NULL;
+#endif
default:
return tracing_prog_func_proto(func_id, prog);
}
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index ec6d85a81744..fe31252d92e3 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -4735,6 +4735,18 @@ union bpf_attr {
* be zero-terminated except when **str_size** is 0.
*
* Or **-EBUSY** if the per-CPU memory copy buffer is busy.
+ *
+ * int bpf_get_current_cpuset_cgroup_path(char *buf, u32 buf_len)
+ * Description
+ * Get the cpuset cgroup path of the current task from kernel
+ * memory; this path can be used to identify the container in
+ * which the current task is running.
+ * The *buf* memory is pre-allocated, and *buf_len* indicates
+ * the size of this memory.
+ *
+ * Return
+ * The cpuset cgroup path is copied into *buf* on success,
+ * or a negative error code in case of failure.
*/
#define __BPF_FUNC_MAPPER(FN) \
FN(unspec), \
@@ -4903,6 +4915,7 @@ union bpf_attr {
FN(check_mtu), \
FN(for_each_map_elem), \
FN(snprintf), \
+ FN(get_current_cpuset_cgroup_path), \
/* */

/* integer value in 'imm' field of BPF_CALL instruction selects which helper
--
2.20.1 (Apple Git-117)

2021-05-12 23:44:02

by Alexei Starovoitov

Subject: Re: [RFC] [PATCH bpf-next 1/1] bpf: Add a BPF helper for getting the cgroup path of current task

On Wed, May 12, 2021 at 05:58:23PM +0800, Xufeng Zhang wrote:
> To implement security rules for application containers with bpf LSM,
> the container to which the currently running task belongs needs to be
> known in bpf context. Consider this scenario: kubernetes schedules a
> pod onto a host; before the application container can run, the
> security rules for this application need to be loaded into bpf maps
> first, so that LSM bpf programs can make decisions based on these
> rule maps.
>
> However, there is no effective bpf helper to achieve this goal,
> especially for cgroup v1. In the above case, the only information
> available from the user side is the container ID, and the cgroup path
> for this container is determined by that ID, so to bridge user space
> and bpf programs, bpf programs also need to know the cgroup path of
> the currently running task.
...
> +#ifdef CONFIG_CGROUPS
> +BPF_CALL_2(bpf_get_current_cpuset_cgroup_path, char *, buf, u32, buf_len)
> +{
> + struct cgroup_subsys_state *css;
> + int retval;
> +
> + css = task_get_css(current, cpuset_cgrp_id);
> + retval = cgroup_path_ns(css->cgroup, buf, buf_len, &init_cgroup_ns);
> + css_put(css);
> + if (retval >= buf_len)
> + retval = -ENAMETOOLONG;

Manipulating a string path to check the hierarchy will be difficult to
do inside a bpf prog. It seems to me this helper will be useful only
for the simplest cgroup setups where there is no additional cgroup
nesting within containers.
Have you looked at the *ancestor_cgroup_id and *cgroup_id helpers?
They're a bit more flexible when dealing with hierarchy and
can be used to achieve the same correlation between kernel and user
cgroup ids.
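
[Editor's note: a sketch of the id-based approach Alexei suggests,
using the existing bpf_get_current_ancestor_cgroup_id() helper. It
applies to the cgroup v2 hierarchy only; the nesting level and map
layout are assumptions about the host's cgroup layout.]

#include "vmlinux.h"
#include <errno.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

char LICENSE[] SEC("license") = "GPL";

struct {
	__uint(type, BPF_MAP_TYPE_HASH);
	__uint(max_entries, 1024);
	__type(key, __u64);	/* cgroup id of the container's cgroup */
	__type(value, __u32);	/* per-container policy flags */
} rules_by_cgid SEC(".maps");

SEC("lsm/bprm_check_security")
int BPF_PROG(check_exec_by_id, struct linux_binprm *bprm)
{
	/* Level 2 assumes containers live at /sys/fs/cgroup/kubepods/<ctr>.
	 * Nested cgroups created inside the container still resolve to the
	 * same ancestor id, which is what makes this hierarchy-safe. */
	__u64 cgid = bpf_get_current_ancestor_cgroup_id(2);
	__u32 *policy = bpf_map_lookup_elem(&rules_by_cgid, &cgid);

	if (policy && (*policy & 0x1))	/* hypothetical "deny exec" bit */
		return -EPERM;
	return 0;
}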

2021-05-13 09:50:01

by xufeng zhang

Subject: Re: [RFC] [PATCH bpf-next 1/1] bpf: Add a BPF helper for getting the cgroup path of current task

On 2021/5/13 at 6:55 AM, Alexei Starovoitov wrote:

> On Wed, May 12, 2021 at 05:58:23PM +0800, Xufeng Zhang wrote:
>> To implement security rules for application containers with bpf LSM,
>> the container to which the currently running task belongs needs to be
>> known in bpf context. Consider this scenario: kubernetes schedules a
>> pod onto a host; before the application container can run, the
>> security rules for this application need to be loaded into bpf maps
>> first, so that LSM bpf programs can make decisions based on these
>> rule maps.
>>
>> However, there is no effective bpf helper to achieve this goal,
>> especially for cgroup v1. In the above case, the only information
>> available from the user side is the container ID, and the cgroup path
>> for this container is determined by that ID, so to bridge user space
>> and bpf programs, bpf programs also need to know the cgroup path of
>> the currently running task.
> ...
>> +#ifdef CONFIG_CGROUPS
>> +BPF_CALL_2(bpf_get_current_cpuset_cgroup_path, char *, buf, u32, buf_len)
>> +{
>> + struct cgroup_subsys_state *css;
>> + int retval;
>> +
>> + css = task_get_css(current, cpuset_cgrp_id);
>> + retval = cgroup_path_ns(css->cgroup, buf, buf_len, &init_cgroup_ns);
>> + css_put(css);
>> + if (retval >= buf_len)
>> + retval = -ENAMETOOLONG;
> Manipulating a string path to check the hierarchy will be difficult to
> do inside a bpf prog. It seems to me this helper will be useful only
> for the simplest cgroup setups where there is no additional cgroup
> nesting within containers.
> Have you looked at the *ancestor_cgroup_id and *cgroup_id helpers?
> They're a bit more flexible when dealing with hierarchy and
> can be used to achieve the same correlation between kernel and user
> cgroup ids.


Thanks for your timely reply, Alexei!

Yes, this helper is not very general; it does not work for nested
cgroups within containers.

About your suggestion: the *cgroup_id helpers only work for cgroup v2,
but we're still using cgroup v1 in production. And even for cgroup v2,
I'm not sure there is any way for user space to get this cgroup id in
time (after the container is created, but before it starts to run).

So, is there an effective way that works for cgroup v1?


Many thanks!

Xufeng



2021-05-14 11:22:57

by KP Singh

Subject: Re: [RFC] [PATCH bpf-next 1/1] bpf: Add a BPF helper for getting the cgroup path of current task

On Fri, May 14, 2021 at 6:06 AM xufeng zhang
<[email protected]> wrote:
>
>
> > On 2021/5/13 at 6:55 AM, Alexei Starovoitov wrote:
> > On Wed, May 12, 2021 at 05:58:23PM +0800, Xufeng Zhang wrote:
> >> To implement security rules for application containers with bpf LSM,
> >> the container to which the currently running task belongs needs to be
> >> known in bpf context. Consider this scenario: kubernetes schedules a
> >> pod onto a host; before the application container can run, the
> >> security rules for this application need to be loaded into bpf maps
> >> first, so that LSM bpf programs can make decisions based on these
> >> rule maps.
> >>
> >> However, there is no effective bpf helper to achieve this goal,
> >> especially for cgroup v1. In the above case, the only information
> >> available from the user side is the container ID, and the cgroup path
> >> for this container is determined by that ID, so to bridge user space
> >> and bpf programs, bpf programs also need to know the cgroup path of
> >> the currently running task.
> > ...
> >> +#ifdef CONFIG_CGROUPS
> >> +BPF_CALL_2(bpf_get_current_cpuset_cgroup_path, char *, buf, u32, buf_len)
> >> +{
> >> + struct cgroup_subsys_state *css;
> >> + int retval;
> >> +
> >> + css = task_get_css(current, cpuset_cgrp_id);
> >> + retval = cgroup_path_ns(css->cgroup, buf, buf_len, &init_cgroup_ns);
> >> + css_put(css);
> >> + if (retval >= buf_len)
> >> + retval = -ENAMETOOLONG;
> > Manipulating a string path to check the hierarchy will be difficult to
> > do inside a bpf prog. It seems to me this helper will be useful only
> > for the simplest cgroup setups where there is no additional cgroup
> > nesting within containers.
> > Have you looked at the *ancestor_cgroup_id and *cgroup_id helpers?
> > They're a bit more flexible when dealing with hierarchy and
> > can be used to achieve the same correlation between kernel and user
> > cgroup ids.
>
>
> KP,
>
> do you have any suggestions?

I haven't really tried this yet, but have you considered using task local
storage to identify the container?

- Add task local storage with the container ID somewhere in the container
manager
- Propagate this ID to all the tasks within a container using task security
blob management hooks (like task_alloc and task_free) etc. (see the sketch
below)
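
[Editor's note: a sketch of the propagation step KP describes, using
BPF task local storage and the task_alloc LSM hook. It assumes the
container manager has already seeded its own task's entry with the
container id; task storage is freed automatically on task exit, so no
task_free program is strictly needed.]

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

char LICENSE[] SEC("license") = "GPL";

struct {
	__uint(type, BPF_MAP_TYPE_TASK_STORAGE);
	__uint(map_flags, BPF_F_NO_PREALLOC);
	__type(key, int);
	__type(value, __u64);	/* container id, seeded by the manager */
} container_ids SEC(".maps");

SEC("lsm/task_alloc")
int BPF_PROG(on_task_alloc, struct task_struct *task,
	     unsigned long clone_flags)
{
	__u64 *parent_id, *child_id;

	/* current is the forking parent; inherit its container id, if any */
	parent_id = bpf_task_storage_get(&container_ids,
					 bpf_get_current_task_btf(), 0, 0);
	if (!parent_id)
		return 0;

	child_id = bpf_task_storage_get(&container_ids, task, 0,
					BPF_LOCAL_STORAGE_GET_F_CREATE);
	if (child_id)
		*child_id = *parent_id;
	return 0;
}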

>
> What I am thinking is that the internal kernel objects (cgroup id or
> ns.inum) are not very user friendly; we can derive the container
> context from them in tracing scenarios, but not for LSM blocking
> cases. I'm not sure how Google internally resolves similar issues.
>
>
> Thanks!
>
> Xufeng
>

2021-05-14 13:43:48

by xufeng zhang

Subject: Re: [RFC] [PATCH bpf-next 1/1] bpf: Add a BPF helper for getting the cgroup path of current task


On 2021/5/13 at 6:55 AM, Alexei Starovoitov wrote:
> On Wed, May 12, 2021 at 05:58:23PM +0800, Xufeng Zhang wrote:
>> To implement security rules for application containers with bpf LSM,
>> the container to which the currently running task belongs needs to be
>> known in bpf context. Consider this scenario: kubernetes schedules a
>> pod onto a host; before the application container can run, the
>> security rules for this application need to be loaded into bpf maps
>> first, so that LSM bpf programs can make decisions based on these
>> rule maps.
>>
>> However, there is no effective bpf helper to achieve this goal,
>> especially for cgroup v1. In the above case, the only information
>> available from the user side is the container ID, and the cgroup path
>> for this container is determined by that ID, so to bridge user space
>> and bpf programs, bpf programs also need to know the cgroup path of
>> the currently running task.
> ...
>> +#ifdef CONFIG_CGROUPS
>> +BPF_CALL_2(bpf_get_current_cpuset_cgroup_path, char *, buf, u32, buf_len)
>> +{
>> + struct cgroup_subsys_state *css;
>> + int retval;
>> +
>> + css = task_get_css(current, cpuset_cgrp_id);
>> + retval = cgroup_path_ns(css->cgroup, buf, buf_len, &init_cgroup_ns);
>> + css_put(css);
>> + if (retval >= buf_len)
>> + retval = -ENAMETOOLONG;
> Manipulating a string path to check the hierarchy will be difficult to
> do inside a bpf prog. It seems to me this helper will be useful only
> for the simplest cgroup setups where there is no additional cgroup
> nesting within containers.
> Have you looked at the *ancestor_cgroup_id and *cgroup_id helpers?
> They're a bit more flexible when dealing with hierarchy and
> can be used to achieve the same correlation between kernel and user
> cgroup ids.


KP,

do you have any suggestions?

What I am thinking is that the internal kernel objects (cgroup id or
ns.inum) are not very user friendly; we can derive the container
context from them in tracing scenarios, but not for LSM blocking
cases. I'm not sure how Google internally resolves similar issues.


Thanks!

Xufeng


2021-05-14 13:43:50

by Alexei Starovoitov

Subject: Re: [RFC] [PATCH bpf-next 1/1] bpf: Add a BPF helper for getting the cgroup path of current task

On Thu, May 13, 2021 at 1:57 AM xufeng zhang
<[email protected]> wrote:
>
> On 2021/5/13 at 6:55 AM, Alexei Starovoitov wrote:
>
> > On Wed, May 12, 2021 at 05:58:23PM +0800, Xufeng Zhang wrote:
> >> To implement security rules for application containers with bpf LSM,
> >> the container to which the currently running task belongs needs to be
> >> known in bpf context. Consider this scenario: kubernetes schedules a
> >> pod onto a host; before the application container can run, the
> >> security rules for this application need to be loaded into bpf maps
> >> first, so that LSM bpf programs can make decisions based on these
> >> rule maps.
> >>
> >> However, there is no effective bpf helper to achieve this goal,
> >> especially for cgroup v1. In the above case, the only information
> >> available from the user side is the container ID, and the cgroup path
> >> for this container is determined by that ID, so to bridge user space
> >> and bpf programs, bpf programs also need to know the cgroup path of
> >> the currently running task.
> > ...
> >> +#ifdef CONFIG_CGROUPS
> >> +BPF_CALL_2(bpf_get_current_cpuset_cgroup_path, char *, buf, u32, buf_len)
> >> +{
> >> + struct cgroup_subsys_state *css;
> >> + int retval;
> >> +
> >> + css = task_get_css(current, cpuset_cgrp_id);
> >> + retval = cgroup_path_ns(css->cgroup, buf, buf_len, &init_cgroup_ns);
> >> + css_put(css);
> >> + if (retval >= buf_len)
> >> + retval = -ENAMETOOLONG;
> > Manipulating a string path to check the hierarchy will be difficult to
> > do inside a bpf prog. It seems to me this helper will be useful only
> > for the simplest cgroup setups where there is no additional cgroup
> > nesting within containers.
> > Have you looked at the *ancestor_cgroup_id and *cgroup_id helpers?
> > They're a bit more flexible when dealing with hierarchy and
> > can be used to achieve the same correlation between kernel and user
> > cgroup ids.
>
>
> Thanks for your timely reply, Alexei!
>
> Yes, this helper is not very general; it does not work for nested
> cgroups within containers.
>
> About your suggestion: the *cgroup_id helpers only work for cgroup v2,
> but we're still using cgroup v1 in production. And even for cgroup v2,
> I'm not sure there is any way for user space to get this cgroup id in
> time (after the container is created, but before it starts to run).
>
> So, is there an effective way that works for cgroup v1?

https://github.com/systemd/systemd/blob/main/NEWS#L379
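
[Editor's note: on the question of obtaining the cgroup id from user
space on cgroup v2: for a known cgroup path it can be read via
name_to_handle_at(), the same way the kernel BPF selftests do, and this
works as soon as the cgroup directory exists, i.e. before the container
starts. A sketch, with a hypothetical path:]

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* On cgroup v2 the 8-byte file handle of a cgroup directory is the same
 * id the bpf *cgroup_id helpers report. */
static uint64_t get_cgroup_id(const char *path)
{
	struct file_handle *fh;
	uint64_t cgid = 0;
	int mount_id;

	fh = malloc(sizeof(*fh) + sizeof(uint64_t));
	if (!fh)
		return 0;
	fh->handle_bytes = sizeof(uint64_t);
	if (name_to_handle_at(AT_FDCWD, path, fh, &mount_id, 0) == 0)
		memcpy(&cgid, fh->f_handle, sizeof(cgid));
	free(fh);
	return cgid;
}

int main(void)
{
	/* hypothetical container cgroup under a v2 mount */
	uint64_t id = get_cgroup_id("/sys/fs/cgroup/kubepods/pod1/ctr1");

	printf("cgroup id: %llu\n", (unsigned long long)id);
	return 0;
}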

2021-05-14 17:25:46

by KP Singh

Subject: Re: [RFC] [PATCH bpf-next 1/1] bpf: Add a BPF helper for getting the cgroup path of current task

> > About your suggestion: the *cgroup_id helpers only work for cgroup v2,
> > but we're still using cgroup v1 in production. And even for cgroup v2,
> > I'm not sure there is any way for user space to get this cgroup id in
> > time (after the container is created, but before it starts to run).
> >
> > So, is there an effective way that works for cgroup v1?
>
> https://github.com/systemd/systemd/blob/main/NEWS#L379

I agree that we should not focus on cgroup v1 if we do add a helper.