Hey Tejun,
This is v5 of the promised series to enable spawning processes into a
target cgroup different from the parent's cgroup.
/* v1 */
Link: https://lore.kernel.org/r/[email protected]
/* v2 */
Link: https://lore.kernel.org/r/[email protected]
Rework locking and remove unneeded helper functions. Please see
individual patch changelogs for details.
With this I've been able to run the cgroup selftests and stress tests in
loops for a long time without any regressions or deadlocks; lockdep and
kasan did not complain either.
/* v3 */
Link: https://lore.kernel.org/r/[email protected]
Split preliminary work into separate patches.
See changelog of individual commits.
/* v4 */
Link: https://lore.kernel.org/r/[email protected]
Verify that we have write access to the target cgroup. This is usually
done by the vfs, but since we aren't going through the vfs with
CLONE_INTO_CGROUP, we need to do it ourselves.
/* v5 */
Don't pass down the parent task_struct as argument, just use current
directly. Put kargs->cset on error.
With this cgroup migration will be a lot easier, and accounting will be
more exact. It also allows for nice features such as creating a frozen
process by spawning it into a frozen cgroup.
The code simplifies container creation and exec logic quite a bit as
well.
I've tried to contain all core changes for this feature in
kernel/cgroup/* to avoid exposing cgroup internals. This has mostly
worked.
When a new process is supposed to be spawned in a cgroup different from
the parent's, we briefly acquire the cgroup mutex right before fork()'s
point of no return and drop it once the child process has been attached
to the tasklist and to its css_set. This is done to ensure that the
cgroup isn't removed behind our back. The cgroup mutex is _only_ held
in this case; the usual case, where the child is created in the same
cgroup as the parent, does not acquire it since the cgroup can't be
removed.
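A condensed sketch of that locking order (pseudocode only, not the
literal implementation; names follow the helpers touched by this
series):

```
copy_process()
    cgroup_can_fork(child)
        cgroup_css_set_fork(kargs)
            if (kargs->flags & CLONE_INTO_CGROUP)
                mutex_lock(&cgroup_mutex)       /* pin the target cgroup */
            cgroup_threadgroup_change_begin(current)
            /* resolve target cgroup, check write + migration permissions */
    /* ... point of no return: attach child to tasklist and css_set ... */
    cgroup_post_fork(child)
        cgroup_threadgroup_change_end(current)
        if (kargs->flags & CLONE_INTO_CGROUP)
            mutex_unlock(&cgroup_mutex)         /* only taken in this case */
```

The error path (cgroup_cancel_fork()) drops the same locks, so a fork()
failure after cgroup_can_fork() cannot leak them.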
The series already comes with proper testing. Once we've decided that
this approach is good I'll expand the test-suite even more.
The branch can be found in the following locations:
[1]: kernel.org: https://git.kernel.org/pub/scm/linux/kernel/git/brauner/linux.git/log/?h=clone_into_cgroup
[2]: github.com: https://github.com/brauner/linux/tree/clone_into_cgroup
[3]: gitlab.com: https://gitlab.com/brauner/linux/commits/clone_into_cgroup
Thanks!
Christian
Christian Brauner (6):
cgroup: unify attach permission checking
cgroup: add cgroup_get_from_file() helper
cgroup: refactor fork helpers
cgroup: add cgroup_may_write() helper
clone3: allow spawning processes into cgroups
selftests/cgroup: add tests for cloning into cgroups
include/linux/cgroup-defs.h | 6 +-
include/linux/cgroup.h | 20 +-
include/linux/sched/task.h | 4 +
include/uapi/linux/sched.h | 5 +
kernel/cgroup/cgroup.c | 297 ++++++++++++++----
kernel/cgroup/pids.c | 15 +-
kernel/fork.c | 19 +-
tools/testing/selftests/cgroup/Makefile | 6 +-
tools/testing/selftests/cgroup/cgroup_util.c | 126 ++++++++
tools/testing/selftests/cgroup/cgroup_util.h | 4 +
tools/testing/selftests/cgroup/test_core.c | 64 ++++
.../selftests/clone3/clone3_selftests.h | 19 +-
12 files changed, 501 insertions(+), 84 deletions(-)
base-commit: b3a987b0264d3ddbb24293ebff10eddfc472f653
--
2.25.0
The core codepaths to check whether a process can be attached to a
cgroup are the same for threads and thread-group leaders. Only a small
piece of code verifying that source and destination cgroup are in the
same domain differentiates the thread permission checking from
thread-group leader permission checking.
Since cgroup_migrate_vet_dst() only matters on cgroup2 - it is a no-op
on cgroup1 - we can move it out of cgroup_attach_task().
All checks can now be consolidated into a new helper
cgroup_attach_permissions() callable from both cgroup_procs_write() and
cgroup_threads_write().
Cc: Tejun Heo <[email protected]>
Cc: Li Zefan <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: [email protected]
Signed-off-by: Christian Brauner <[email protected]>
---
/* v1 */
Link: https://lore.kernel.org/r/[email protected]
/* v2 */
Link: https://lore.kernel.org/r/[email protected]
- Christian Brauner <[email protected]>:
- Fix return value of cgroup_attach_permissions. It used to return 0
when it should've returned -EOPNOTSUPP.
- Fix call to cgroup_attach_permissions() in cgroup_procs_write(). It
accidentally specified that a thread was moved, causing an additional
check for domain-group equality to be executed that is not needed.
/* v3 */
Link: https://lore.kernel.org/r/[email protected]
unchanged
/* v4 */
Link: https://lore.kernel.org/r/[email protected]
unchanged
/* v5 */
unchanged
---
kernel/cgroup/cgroup.c | 39 +++++++++++++++++++++++++--------------
1 file changed, 25 insertions(+), 14 deletions(-)
diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
index 735af8f15f95..7b98cc389dae 100644
--- a/kernel/cgroup/cgroup.c
+++ b/kernel/cgroup/cgroup.c
@@ -2719,11 +2719,7 @@ int cgroup_attach_task(struct cgroup *dst_cgrp, struct task_struct *leader,
{
DEFINE_CGROUP_MGCTX(mgctx);
struct task_struct *task;
- int ret;
-
- ret = cgroup_migrate_vet_dst(dst_cgrp);
- if (ret)
- return ret;
+ int ret = 0;
/* look up all src csets */
spin_lock_irq(&css_set_lock);
@@ -4690,6 +4686,26 @@ static int cgroup_procs_write_permission(struct cgroup *src_cgrp,
return 0;
}
+static int cgroup_attach_permissions(struct cgroup *src_cgrp,
+ struct cgroup *dst_cgrp,
+ struct super_block *sb, bool thread)
+{
+ int ret = 0;
+
+ ret = cgroup_procs_write_permission(src_cgrp, dst_cgrp, sb);
+ if (ret)
+ return ret;
+
+ ret = cgroup_migrate_vet_dst(dst_cgrp);
+ if (ret)
+ return ret;
+
+ if (thread && (src_cgrp->dom_cgrp != dst_cgrp->dom_cgrp))
+ ret = -EOPNOTSUPP;
+
+ return ret;
+}
+
static ssize_t cgroup_procs_write(struct kernfs_open_file *of,
char *buf, size_t nbytes, loff_t off)
{
@@ -4712,8 +4728,8 @@ static ssize_t cgroup_procs_write(struct kernfs_open_file *of,
src_cgrp = task_cgroup_from_root(task, &cgrp_dfl_root);
spin_unlock_irq(&css_set_lock);
- ret = cgroup_procs_write_permission(src_cgrp, dst_cgrp,
- of->file->f_path.dentry->d_sb);
+ ret = cgroup_attach_permissions(src_cgrp, dst_cgrp,
+ of->file->f_path.dentry->d_sb, false);
if (ret)
goto out_finish;
@@ -4757,16 +4773,11 @@ static ssize_t cgroup_threads_write(struct kernfs_open_file *of,
spin_unlock_irq(&css_set_lock);
/* thread migrations follow the cgroup.procs delegation rule */
- ret = cgroup_procs_write_permission(src_cgrp, dst_cgrp,
- of->file->f_path.dentry->d_sb);
+ ret = cgroup_attach_permissions(src_cgrp, dst_cgrp,
+ of->file->f_path.dentry->d_sb, true);
if (ret)
goto out_finish;
- /* and must be contained in the same domain */
- ret = -EOPNOTSUPP;
- if (src_cgrp->dom_cgrp != dst_cgrp->dom_cgrp)
- goto out_finish;
-
ret = cgroup_attach_task(dst_cgrp, task, false);
out_finish:
--
2.25.0
Add a helper cgroup_get_from_file(). The helper will be used in
subsequent patches to retrieve a cgroup while holding a reference to the
struct file it was taken from.
Cc: Tejun Heo <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Li Zefan <[email protected]>
Cc: [email protected]
Signed-off-by: Christian Brauner <[email protected]>
---
/* v1 */
patch not present
/* v2 */
patch not present
/* v3 */
Link: https://lore.kernel.org/r/[email protected]
patch introduced
- Tejun Heo <[email protected]>:
- split cgroup_get_from_file() changes into separate commit
/* v4 */
Link: https://lore.kernel.org/r/[email protected]
unchanged
/* v5 */
unchanged
---
kernel/cgroup/cgroup.c | 30 +++++++++++++++++++-----------
1 file changed, 19 insertions(+), 11 deletions(-)
diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
index 7b98cc389dae..9b3241d67592 100644
--- a/kernel/cgroup/cgroup.c
+++ b/kernel/cgroup/cgroup.c
@@ -5875,6 +5875,24 @@ void cgroup_fork(struct task_struct *child)
INIT_LIST_HEAD(&child->cg_list);
}
+static struct cgroup *cgroup_get_from_file(struct file *f)
+{
+ struct cgroup_subsys_state *css;
+ struct cgroup *cgrp;
+
+ css = css_tryget_online_from_dir(f->f_path.dentry, NULL);
+ if (IS_ERR(css))
+ return ERR_CAST(css);
+
+ cgrp = css->cgroup;
+ if (!cgroup_on_dfl(cgrp)) {
+ cgroup_put(cgrp);
+ return ERR_PTR(-EBADF);
+ }
+
+ return cgrp;
+}
+
/**
* cgroup_can_fork - called on a new task before the process is exposed
* @child: the task in question.
@@ -6163,7 +6181,6 @@ EXPORT_SYMBOL_GPL(cgroup_get_from_path);
*/
struct cgroup *cgroup_get_from_fd(int fd)
{
- struct cgroup_subsys_state *css;
struct cgroup *cgrp;
struct file *f;
@@ -6171,17 +6188,8 @@ struct cgroup *cgroup_get_from_fd(int fd)
if (!f)
return ERR_PTR(-EBADF);
- css = css_tryget_online_from_dir(f->f_path.dentry, NULL);
+ cgrp = cgroup_get_from_file(f);
fput(f);
- if (IS_ERR(css))
- return ERR_CAST(css);
-
- cgrp = css->cgroup;
- if (!cgroup_on_dfl(cgrp)) {
- cgroup_put(cgrp);
- return ERR_PTR(-EBADF);
- }
-
return cgrp;
}
EXPORT_SYMBOL_GPL(cgroup_get_from_fd);
--
2.25.0
This refactors the fork helpers so they can be easily modified in the
next patches. The patch just moves the cgroup_threadgroup_rwsem grab
and release into the helpers, so the locking doesn't need to be exposed
directly in fork.c.
Cc: Tejun Heo <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Li Zefan <[email protected]>
Cc: [email protected]
Signed-off-by: Christian Brauner <[email protected]>
---
/* v1 */
patch not present
/* v2 */
patch not present
/* v3 */
Link: https://lore.kernel.org/r/[email protected]
patch introduced
- Tejun Heo <[email protected]>:
- split into separate commit
/* v4 */
Link: https://lore.kernel.org/r/[email protected]
unchanged
/* v5 */
- Oleg Nesterov <[email protected]>:
- remove struct task_struct *parent argument from clone helpers in favor of
using current directly
- Christian Brauner <[email protected]>:
- fix typo in commit message
---
kernel/cgroup/cgroup.c | 47 ++++++++++++++++++++++++++----------------
kernel/fork.c | 6 +-----
2 files changed, 30 insertions(+), 23 deletions(-)
diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
index 9b3241d67592..ce2d5b8aa19f 100644
--- a/kernel/cgroup/cgroup.c
+++ b/kernel/cgroup/cgroup.c
@@ -5895,17 +5895,21 @@ static struct cgroup *cgroup_get_from_file(struct file *f)
/**
* cgroup_can_fork - called on a new task before the process is exposed
- * @child: the task in question.
+ * @child: the child process
+ * @kargs: the arguments passed to create the child process
*
- * This calls the subsystem can_fork() callbacks. If the can_fork() callback
- * returns an error, the fork aborts with that error code. This allows for
- * a cgroup subsystem to conditionally allow or deny new forks.
+ * This calls the subsystem can_fork() callbacks. If the cgroup_can_fork()
+ * callback returns an error, the fork aborts with that error code. This
+ * allows for a cgroup subsystem to conditionally allow or deny new forks.
*/
int cgroup_can_fork(struct task_struct *child)
+ __acquires(&cgroup_threadgroup_rwsem) __releases(&cgroup_threadgroup_rwsem)
{
struct cgroup_subsys *ss;
int i, j, ret;
+ cgroup_threadgroup_change_begin(current);
+
do_each_subsys_mask(ss, i, have_canfork_callback) {
ret = ss->can_fork(child);
if (ret)
@@ -5922,17 +5926,21 @@ int cgroup_can_fork(struct task_struct *child)
ss->cancel_fork(child);
}
+ cgroup_threadgroup_change_end(current);
+
return ret;
}
/**
- * cgroup_cancel_fork - called if a fork failed after cgroup_can_fork()
- * @child: the task in question
- *
- * This calls the cancel_fork() callbacks if a fork failed *after*
- * cgroup_can_fork() succeded.
- */
+ * cgroup_cancel_fork - called if a fork failed after cgroup_can_fork()
+ * @child: the child process
+ * @kargs: the arguments passed to create the child process
+ *
+ * This calls the cancel_fork() callbacks if a fork failed *after*
+ * cgroup_can_fork() succeded.
+ */
void cgroup_cancel_fork(struct task_struct *child)
+ __releases(&cgroup_threadgroup_rwsem)
{
struct cgroup_subsys *ss;
int i;
@@ -5940,19 +5948,20 @@ void cgroup_cancel_fork(struct task_struct *child)
for_each_subsys(ss, i)
if (ss->cancel_fork)
ss->cancel_fork(child);
+
+ cgroup_threadgroup_change_end(current);
}
/**
- * cgroup_post_fork - called on a new task after adding it to the task list
- * @child: the task in question
- *
- * Adds the task to the list running through its css_set if necessary and
- * call the subsystem fork() callbacks. Has to be after the task is
- * visible on the task list in case we race with the first call to
- * cgroup_task_iter_start() - to guarantee that the new task ends up on its
- * list.
+ * cgroup_post_fork - finalize cgroup setup for the child process
+ * @child: the child process
+ * @kargs: the arguments passed to create the child process
+ *
+ * Attach the child process to its css_set calling the subsystem fork()
+ * callbacks.
*/
void cgroup_post_fork(struct task_struct *child)
+ __releases(&cgroup_threadgroup_rwsem)
{
struct cgroup_subsys *ss;
struct css_set *cset;
@@ -5995,6 +6004,8 @@ void cgroup_post_fork(struct task_struct *child)
do_each_subsys_mask(ss, i, have_fork_callback) {
ss->fork(child);
} while_each_subsys_mask();
+
+ cgroup_threadgroup_change_end(current);
}
/**
diff --git a/kernel/fork.c b/kernel/fork.c
index 080809560072..ca5de25690c8 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -2165,7 +2165,6 @@ static __latent_entropy struct task_struct *copy_process(
INIT_LIST_HEAD(&p->thread_group);
p->task_works = NULL;
- cgroup_threadgroup_change_begin(current);
/*
* Ensure that the cgroup subsystem policies allow the new process to be
* forked. It should be noted the the new process's css_set can be changed
@@ -2174,7 +2173,7 @@ static __latent_entropy struct task_struct *copy_process(
*/
retval = cgroup_can_fork(p);
if (retval)
- goto bad_fork_cgroup_threadgroup_change_end;
+ goto bad_fork_put_pidfd;
/*
* From this point on we must avoid any synchronous user-space
@@ -2280,7 +2279,6 @@ static __latent_entropy struct task_struct *copy_process(
proc_fork_connector(p);
cgroup_post_fork(p);
- cgroup_threadgroup_change_end(current);
perf_event_fork(p);
trace_task_newtask(p, clone_flags);
@@ -2292,8 +2290,6 @@ static __latent_entropy struct task_struct *copy_process(
spin_unlock(&current->sighand->siglock);
write_unlock_irq(&tasklist_lock);
cgroup_cancel_fork(p);
-bad_fork_cgroup_threadgroup_change_end:
- cgroup_threadgroup_change_end(current);
bad_fork_put_pidfd:
if (clone_flags & CLONE_PIDFD) {
fput(pidfile);
--
2.25.0
Expand the cgroup test-suite to include tests for CLONE_INTO_CGROUP.
This adds the following tests:
- CLONE_INTO_CGROUP manages to clone a process directly into a correctly
delegated cgroup
- CLONE_INTO_CGROUP fails to clone a process into a cgroup that has been
removed after we've opened an fd to it
- CLONE_INTO_CGROUP fails to clone a process into an invalid domain
cgroup
- CLONE_INTO_CGROUP adheres to the no internal process constraint
- CLONE_INTO_CGROUP works with the freezer feature
Cc: Tejun Heo <[email protected]>
Cc: Shuah Khan <[email protected]>
Cc: [email protected]
Cc: [email protected]
Acked-by: Roman Gushchin <[email protected]>
Signed-off-by: Christian Brauner <[email protected]>
---
/* v1 */
Link: https://lore.kernel.org/r/[email protected]
/* v2 */
Link: https://lore.kernel.org/r/[email protected]
unchanged
/* v3 */
Link: https://lore.kernel.org/r/[email protected]
unchanged
/* v4 */
Link: https://lore.kernel.org/r/[email protected]
unchanged
/* v5 */
unchanged
- Christian Brauner <[email protected]>:
- add Acked-by: Roman Gushchin <[email protected]>
Signed-off-by: Christian Brauner <[email protected]>
---
tools/testing/selftests/cgroup/Makefile | 6 +-
tools/testing/selftests/cgroup/cgroup_util.c | 126 ++++++++++++++++++
tools/testing/selftests/cgroup/cgroup_util.h | 4 +
tools/testing/selftests/cgroup/test_core.c | 64 +++++++++
.../selftests/clone3/clone3_selftests.h | 19 ++-
5 files changed, 214 insertions(+), 5 deletions(-)
diff --git a/tools/testing/selftests/cgroup/Makefile b/tools/testing/selftests/cgroup/Makefile
index 66aafe1f5746..967f268fde74 100644
--- a/tools/testing/selftests/cgroup/Makefile
+++ b/tools/testing/selftests/cgroup/Makefile
@@ -11,6 +11,6 @@ TEST_GEN_PROGS += test_freezer
include ../lib.mk
-$(OUTPUT)/test_memcontrol: cgroup_util.c
-$(OUTPUT)/test_core: cgroup_util.c
-$(OUTPUT)/test_freezer: cgroup_util.c
+$(OUTPUT)/test_memcontrol: cgroup_util.c ../clone3/clone3_selftests.h
+$(OUTPUT)/test_core: cgroup_util.c ../clone3/clone3_selftests.h
+$(OUTPUT)/test_freezer: cgroup_util.c ../clone3/clone3_selftests.h
diff --git a/tools/testing/selftests/cgroup/cgroup_util.c b/tools/testing/selftests/cgroup/cgroup_util.c
index 8f7131dcf1ff..8a637ca7d73a 100644
--- a/tools/testing/selftests/cgroup/cgroup_util.c
+++ b/tools/testing/selftests/cgroup/cgroup_util.c
@@ -15,6 +15,7 @@
#include <unistd.h>
#include "cgroup_util.h"
+#include "../clone3/clone3_selftests.h"
static ssize_t read_text(const char *path, char *buf, size_t max_len)
{
@@ -331,12 +332,112 @@ int cg_run(const char *cgroup,
}
}
+pid_t clone_into_cgroup(int cgroup_fd)
+{
+#ifdef CLONE_ARGS_SIZE_VER2
+ pid_t pid;
+
+ struct clone_args args = {
+ .flags = CLONE_INTO_CGROUP,
+ .exit_signal = SIGCHLD,
+ .cgroup = cgroup_fd,
+ };
+
+ pid = sys_clone3(&args, sizeof(struct clone_args));
+ /*
+ * Verify that this is a genuine test failure:
+ * ENOSYS -> clone3() not available
+ * E2BIG -> CLONE_INTO_CGROUP not available
+ */
+ if (pid < 0 && (errno == ENOSYS || errno == E2BIG))
+ goto pretend_enosys;
+
+ return pid;
+
+pretend_enosys:
+#endif
+ errno = ENOSYS;
+ return -ENOSYS;
+}
+
+int clone_reap(pid_t pid, int options)
+{
+ int ret;
+ siginfo_t info = {
+ .si_signo = 0,
+ };
+
+again:
+ ret = waitid(P_PID, pid, &info, options | __WALL | __WNOTHREAD);
+ if (ret < 0) {
+ if (errno == EINTR)
+ goto again;
+ return -1;
+ }
+
+ if (options & WEXITED) {
+ if (WIFEXITED(info.si_status))
+ return WEXITSTATUS(info.si_status);
+ }
+
+ if (options & WSTOPPED) {
+ if (WIFSTOPPED(info.si_status))
+ return WSTOPSIG(info.si_status);
+ }
+
+ if (options & WCONTINUED) {
+ if (WIFCONTINUED(info.si_status))
+ return 0;
+ }
+
+ return -1;
+}
+
+int dirfd_open_opath(const char *dir)
+{
+ return open(dir, O_DIRECTORY | O_CLOEXEC | O_NOFOLLOW | O_PATH);
+}
+
+#define close_prot_errno(fd) \
+ if (fd >= 0) { \
+ int _e_ = errno; \
+ close(fd); \
+ errno = _e_; \
+ }
+
+static int clone_into_cgroup_run_nowait(const char *cgroup,
+ int (*fn)(const char *cgroup, void *arg),
+ void *arg)
+{
+ int cgroup_fd;
+ pid_t pid;
+
+ cgroup_fd = dirfd_open_opath(cgroup);
+ if (cgroup_fd < 0)
+ return -1;
+
+ pid = clone_into_cgroup(cgroup_fd);
+ close_prot_errno(cgroup_fd);
+ if (pid == 0)
+ exit(fn(cgroup, arg));
+
+ return pid;
+}
+
int cg_run_nowait(const char *cgroup,
int (*fn)(const char *cgroup, void *arg),
void *arg)
{
int pid;
+ pid = clone_into_cgroup_run_nowait(cgroup, fn, arg);
+ if (pid > 0)
+ return pid;
+
+ /* Genuine test failure. */
+ if (pid < 0 && errno != ENOSYS)
+ return -1;
+
pid = fork();
if (pid == 0) {
char buf[64];
@@ -450,3 +551,28 @@ int proc_read_strstr(int pid, bool thread, const char *item, const char *needle)
return strstr(buf, needle) ? 0 : -1;
}
+
+int clone_into_cgroup_run_wait(const char *cgroup)
+{
+ int cgroup_fd;
+ pid_t pid;
+
+ cgroup_fd = dirfd_open_opath(cgroup);
+ if (cgroup_fd < 0)
+ return -1;
+
+ pid = clone_into_cgroup(cgroup_fd);
+ close_prot_errno(cgroup_fd);
+ if (pid < 0)
+ return -1;
+
+ if (pid == 0)
+ exit(EXIT_SUCCESS);
+
+ /*
+ * We don't care whether this fails. We only care whether the initial
+ * clone succeeded.
+ */
+ (void)clone_reap(pid, WEXITED);
+ return 0;
+}
diff --git a/tools/testing/selftests/cgroup/cgroup_util.h b/tools/testing/selftests/cgroup/cgroup_util.h
index 49c54fbdb229..5a1305dd1f0b 100644
--- a/tools/testing/selftests/cgroup/cgroup_util.h
+++ b/tools/testing/selftests/cgroup/cgroup_util.h
@@ -50,3 +50,7 @@ extern int cg_wait_for_proc_count(const char *cgroup, int count);
extern int cg_killall(const char *cgroup);
extern ssize_t proc_read_text(int pid, bool thread, const char *item, char *buf, size_t size);
extern int proc_read_strstr(int pid, bool thread, const char *item, const char *needle);
+extern pid_t clone_into_cgroup(int cgroup_fd);
+extern int clone_reap(pid_t pid, int options);
+extern int clone_into_cgroup_run_wait(const char *cgroup);
+extern int dirfd_open_opath(const char *dir);
diff --git a/tools/testing/selftests/cgroup/test_core.c b/tools/testing/selftests/cgroup/test_core.c
index c5ca669feb2b..96e016ccafe0 100644
--- a/tools/testing/selftests/cgroup/test_core.c
+++ b/tools/testing/selftests/cgroup/test_core.c
@@ -25,8 +25,11 @@
static int test_cgcore_populated(const char *root)
{
int ret = KSFT_FAIL;
+ int err;
char *cg_test_a = NULL, *cg_test_b = NULL;
char *cg_test_c = NULL, *cg_test_d = NULL;
+ int cgroup_fd = -EBADF;
+ pid_t pid;
cg_test_a = cg_name(root, "cg_test_a");
cg_test_b = cg_name(root, "cg_test_a/cg_test_b");
@@ -78,6 +81,52 @@ static int test_cgcore_populated(const char *root)
if (cg_read_strcmp(cg_test_d, "cgroup.events", "populated 0\n"))
goto cleanup;
+ /* Test that we can directly clone into a new cgroup. */
+ cgroup_fd = dirfd_open_opath(cg_test_d);
+ if (cgroup_fd < 0)
+ goto cleanup;
+
+ pid = clone_into_cgroup(cgroup_fd);
+ if (pid < 0) {
+ if (errno == ENOSYS)
+ goto cleanup_pass;
+ goto cleanup;
+ }
+
+ if (pid == 0) {
+ if (raise(SIGSTOP))
+ exit(EXIT_FAILURE);
+ exit(EXIT_SUCCESS);
+ }
+
+ err = cg_read_strcmp(cg_test_d, "cgroup.events", "populated 1\n");
+
+ (void)clone_reap(pid, WSTOPPED);
+ (void)kill(pid, SIGCONT);
+ (void)clone_reap(pid, WEXITED);
+
+ if (err)
+ goto cleanup;
+
+ if (cg_read_strcmp(cg_test_d, "cgroup.events", "populated 0\n"))
+ goto cleanup;
+
+ /* Remove cgroup. */
+ if (cg_test_d) {
+ cg_destroy(cg_test_d);
+ free(cg_test_d);
+ cg_test_d = NULL;
+ }
+
+ pid = clone_into_cgroup(cgroup_fd);
+ if (pid < 0)
+ goto cleanup_pass;
+ if (pid == 0)
+ exit(EXIT_SUCCESS);
+ (void)clone_reap(pid, WEXITED);
+ goto cleanup;
+
+cleanup_pass:
ret = KSFT_PASS;
cleanup:
@@ -93,6 +142,8 @@ static int test_cgcore_populated(const char *root)
free(cg_test_c);
free(cg_test_b);
free(cg_test_a);
+ if (cgroup_fd >= 0)
+ close(cgroup_fd);
return ret;
}
@@ -136,6 +187,16 @@ static int test_cgcore_invalid_domain(const char *root)
if (errno != EOPNOTSUPP)
goto cleanup;
+ if (!clone_into_cgroup_run_wait(child))
+ goto cleanup;
+
+ if (errno == ENOSYS)
+ goto cleanup_pass;
+
+ if (errno != EOPNOTSUPP)
+ goto cleanup;
+
+cleanup_pass:
ret = KSFT_PASS;
cleanup:
@@ -345,6 +406,9 @@ static int test_cgcore_internal_process_constraint(const char *root)
if (!cg_enter_current(parent))
goto cleanup;
+ if (!clone_into_cgroup_run_wait(parent))
+ goto cleanup;
+
ret = KSFT_PASS;
cleanup:
diff --git a/tools/testing/selftests/clone3/clone3_selftests.h b/tools/testing/selftests/clone3/clone3_selftests.h
index a3f2c8ad8bcc..91c1a78ddb39 100644
--- a/tools/testing/selftests/clone3/clone3_selftests.h
+++ b/tools/testing/selftests/clone3/clone3_selftests.h
@@ -5,12 +5,24 @@
#define _GNU_SOURCE
#include <sched.h>
+#include <linux/sched.h>
+#include <linux/types.h>
#include <stdint.h>
#include <syscall.h>
-#include <linux/types.h>
+#include <sys/wait.h>
+
+#include "../kselftest.h"
#define ptr_to_u64(ptr) ((__u64)((uintptr_t)(ptr)))
+#ifndef CLONE_INTO_CGROUP
+#define CLONE_INTO_CGROUP 0x200000000ULL /* Clone into a specific cgroup given the right permissions. */
+#endif
+
+#ifndef CLONE_ARGS_SIZE_VER0
+#define CLONE_ARGS_SIZE_VER0 64
+#endif
+
#ifndef __NR_clone3
#define __NR_clone3 -1
struct clone_args {
@@ -22,10 +34,13 @@ struct clone_args {
__aligned_u64 stack;
__aligned_u64 stack_size;
__aligned_u64 tls;
+#define CLONE_ARGS_SIZE_VER1 80
__aligned_u64 set_tid;
__aligned_u64 set_tid_size;
+#define CLONE_ARGS_SIZE_VER2 88
+ __aligned_u64 cgroup;
};
-#endif
+#endif /* __NR_clone3 */
static pid_t sys_clone3(struct clone_args *args, size_t size)
{
--
2.25.0
Add a cgroup_may_write() helper which we can use in the
CLONE_INTO_CGROUP patch series to verify that we can write to the
destination cgroup.
Cc: Tejun Heo <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Li Zefan <[email protected]>
Cc: [email protected]
Signed-off-by: Christian Brauner <[email protected]>
---
/* v1 */
patch not present
/* v2 */
patch not present
/* v3 */
patch not present
/* v4 */
Link: https://lore.kernel.org/r/[email protected]
patch introduced
/* v5 */
unchanged
---
kernel/cgroup/cgroup.c | 24 +++++++++++++++++-------
1 file changed, 17 insertions(+), 7 deletions(-)
diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
index ce2d5b8aa19f..636fe3d46d2d 100644
--- a/kernel/cgroup/cgroup.c
+++ b/kernel/cgroup/cgroup.c
@@ -4649,13 +4649,28 @@ static int cgroup_procs_show(struct seq_file *s, void *v)
return 0;
}
+static int cgroup_may_write(const struct cgroup *cgrp, struct super_block *sb)
+{
+ int ret;
+ struct inode *inode;
+
+ lockdep_assert_held(&cgroup_mutex);
+
+ inode = kernfs_get_inode(sb, cgrp->procs_file.kn);
+ if (!inode)
+ return -ENOMEM;
+
+ ret = inode_permission(inode, MAY_WRITE);
+ iput(inode);
+ return ret;
+}
+
static int cgroup_procs_write_permission(struct cgroup *src_cgrp,
struct cgroup *dst_cgrp,
struct super_block *sb)
{
struct cgroup_namespace *ns = current->nsproxy->cgroup_ns;
struct cgroup *com_cgrp = src_cgrp;
- struct inode *inode;
int ret;
lockdep_assert_held(&cgroup_mutex);
@@ -4665,12 +4680,7 @@ static int cgroup_procs_write_permission(struct cgroup *src_cgrp,
com_cgrp = cgroup_parent(com_cgrp);
/* %current should be authorized to migrate to the common ancestor */
- inode = kernfs_get_inode(sb, com_cgrp->procs_file.kn);
- if (!inode)
- return -ENOMEM;
-
- ret = inode_permission(inode, MAY_WRITE);
- iput(inode);
+ ret = cgroup_may_write(com_cgrp, sb);
if (ret)
return ret;
--
2.25.0
This adds support for creating a process in a different cgroup than its
parent. Callers can limit and account processes and threads right from
the moment they are spawned:
- A service manager can directly spawn new services into dedicated
cgroups.
- A process can be directly created in a frozen cgroup and will be
frozen as well.
- The initial accounting jitter experienced by process supervisors and
daemons is eliminated with this.
- Threaded applications or even thread implementations can choose to
create a specific cgroup layout where each thread is spawned
directly into a dedicated cgroup.
This feature is limited to the unified hierarchy. Callers need to pass
a directory file descriptor for the target cgroup. The caller can
choose to pass an O_PATH file descriptor. All usual migration
restrictions apply, i.e. there can be no processes in inner nodes. In
general, creating a process directly in a target cgroup adheres to all
migration restrictions.
Cc: Tejun Heo <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Oleg Nesterov <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Li Zefan <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: [email protected]
Signed-off-by: Christian Brauner <[email protected]>
---
/* v1 */
Link: https://lore.kernel.org/r/[email protected]
/* v2 */
Link: https://lore.kernel.org/r/[email protected]
- Oleg Nesterov <[email protected]>:
- prevent deadlock from wrong locking order
- Christian Brauner <[email protected]>:
- Rework locking. In the previous patch version we would have already
acquired cgroup_threadgroup_rwsem before grabbing the cgroup mutex,
which we need to hold when CLONE_INTO_CGROUP is specified. This meant
we could deadlock with other codepaths that require the two locks to
be taken in the opposite order. Fix this by first grabbing the cgroup
mutex when CLONE_INTO_CGROUP is specified and then grabbing
cgroup_threadgroup_rwsem unconditionally after. This way we don't
require the cgroup mutex to be held in codepaths that don't need it.
- Switch from mutex_lock() to mutex_lock_killable().
/* v3 */
Link: https://lore.kernel.org/r/[email protected]
- Tejun Heo <[email protected]>:
- s/mutex_lock_killable()/mutex_lock()/ because it should only ever
be held for a short time:
diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
index a9fedcfeae4b..d68d3fb6af1d 100644
--- a/kernel/cgroup/cgroup.c
+++ b/kernel/cgroup/cgroup.c
@@ -5927,11 +5927,8 @@ static int cgroup_css_set_fork(struct task_struct *parent,
struct super_block *sb;
struct file *f;
- if (kargs->flags & CLONE_INTO_CGROUP) {
- ret = mutex_lock_killable(&cgroup_mutex);
- if (ret)
- return ret;
- }
+ if (kargs->flags & CLONE_INTO_CGROUP)
+ mutex_lock(&cgroup_mutex);
cgroup_threadgroup_change_begin(parent);
- s/task_cgroup_from_root/cset->dfl_cgrp/:
diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
index d68d3fb6af1d..3ceef006d144 100644
--- a/kernel/cgroup/cgroup.c
+++ b/kernel/cgroup/cgroup.c
@@ -5922,7 +5922,7 @@ static int cgroup_css_set_fork(struct task_struct *parent,
__acquires(&cgroup_mutex) __acquires(&cgroup_threadgroup_rwsem)
{
int ret;
- struct cgroup *dst_cgrp = NULL, *src_cgrp;
+ struct cgroup *dst_cgrp = NULL;
struct css_set *cset;
struct super_block *sb;
struct file *f;
@@ -5956,11 +5956,7 @@ static int cgroup_css_set_fork(struct task_struct *parent,
goto err;
}
- spin_lock_irq(&css_set_lock);
- src_cgrp = task_cgroup_from_root(parent, &cgrp_dfl_cgrp);
- spin_unlock_irq(&css_set_lock);
-
- ret = cgroup_attach_permissions(src_cgrp, dst_cgrp, sb,
+ ret = cgroup_attach_permissions(cset->dfl_cgrp, dst_cgrp, sb,
!!(kargs->flags & CLONE_THREAD));
if (ret)
goto err;
- pass struct css_set instead of struct kernel_clone_args into cgroup
fork subsystem callbacks:
diff --git a/include/linux/cgroup-defs.h b/include/linux/cgroup-defs.h
index cd848c6bac4a..058bb16d073f 100644
--- a/include/linux/cgroup-defs.h
+++ b/include/linux/cgroup-defs.h
@@ -630,9 +630,8 @@ struct cgroup_subsys {
void (*attach)(struct cgroup_taskset *tset);
void (*post_attach)(void);
int (*can_fork)(struct task_struct *parent, struct task_struct *child,
- struct kernel_clone_args *kargs);
- void (*cancel_fork)(struct task_struct *child,
- struct kernel_clone_args *kargs);
+ struct css_set *cset);
+ void (*cancel_fork)(struct task_struct *child, struct css_set *cset);
void (*fork)(struct task_struct *task);
void (*exit)(struct task_struct *task);
void (*release)(struct task_struct *task);
diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
index 3ceef006d144..2ac1c37a3fcb 100644
--- a/kernel/cgroup/cgroup.c
+++ b/kernel/cgroup/cgroup.c
@@ -6044,7 +6044,7 @@ int cgroup_can_fork(struct task_struct *parent, struct task_struct *child,
return ret;
do_each_subsys_mask(ss, i, have_canfork_callback) {
- ret = ss->can_fork(parent, child, kargs);
+ ret = ss->can_fork(parent, child, kargs->cset);
if (ret)
goto out_revert;
} while_each_subsys_mask();
@@ -6056,7 +6056,7 @@ int cgroup_can_fork(struct task_struct *parent, struct task_struct *child,
if (j >= i)
break;
if (ss->cancel_fork)
- ss->cancel_fork(child, kargs);
+ ss->cancel_fork(child, kargs->cset);
}
cgroup_css_set_put_fork(parent, kargs);
@@ -6082,7 +6082,7 @@ void cgroup_cancel_fork(struct task_struct *parent, struct task_struct *child,
for_each_subsys(ss, i)
if (ss->cancel_fork)
- ss->cancel_fork(child, kargs);
+ ss->cancel_fork(child, kargs->cset);
cgroup_css_set_put_fork(parent, kargs);
}
diff --git a/kernel/cgroup/pids.c b/kernel/cgroup/pids.c
index e5955bc1fb00..4e7c8819c8df 100644
--- a/kernel/cgroup/pids.c
+++ b/kernel/cgroup/pids.c
@@ -216,20 +216,16 @@ static void pids_cancel_attach(struct cgroup_taskset *tset)
* on cgroup_threadgroup_change_begin() held by the copy_process().
*/
static int pids_can_fork(struct task_struct *parent, struct task_struct *child,
- struct kernel_clone_args *args)
+ struct css_set *cset)
{
- struct css_set *new_cset = NULL;
struct cgroup_subsys_state *css;
struct pids_cgroup *pids;
int err;
- if (args)
- new_cset = args->cset;
-
- if (!new_cset)
- css = task_css_check(current, pids_cgrp_id, true);
+ if (cset)
+ css = cset->subsys[pids_cgrp_id];
else
- css = new_cset->subsys[pids_cgrp_id];
+ css = task_css_check(current, pids_cgrp_id, true);
pids = css_pids(css);
err = pids_try_charge(pids, 1);
if (err) {
@@ -244,20 +240,15 @@ static int pids_can_fork(struct task_struct *parent, struct task_struct *child,
return err;
}
-static void pids_cancel_fork(struct task_struct *task,
- struct kernel_clone_args *args)
+static void pids_cancel_fork(struct task_struct *task, struct css_set *cset)
{
- struct css_set *new_cset = NULL;
struct cgroup_subsys_state *css;
struct pids_cgroup *pids;
- if (args)
- new_cset = args->cset;
-
- if (!new_cset)
- css = task_css_check(current, pids_cgrp_id, true);
+ if (cset)
+ css = cset->subsys[pids_cgrp_id];
else
- css = new_cset->subsys[pids_cgrp_id];
+ css = task_css_check(current, pids_cgrp_id, true);
pids = css_pids(css);
pids_uncharge(pids, 1);
}
- Michal Koutný <[email protected]>:
- update comment for cgroup_fork()
- if CLONE_NEWCGROUP and CLONE_INTO_CGROUP is requested, set the
root_cset of the new cgroup namespace to the child's cset
/* v4 */
Link: https://lore.kernel.org/r/[email protected]
- Tejun Heo <[email protected]>:
- verify that we can write to the target cgroup since we're not going through
the vfs layer which would do it for us
diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
index 61d1a6cd0059..6b38b2545667 100644
--- a/kernel/cgroup/cgroup.c
+++ b/kernel/cgroup/cgroup.c
@@ -5966,6 +5966,15 @@ static int cgroup_css_set_fork(struct task_struct *parent,
goto err;
}
+ /*
+	 * Verify that the target cgroup is writable for us. This is
+	 * usually done by the vfs layer but since we're not going through the
+	 * vfs layer here we need to do it ourselves.
+ */
+ ret = cgroup_may_write(dst_cgrp, sb);
+ if (ret)
+ goto err;
+
ret = cgroup_attach_permissions(cset->dfl_cgrp, dst_cgrp, sb,
!!(kargs->flags & CLONE_THREAD));
if (ret)
/* v5 */
- Oleg Nesterov <[email protected]>:
- remove struct task_struct *parent argument from clone helpers in favor of
using current directly
- remove cgroup_same_domain_helper()
diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
index f4379401327a..4d36255ef25f 100644
--- a/kernel/cgroup/cgroup.c
+++ b/kernel/cgroup/cgroup.c
@@ -4696,12 +4696,6 @@ static int cgroup_procs_write_permission(struct cgroup *src_cgrp,
return 0;
}
-static inline bool cgroup_same_domain(const struct cgroup *src_cgrp,
- const struct cgroup *dst_cgrp)
-{
- return src_cgrp->dom_cgrp == dst_cgrp->dom_cgrp;
-}
-
static int cgroup_attach_permissions(struct cgroup *src_cgrp,
struct cgroup *dst_cgrp,
struct super_block *sb, bool thread)
@@ -4716,8 +4710,7 @@ static int cgroup_attach_permissions(struct cgroup *src_cgrp,
if (ret)
return ret;
- if (thread &&
- !cgroup_same_domain(src_cgrp->dom_cgrp, dst_cgrp->dom_cgrp))
+ if (thread && (src_cgrp->dom_cgrp != dst_cgrp->dom_cgrp))
ret = -EOPNOTSUPP;
return ret;
- put kargs->cset on failure
diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
index 4d36255ef25f..482055d1e64a 100644
--- a/kernel/cgroup/cgroup.c
+++ b/kernel/cgroup/cgroup.c
@@ -5994,6 +5994,8 @@ static int cgroup_css_set_fork(struct kernel_clone_args *kargs)
if (dst_cgrp)
cgroup_put(dst_cgrp);
put_css_set(cset);
+ if (kargs->cset)
+ put_css_set(kargs->cset);
return ret;
}
---
include/linux/cgroup-defs.h | 6 +-
include/linux/cgroup.h | 20 ++--
include/linux/sched/task.h | 4 +
include/uapi/linux/sched.h | 5 +
kernel/cgroup/cgroup.c | 189 +++++++++++++++++++++++++++++++-----
kernel/cgroup/pids.c | 15 ++-
kernel/fork.c | 13 ++-
7 files changed, 212 insertions(+), 40 deletions(-)
diff --git a/include/linux/cgroup-defs.h b/include/linux/cgroup-defs.h
index 63097cb243cb..89d627abcbd6 100644
--- a/include/linux/cgroup-defs.h
+++ b/include/linux/cgroup-defs.h
@@ -33,6 +33,7 @@ struct kernfs_ops;
struct kernfs_open_file;
struct seq_file;
struct poll_table_struct;
+struct kernel_clone_args;
#define MAX_CGROUP_TYPE_NAMELEN 32
#define MAX_CGROUP_ROOT_NAMELEN 64
@@ -628,8 +629,9 @@ struct cgroup_subsys {
void (*cancel_attach)(struct cgroup_taskset *tset);
void (*attach)(struct cgroup_taskset *tset);
void (*post_attach)(void);
- int (*can_fork)(struct task_struct *task);
- void (*cancel_fork)(struct task_struct *task);
+ int (*can_fork)(struct task_struct *task,
+ struct css_set *cset);
+ void (*cancel_fork)(struct task_struct *task, struct css_set *cset);
void (*fork)(struct task_struct *task);
void (*exit)(struct task_struct *task);
void (*release)(struct task_struct *task);
diff --git a/include/linux/cgroup.h b/include/linux/cgroup.h
index d7ddebd0cdec..fbbaeac9fe29 100644
--- a/include/linux/cgroup.h
+++ b/include/linux/cgroup.h
@@ -27,6 +27,8 @@
#include <linux/cgroup-defs.h>
+struct kernel_clone_args;
+
#ifdef CONFIG_CGROUPS
/*
@@ -121,9 +123,12 @@ int proc_cgroup_show(struct seq_file *m, struct pid_namespace *ns,
struct pid *pid, struct task_struct *tsk);
void cgroup_fork(struct task_struct *p);
-extern int cgroup_can_fork(struct task_struct *p);
-extern void cgroup_cancel_fork(struct task_struct *p);
-extern void cgroup_post_fork(struct task_struct *p);
+extern int cgroup_can_fork(struct task_struct *p,
+ struct kernel_clone_args *kargs);
+extern void cgroup_cancel_fork(struct task_struct *p,
+ struct kernel_clone_args *kargs);
+extern void cgroup_post_fork(struct task_struct *p,
+ struct kernel_clone_args *kargs);
void cgroup_exit(struct task_struct *p);
void cgroup_release(struct task_struct *p);
void cgroup_free(struct task_struct *p);
@@ -707,9 +712,12 @@ static inline int cgroupstats_build(struct cgroupstats *stats,
struct dentry *dentry) { return -EINVAL; }
static inline void cgroup_fork(struct task_struct *p) {}
-static inline int cgroup_can_fork(struct task_struct *p) { return 0; }
-static inline void cgroup_cancel_fork(struct task_struct *p) {}
-static inline void cgroup_post_fork(struct task_struct *p) {}
+static inline int cgroup_can_fork(struct task_struct *p,
+ struct kernel_clone_args *kargs) { return 0; }
+static inline void cgroup_cancel_fork(struct task_struct *p,
+ struct kernel_clone_args *kargs) {}
+static inline void cgroup_post_fork(struct task_struct *p,
+ struct kernel_clone_args *kargs) {}
static inline void cgroup_exit(struct task_struct *p) {}
static inline void cgroup_release(struct task_struct *p) {}
static inline void cgroup_free(struct task_struct *p) {}
diff --git a/include/linux/sched/task.h b/include/linux/sched/task.h
index f1879884238e..38359071236a 100644
--- a/include/linux/sched/task.h
+++ b/include/linux/sched/task.h
@@ -13,6 +13,7 @@
struct task_struct;
struct rusage;
union thread_union;
+struct css_set;
/* All the bits taken by the old clone syscall. */
#define CLONE_LEGACY_FLAGS 0xffffffffULL
@@ -29,6 +30,9 @@ struct kernel_clone_args {
pid_t *set_tid;
/* Number of elements in *set_tid */
size_t set_tid_size;
+ int cgroup;
+ struct cgroup *cgrp;
+ struct css_set *cset;
};
/*
diff --git a/include/uapi/linux/sched.h b/include/uapi/linux/sched.h
index 4a0217832464..08620c220f30 100644
--- a/include/uapi/linux/sched.h
+++ b/include/uapi/linux/sched.h
@@ -35,6 +35,7 @@
/* Flags for the clone3() syscall. */
#define CLONE_CLEAR_SIGHAND 0x100000000ULL /* Clear any signal handler and reset to SIG_DFL. */
+#define CLONE_INTO_CGROUP 0x200000000ULL /* Clone into a specific cgroup given the right permissions. */
#ifndef __ASSEMBLY__
/**
@@ -75,6 +76,8 @@
* @set_tid_size: This defines the size of the array referenced
* in @set_tid. This cannot be larger than the
* kernel's limit of nested PID namespaces.
+ * @cgroup: If CLONE_INTO_CGROUP is specified set this to
+ * a file descriptor for the cgroup.
*
* The structure is versioned by size and thus extensible.
* New struct members must go at the end of the struct and
@@ -91,11 +94,13 @@ struct clone_args {
__aligned_u64 tls;
__aligned_u64 set_tid;
__aligned_u64 set_tid_size;
+ __aligned_u64 cgroup;
};
#endif
#define CLONE_ARGS_SIZE_VER0 64 /* sizeof first published struct */
#define CLONE_ARGS_SIZE_VER1 80 /* sizeof second published struct */
+#define CLONE_ARGS_SIZE_VER2 88 /* sizeof third published struct */
/*
* Scheduling policies
diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
index 636fe3d46d2d..6a7f53fd8374 100644
--- a/kernel/cgroup/cgroup.c
+++ b/kernel/cgroup/cgroup.c
@@ -5876,8 +5876,7 @@ int proc_cgroup_show(struct seq_file *m, struct pid_namespace *ns,
* @child: pointer to task_struct of forking parent process.
*
* A task is associated with the init_css_set until cgroup_post_fork()
- * attaches it to the parent's css_set. Empty cg_list indicates that
- * @child isn't holding reference to its css_set.
+ * attaches it to the target css_set.
*/
void cgroup_fork(struct task_struct *child)
{
@@ -5903,25 +5902,156 @@ static struct cgroup *cgroup_get_from_file(struct file *f)
return cgrp;
}
+/**
+ * cgroup_css_set_fork - find or create a css_set for a child process
+ * @kargs: the arguments passed to create the child process
+ *
+ * This function finds or creates a new css_set which the child
+ * process will be attached to in cgroup_post_fork(). By default,
+ * the child process will be given the same css_set as its parent.
+ *
+ * If CLONE_INTO_CGROUP is specified this function will try to find an
+ * existing css_set which includes the requested cgroup and if not create
+ * a new css_set that the child will be attached to later. If this function
+ * succeeds it will hold cgroup_threadgroup_rwsem on return. If
+ * CLONE_INTO_CGROUP is requested this function will grab cgroup mutex
+ * before grabbing cgroup_threadgroup_rwsem and will hold a reference
+ * to the target cgroup.
+ */
+static int cgroup_css_set_fork(struct kernel_clone_args *kargs)
+ __acquires(&cgroup_mutex) __acquires(&cgroup_threadgroup_rwsem)
+{
+ int ret;
+ struct cgroup *dst_cgrp = NULL;
+ struct css_set *cset;
+ struct super_block *sb;
+ struct file *f;
+
+ if (kargs->flags & CLONE_INTO_CGROUP)
+ mutex_lock(&cgroup_mutex);
+
+ cgroup_threadgroup_change_begin(current);
+
+ spin_lock_irq(&css_set_lock);
+ cset = task_css_set(current);
+ get_css_set(cset);
+ spin_unlock_irq(&css_set_lock);
+
+ if (!(kargs->flags & CLONE_INTO_CGROUP)) {
+ kargs->cset = cset;
+ return 0;
+ }
+
+ f = fget_raw(kargs->cgroup);
+ if (!f) {
+ ret = -EBADF;
+ goto err;
+ }
+ sb = f->f_path.dentry->d_sb;
+
+ dst_cgrp = cgroup_get_from_file(f);
+ if (IS_ERR(dst_cgrp)) {
+ ret = PTR_ERR(dst_cgrp);
+ dst_cgrp = NULL;
+ goto err;
+ }
+
+ /*
+	 * Verify that the target cgroup is writable for us. This is
+ * usually done by the vfs layer but since we're not going through
+ * the vfs layer here we need to do it "manually".
+ */
+ ret = cgroup_may_write(dst_cgrp, sb);
+ if (ret)
+ goto err;
+
+ ret = cgroup_attach_permissions(cset->dfl_cgrp, dst_cgrp, sb,
+ !!(kargs->flags & CLONE_THREAD));
+ if (ret)
+ goto err;
+
+ kargs->cset = find_css_set(cset, dst_cgrp);
+ if (!kargs->cset) {
+ ret = -ENOMEM;
+ goto err;
+ }
+
+ if (cgroup_is_dead(dst_cgrp)) {
+ ret = -ENODEV;
+ goto err;
+ }
+
+ put_css_set(cset);
+ fput(f);
+ kargs->cgrp = dst_cgrp;
+ return ret;
+
+err:
+ cgroup_threadgroup_change_end(current);
+ mutex_unlock(&cgroup_mutex);
+ if (f)
+ fput(f);
+ if (dst_cgrp)
+ cgroup_put(dst_cgrp);
+ put_css_set(cset);
+ if (kargs->cset)
+ put_css_set(kargs->cset);
+ return ret;
+}
+
+/**
+ * cgroup_css_set_put_fork - drop references we took during fork
+ * @kargs: the arguments passed to create the child process
+ *
+ * Drop references to the prepared css_set and target cgroup if
+ * CLONE_INTO_CGROUP was requested. This function can only be
+ * called before fork()'s point of no return.
+ */
+static void cgroup_css_set_put_fork(struct kernel_clone_args *kargs)
+ __releases(&cgroup_threadgroup_rwsem) __releases(&cgroup_mutex)
+{
+ cgroup_threadgroup_change_end(current);
+
+ if (kargs->flags & CLONE_INTO_CGROUP) {
+ struct cgroup *cgrp = kargs->cgrp;
+ struct css_set *cset = kargs->cset;
+
+ mutex_unlock(&cgroup_mutex);
+
+ if (cset) {
+ put_css_set(cset);
+ kargs->cset = NULL;
+ }
+
+ if (cgrp) {
+ cgroup_put(cgrp);
+ kargs->cgrp = NULL;
+ }
+ }
+}
+
/**
* cgroup_can_fork - called on a new task before the process is exposed
* @child: the child process
* @kargs: the arguments passed to create the child process
*
+ * This prepares a new css_set for the child process which the child will
+ * be attached to in cgroup_post_fork().
* This calls the subsystem can_fork() callbacks. If the cgroup_can_fork()
* callback returns an error, the fork aborts with that error code. This
* allows for a cgroup subsystem to conditionally allow or deny new forks.
*/
-int cgroup_can_fork(struct task_struct *child)
- __acquires(&cgroup_threadgroup_rwsem) __releases(&cgroup_threadgroup_rwsem)
+int cgroup_can_fork(struct task_struct *child, struct kernel_clone_args *kargs)
{
struct cgroup_subsys *ss;
int i, j, ret;
- cgroup_threadgroup_change_begin(current);
+ ret = cgroup_css_set_fork(kargs);
+ if (ret)
+ return ret;
do_each_subsys_mask(ss, i, have_canfork_callback) {
- ret = ss->can_fork(child);
+ ret = ss->can_fork(child, kargs->cset);
if (ret)
goto out_revert;
} while_each_subsys_mask();
@@ -5933,33 +6063,34 @@ int cgroup_can_fork(struct task_struct *child)
if (j >= i)
break;
if (ss->cancel_fork)
- ss->cancel_fork(child);
+ ss->cancel_fork(child, kargs->cset);
}
- cgroup_threadgroup_change_end(current);
+ cgroup_css_set_put_fork(kargs);
return ret;
}
/**
- * cgroup_cancel_fork - called if a fork failed after cgroup_can_fork()
- * @child: the child process
- * @kargs: the arguments passed to create the child process
- *
- * This calls the cancel_fork() callbacks if a fork failed *after*
- * cgroup_can_fork() succeded.
- */
-void cgroup_cancel_fork(struct task_struct *child)
- __releases(&cgroup_threadgroup_rwsem)
+ * cgroup_cancel_fork - called if a fork failed after cgroup_can_fork()
+ * @child: the child process
+ * @kargs: the arguments passed to create the child process
+ *
+ * This calls the cancel_fork() callbacks if a fork failed *after*
+ * cgroup_can_fork() succeeded and cleans up references we took to
+ * prepare a new css_set for the child process in cgroup_can_fork().
+ */
+void cgroup_cancel_fork(struct task_struct *child,
+ struct kernel_clone_args *kargs)
{
struct cgroup_subsys *ss;
int i;
for_each_subsys(ss, i)
if (ss->cancel_fork)
- ss->cancel_fork(child);
+ ss->cancel_fork(child, kargs->cset);
- cgroup_threadgroup_change_end(current);
+ cgroup_css_set_put_fork(kargs);
}
/**
@@ -5970,18 +6101,17 @@ void cgroup_cancel_fork(struct task_struct *child)
* Attach the child process to its css_set calling the subsystem fork()
* callbacks.
*/
-void cgroup_post_fork(struct task_struct *child)
- __releases(&cgroup_threadgroup_rwsem)
+void cgroup_post_fork(struct task_struct *child,
+ struct kernel_clone_args *kargs)
+ __releases(&cgroup_threadgroup_rwsem) __releases(&cgroup_mutex)
{
struct cgroup_subsys *ss;
- struct css_set *cset;
+ struct css_set *cset = kargs->cset;
int i;
spin_lock_irq(&css_set_lock);
WARN_ON_ONCE(!list_empty(&child->cg_list));
- cset = task_css_set(current); /* current is @child's parent */
- get_css_set(cset);
cset->nr_tasks++;
css_set_move_task(child, NULL, cset, false);
@@ -6016,6 +6146,17 @@ void cgroup_post_fork(struct task_struct *child)
} while_each_subsys_mask();
cgroup_threadgroup_change_end(current);
+
+ if (kargs->flags & CLONE_INTO_CGROUP) {
+ mutex_unlock(&cgroup_mutex);
+
+ cgroup_put(kargs->cgrp);
+ kargs->cgrp = NULL;
+ }
+
+ /* Make the new cset the root_cset of the new cgroup namespace. */
+ if (kargs->flags & CLONE_NEWCGROUP)
+ child->nsproxy->cgroup_ns->root_cset = cset;
}
/**
diff --git a/kernel/cgroup/pids.c b/kernel/cgroup/pids.c
index 138059eb730d..511af87f685e 100644
--- a/kernel/cgroup/pids.c
+++ b/kernel/cgroup/pids.c
@@ -33,6 +33,7 @@
#include <linux/atomic.h>
#include <linux/cgroup.h>
#include <linux/slab.h>
+#include <linux/sched/task.h>
#define PIDS_MAX (PID_MAX_LIMIT + 1ULL)
#define PIDS_MAX_STR "max"
@@ -214,13 +215,16 @@ static void pids_cancel_attach(struct cgroup_taskset *tset)
* task_css_check(true) in pids_can_fork() and pids_cancel_fork() relies
* on cgroup_threadgroup_change_begin() held by the copy_process().
*/
-static int pids_can_fork(struct task_struct *task)
+static int pids_can_fork(struct task_struct *task, struct css_set *cset)
{
struct cgroup_subsys_state *css;
struct pids_cgroup *pids;
int err;
- css = task_css_check(current, pids_cgrp_id, true);
+ if (cset)
+ css = cset->subsys[pids_cgrp_id];
+ else
+ css = task_css_check(current, pids_cgrp_id, true);
pids = css_pids(css);
err = pids_try_charge(pids, 1);
if (err) {
@@ -235,12 +239,15 @@ static int pids_can_fork(struct task_struct *task)
return err;
}
-static void pids_cancel_fork(struct task_struct *task)
+static void pids_cancel_fork(struct task_struct *task, struct css_set *cset)
{
struct cgroup_subsys_state *css;
struct pids_cgroup *pids;
- css = task_css_check(current, pids_cgrp_id, true);
+ if (cset)
+ css = cset->subsys[pids_cgrp_id];
+ else
+ css = task_css_check(current, pids_cgrp_id, true);
pids = css_pids(css);
pids_uncharge(pids, 1);
}
diff --git a/kernel/fork.c b/kernel/fork.c
index ca5de25690c8..2853e258fe1f 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -2171,7 +2171,7 @@ static __latent_entropy struct task_struct *copy_process(
* between here and cgroup_post_fork() if an organisation operation is in
* progress.
*/
- retval = cgroup_can_fork(p);
+ retval = cgroup_can_fork(p, args);
if (retval)
goto bad_fork_put_pidfd;
@@ -2278,7 +2278,7 @@ static __latent_entropy struct task_struct *copy_process(
write_unlock_irq(&tasklist_lock);
proc_fork_connector(p);
- cgroup_post_fork(p);
+ cgroup_post_fork(p, args);
perf_event_fork(p);
trace_task_newtask(p, clone_flags);
@@ -2289,7 +2289,7 @@ static __latent_entropy struct task_struct *copy_process(
bad_fork_cancel_cgroup:
spin_unlock(&current->sighand->siglock);
write_unlock_irq(&tasklist_lock);
- cgroup_cancel_fork(p);
+ cgroup_cancel_fork(p, args);
bad_fork_put_pidfd:
if (clone_flags & CLONE_PIDFD) {
fput(pidfile);
@@ -2618,6 +2618,9 @@ noinline static int copy_clone_args_from_user(struct kernel_clone_args *kargs,
!valid_signal(args.exit_signal)))
return -EINVAL;
+ if ((args.flags & CLONE_INTO_CGROUP) && args.cgroup < 0)
+ return -EINVAL;
+
*kargs = (struct kernel_clone_args){
.flags = args.flags,
.pidfd = u64_to_user_ptr(args.pidfd),
@@ -2628,6 +2631,7 @@ noinline static int copy_clone_args_from_user(struct kernel_clone_args *kargs,
.stack_size = args.stack_size,
.tls = args.tls,
.set_tid_size = args.set_tid_size,
+ .cgroup = args.cgroup,
};
if (args.set_tid &&
@@ -2671,7 +2675,8 @@ static inline bool clone3_stack_valid(struct kernel_clone_args *kargs)
static bool clone3_args_valid(struct kernel_clone_args *kargs)
{
/* Verify that no unknown flags are passed along. */
- if (kargs->flags & ~(CLONE_LEGACY_FLAGS | CLONE_CLEAR_SIGHAND))
+ if (kargs->flags &
+ ~(CLONE_LEGACY_FLAGS | CLONE_CLEAR_SIGHAND | CLONE_INTO_CGROUP))
return false;
/*
--
2.25.0
On Tue, Jan 21, 2020 at 04:48:39PM +0100, Christian Brauner <[email protected]> wrote:
> +static int cgroup_attach_permissions(struct cgroup *src_cgrp,
> + struct cgroup *dst_cgrp,
> + struct super_block *sb, bool thread)
I suggest inverting the logic of the last argument to make it consistent
with other functions that use a threadgroup argument for a similar
distinction.
Apart from that
Acked-by: Michal Koutný <[email protected]>
On Tue, Jan 21, 2020 at 04:48:40PM +0100, Christian Brauner <[email protected]> wrote:
> Add a helper cgroup_get_from_file(). The helper will be used in
> subsequent patches to retrieve a cgroup while holding a reference to the
> struct file it was taken from.
Acked-by: Michal Koutný <[email protected]>
On Tue, Jan 21, 2020 at 04:48:41PM +0100, Christian Brauner <[email protected]> wrote:
> diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
> index 9b3241d67592..ce2d5b8aa19f 100644
> --- a/kernel/cgroup/cgroup.c
> +++ b/kernel/cgroup/cgroup.c
> @@ -5895,17 +5895,21 @@ static struct cgroup *cgroup_get_from_file(struct file *f)
>
> /**
> * cgroup_can_fork - called on a new task before the process is exposed
> - * @child: the task in question.
> + * @child: the child process
> + * @kargs: the arguments passed to create the child process
This comment should only come with the later commits.
> - * cgroup_cancel_fork - called if a fork failed after cgroup_can_fork()
> - * @child: the task in question
> - *
> - * This calls the cancel_fork() callbacks if a fork failed *after*
> - * cgroup_can_fork() succeded.
> - */
> + * cgroup_cancel_fork - called if a fork failed after cgroup_can_fork()
> + * @child: the child process
> + * @kargs: the arguments passed to create the child process
Ditto
> - * cgroup_post_fork - called on a new task after adding it to the task list
> - * @child: the task in question
> - *
> - * Adds the task to the list running through its css_set if necessary and
> - * call the subsystem fork() callbacks. Has to be after the task is
> - * visible on the task list in case we race with the first call to
> - * cgroup_task_iter_start() - to guarantee that the new task ends up on its
> - * list.
> + * cgroup_post_fork - finalize cgroup setup for the child process
> + * @child: the child process
> + * @kargs: the arguments passed to create the child process
One more.
Besides the misrebased comments
Acked-by: Michal Koutný <[email protected]>
Hello.
On Tue, Jan 21, 2020 at 04:48:43PM +0100, Christian Brauner <[email protected]> wrote:
> +static int cgroup_css_set_fork(struct kernel_clone_args *kargs)
> + __acquires(&cgroup_mutex) __acquires(&cgroup_threadgroup_rwsem)
> +{
> + int ret;
> + struct cgroup *dst_cgrp = NULL;
> + struct css_set *cset;
> + struct super_block *sb;
> + struct file *f;
> +
> + if (kargs->flags & CLONE_INTO_CGROUP)
> + mutex_lock(&cgroup_mutex);
> +
> + cgroup_threadgroup_change_begin(current);
> +
> + spin_lock_irq(&css_set_lock);
> + cset = task_css_set(current);
> + get_css_set(cset);
> + spin_unlock_irq(&css_set_lock);
> +
> + if (!(kargs->flags & CLONE_INTO_CGROUP)) {
> + kargs->cset = cset;
Where is this css_set put when CLONE_INTO_CGROUP isn't used?
(Aha, it's passed to child's tsk->cgroups but see my other note below.)
> + dst_cgrp = cgroup_get_from_file(f);
> + if (IS_ERR(dst_cgrp)) {
> + ret = PTR_ERR(dst_cgrp);
> + dst_cgrp = NULL;
> + goto err;
> + }
> +
> + /*
> + * Verify that we the target cgroup is writable for us. This is
> + * usually done by the vfs layer but since we're not going through
> + * the vfs layer here we need to do it "manually".
> + */
> + ret = cgroup_may_write(dst_cgrp, sb);
> + if (ret)
> + goto err;
> +
> + ret = cgroup_attach_permissions(cset->dfl_cgrp, dst_cgrp, sb,
> + !!(kargs->flags & CLONE_THREAD));
> + if (ret)
> + goto err;
> +
> + kargs->cset = find_css_set(cset, dst_cgrp);
> + if (!kargs->cset) {
> + ret = -ENOMEM;
> + goto err;
> + }
> +
> + if (cgroup_is_dead(dst_cgrp)) {
> + ret = -ENODEV;
> + goto err;
> + }
I'd move this check right after cgroup_get_from_file(). The fork-migration
path is synchronized via cgroup_mutex with cgroup_destroy_locked() and
there's no need to check permissions on a cgroup that's going away anyway.
> +static void cgroup_css_set_put_fork(struct kernel_clone_args *kargs)
> + __releases(&cgroup_threadgroup_rwsem) __releases(&cgroup_mutex)
> +{
> + cgroup_threadgroup_change_end(current);
> +
> + if (kargs->flags & CLONE_INTO_CGROUP) {
> + struct cgroup *cgrp = kargs->cgrp;
> + struct css_set *cset = kargs->cset;
> +
> + mutex_unlock(&cgroup_mutex);
> +
> + if (cset) {
> + put_css_set(cset);
> + kargs->cset = NULL;
> + }
> +
> + if (cgrp) {
> + cgroup_put(cgrp);
> + kargs->cgrp = NULL;
> + }
> + }
I don't see any functional problem with this ordering, however, I'd
prefer symmetry with the "allocation" path (in cgroup_css_set_fork),
i.e. cgroup_put, put_css_set and lastly mutex_unlock.
> +void cgroup_post_fork(struct task_struct *child,
> + struct kernel_clone_args *kargs)
> + __releases(&cgroup_threadgroup_rwsem) __releases(&cgroup_mutex)
> {
> struct cgroup_subsys *ss;
> - struct css_set *cset;
> + struct css_set *cset = kargs->cset;
> int i;
>
> spin_lock_irq(&css_set_lock);
>
> WARN_ON_ONCE(!list_empty(&child->cg_list));
> - cset = task_css_set(current); /* current is @child's parent */
> - get_css_set(cset);
> cset->nr_tasks++;
> css_set_move_task(child, NULL, cset, false);
So, the reference is passed over from kargs->cset to task->cgroups. I
think it's necessary to zero kargs->cset in order to prevent dropping the
reference in cgroup_css_set_put_fork.
Perhaps, a general comment about css_set whereabouts during fork and
kargs passing would be useful.
> @@ -6016,6 +6146,17 @@ void cgroup_post_fork(struct task_struct *child)
> } while_each_subsys_mask();
>
> cgroup_threadgroup_change_end(current);
> +
> + if (kargs->flags & CLONE_INTO_CGROUP) {
> + mutex_unlock(&cgroup_mutex);
> +
> + cgroup_put(kargs->cgrp);
> + kargs->cgrp = NULL;
> + }
> +
> + /* Make the new cset the root_cset of the new cgroup namespace. */
> + if (kargs->flags & CLONE_NEWCGROUP)
> + child->nsproxy->cgroup_ns->root_cset = cset;
root_cset reference (from copy_cgroup_ns) seems leaked here and where is
the additional reference to new cset obtained?
Thanks,
Michal
On Wed, Jan 29, 2020 at 02:27:19PM +0100, Michal Koutný wrote:
> Hello.
>
> On Tue, Jan 21, 2020 at 04:48:43PM +0100, Christian Brauner <[email protected]> wrote:
> > +static int cgroup_css_set_fork(struct kernel_clone_args *kargs)
> > + __acquires(&cgroup_mutex) __acquires(&cgroup_threadgroup_rwsem)
> > +{
> > + int ret;
> > + struct cgroup *dst_cgrp = NULL;
> > + struct css_set *cset;
> > + struct super_block *sb;
> > + struct file *f;
> > +
> > + if (kargs->flags & CLONE_INTO_CGROUP)
> > + mutex_lock(&cgroup_mutex);
> > +
> > + cgroup_threadgroup_change_begin(current);
> > +
> > + spin_lock_irq(&css_set_lock);
> > + cset = task_css_set(current);
> > + get_css_set(cset);
> > + spin_unlock_irq(&css_set_lock);
> > +
> > + if (!(kargs->flags & CLONE_INTO_CGROUP)) {
> > + kargs->cset = cset;
> Where is this css_set put when CLONE_INTO_CGROUP isn't used?
> (Aha, it's passed to child's tsk->cgroups but see my other note below.)
>
> > + dst_cgrp = cgroup_get_from_file(f);
> > + if (IS_ERR(dst_cgrp)) {
> > + ret = PTR_ERR(dst_cgrp);
> > + dst_cgrp = NULL;
> > + goto err;
> > + }
> > +
> > + /*
> > + * Verify that we the target cgroup is writable for us. This is
> > + * usually done by the vfs layer but since we're not going through
> > + * the vfs layer here we need to do it "manually".
> > + */
> > + ret = cgroup_may_write(dst_cgrp, sb);
> > + if (ret)
> > + goto err;
> > +
> > + ret = cgroup_attach_permissions(cset->dfl_cgrp, dst_cgrp, sb,
> > + !!(kargs->flags & CLONE_THREAD));
> > + if (ret)
> > + goto err;
> > +
> > + kargs->cset = find_css_set(cset, dst_cgrp);
> > + if (!kargs->cset) {
> > + ret = -ENOMEM;
> > + goto err;
> > + }
> > +
> > + if (cgroup_is_dead(dst_cgrp)) {
> > + ret = -ENODEV;
> > + goto err;
> > + }
> I'd move this check right after cgroup_get_from_file. The fork-migration
> path is synchrinized via cgroup_mutex with cgroup_destroy_locked and
> there's no need checking permissions on cgroup that's going away anyway.
>
>
> > +static void cgroup_css_set_put_fork(struct kernel_clone_args *kargs)
> > + __releases(&cgroup_threadgroup_rwsem) __releases(&cgroup_mutex)
> > +{
> > + cgroup_threadgroup_change_end(current);
> > +
> > + if (kargs->flags & CLONE_INTO_CGROUP) {
> > + struct cgroup *cgrp = kargs->cgrp;
> > + struct css_set *cset = kargs->cset;
> > +
> > + mutex_unlock(&cgroup_mutex);
> > +
> > + if (cset) {
> > + put_css_set(cset);
> > + kargs->cset = NULL;
> > + }
> > +
> > + if (cgrp) {
> > + cgroup_put(cgrp);
> > + kargs->cgrp = NULL;
> > + }
> > + }
> I don't see any function problem with this ordering, however, I'd
> prefer symmetry with the "allocation" path (in cgroup_css_set_fork),
> i.e. cgroup_put, put_css_set and lastly mutex_unlock.
I prefer to yield the mutex as early as possible.
>
> > +void cgroup_post_fork(struct task_struct *child,
> > + struct kernel_clone_args *kargs)
> > + __releases(&cgroup_threadgroup_rwsem) __releases(&cgroup_mutex)
> > {
> > struct cgroup_subsys *ss;
> > - struct css_set *cset;
> > + struct css_set *cset = kargs->cset;
> > int i;
> >
> > spin_lock_irq(&css_set_lock);
> >
> > WARN_ON_ONCE(!list_empty(&child->cg_list));
> > - cset = task_css_set(current); /* current is @child's parent */
> > - get_css_set(cset);
> > cset->nr_tasks++;
> > css_set_move_task(child, NULL, cset, false);
> So, the reference is passed over from kargs->cset to task->cgroups. I
> think it's necessary to zero kargs->cset in order to prevent droping the
> reference in cgroup_css_set_put_fork.
cgroup_post_fork() is called past the point of no return for fork and
cgroup_css_set_put_fork() is explicitly documented as only being
callable before fork's point of no return:
* Drop references to the prepared css_set and target cgroup if
* CLONE_INTO_CGROUP was requested. This function can only be
* called before fork()'s point of no return.
> Perhaps, a general comment about css_set whereabouts during fork and
> kargs passing would be useful.
>
> > @@ -6016,6 +6146,17 @@ void cgroup_post_fork(struct task_struct *child)
> > } while_each_subsys_mask();
> >
> > cgroup_threadgroup_change_end(current);
> > +
> > + if (kargs->flags & CLONE_INTO_CGROUP) {
> > + mutex_unlock(&cgroup_mutex);
> > +
> > + cgroup_put(kargs->cgrp);
> > + kargs->cgrp = NULL;
> > + }
> > +
> > + /* Make the new cset the root_cset of the new cgroup namespace. */
> > + if (kargs->flags & CLONE_NEWCGROUP)
> > + child->nsproxy->cgroup_ns->root_cset = cset;
> root_cset reference (from copy_cgroup_ns) seems leaked here and where is
> the additional reference to new cset obtained?
This should be:
if (kargs->flags & CLONE_NEWCGROUP) {
struct css_set *rcset = child->nsproxy->cgroup_ns->root_cset;
get_css_set(cset);
child->nsproxy->cgroup_ns->root_cset = cset;
put_css_set(rcset);
}
Thanks!
Christian
On Sun, Feb 02, 2020 at 10:37:02AM +0100, Christian Brauner <[email protected]> wrote:
> cgroup_post_fork() is called past the point of no return for fork and
> cgroup_css_set_put_fork() is explicitly documented as only being
> callable before forks point of no return:
I missed this and somehow incorrectly assumed it's called at the end of
fork too. I find the css_set refcounting correct now.
BTW any reason why not to utilize cgroup_css_set_put_fork() for the
regular cleanup in cgroup_post_fork() too?
Thanks,
Michal
On Mon, Feb 03, 2020 at 03:32:28PM +0100, Michal Koutný wrote:
> On Sun, Feb 02, 2020 at 10:37:02AM +0100, Christian Brauner <[email protected]> wrote:
> > cgroup_post_fork() is called past the point of no return for fork and
> > cgroup_css_set_put_fork() is explicitly documented as only being
> > callable before forks point of no return:
> I missed this and somehow incorrectly assumed it's called at the end of
> fork too. I find the css_set refcounting correct now.
>
> BTW any reason why not to utilize cgroup_css_set_put_fork() for the
> regular cleanup in cgroup_post_fork() too?
Hmyeah, should be doable if we do:
kargs->cset = NULL;
cgroup_css_set_put_fork(kargs);
Christian
On Tue, Jan 21, 2020 at 04:48:43PM +0100, Christian Brauner wrote:
> This adds support for creating a process in a different cgroup than its
> parent. Callers can limit and account processes and threads right from
> the moment they are spawned:
> - A service manager can directly spawn new services into dedicated
> cgroups.
> - A process can be directly created in a frozen cgroup and will be
> frozen as well.
> - The initial accounting jitter experienced by process supervisors and
> daemons is eliminated with this.
> - Threaded applications or even thread implementations can choose to
> create a specific cgroup layout where each thread is spawned
> directly into a dedicated cgroup.
>
> This feature is limited to the unified hierarchy. Callers need to pass
> an directory file descriptor for the target cgroup. The caller can
> choose to pass an O_PATH file descriptor. All usual migration
> restrictions apply, i.e. there can be no processes in inner nodes. In
> general, creating a process directly in a target cgroup adheres to all
> migration restrictions.
AFAICT, the *big* win here is avoiding the write side of the
cgroup_threadgroup_rwsem. Or am I mis-reading the patch?
That global lock is what makes moving tasks/threads around super
expensive, avoiding that by use of this clone() variant wins the day.
On Tue, Feb 04, 2020 at 12:53:51PM +0100, Peter Zijlstra wrote:
> On Tue, Jan 21, 2020 at 04:48:43PM +0100, Christian Brauner wrote:
> > This adds support for creating a process in a different cgroup than its
> > parent. Callers can limit and account processes and threads right from
> > the moment they are spawned:
> > - A service manager can directly spawn new services into dedicated
> > cgroups.
> > - A process can be directly created in a frozen cgroup and will be
> > frozen as well.
> > - The initial accounting jitter experienced by process supervisors and
> > daemons is eliminated with this.
> > - Threaded applications or even thread implementations can choose to
> > create a specific cgroup layout where each thread is spawned
> > directly into a dedicated cgroup.
> >
> > This feature is limited to the unified hierarchy. Callers need to pass
> > an directory file descriptor for the target cgroup. The caller can
> > choose to pass an O_PATH file descriptor. All usual migration
> > restrictions apply, i.e. there can be no processes in inner nodes. In
> > general, creating a process directly in a target cgroup adheres to all
> > migration restrictions.
>
> AFAICT, he *big* win here is avoiding the write side of the
> cgroup_threadgroup_rwsem. Or am I mis-reading the patch?
No, you're absolutely right. I just didn't bother putting implementation
specifics in the cover letter and I probably should have. So thanks for
pointing that out!
>
> That global lock is what makes moving tasks/threads around super
> expensive, avoiding that by use of this clone() variant wins the day.
:)
Christian