2015-11-27 18:56:30

by Oleg Nesterov

Subject: [PATCH 0/3] cgroup: fix race between cgroup_post_fork() and cgroup_migrate()

On 11/26, Oleg Nesterov wrote:
>
> OK. I do not know exactly what you mean; perhaps if you fix this problem
> the race between fork and attach goes away, and in that case the fix I sent
> is not needed?

Otherwise please consider this series.

Lightly tested, seems to work; please review.

Oleg.

include/linux/cgroup-defs.h | 12 ++------
include/linux/cgroup.h | 19 ++++---------
include/linux/cgroup_subsys.h | 18 ------------
kernel/cgroup.c | 30 +++++----------------
kernel/cgroup_freezer.c | 2 +-
kernel/cgroup_pids.c | 58 +++++++---------------------------------
kernel/fork.c | 16 ++++-------
kernel/sched/core.c | 2 +-
8 files changed, 34 insertions(+), 123 deletions(-)


2015-11-27 18:56:47

by Oleg Nesterov

Subject: [PATCH 1/3] cgroup: pids: fix race between cgroup_post_fork() and cgroup_migrate()

If the new child migrates to another cgroup before cgroup_post_fork() calls
subsys->fork(), then pids_can_attach() and pids_fork() will each perform the
same pids_uncharge(old_pids) + pids_charge(pids) sequence, so the charge ends
up being transferred twice.

Change copy_process() to call threadgroup_change_begin/threadgroup_change_end
unconditionally. percpu_down_read() is cheap and this allows other cleanups,
see the next changes.

Also, this way we can unify cgroup_threadgroup_rwsem and dup_mmap_sem.
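
To make the locking side of this concrete, here is a minimal user-space
sketch of the pattern the patch establishes (an analogy only, not kernel
code; all names below are invented for illustration): the fork path holds
the read side across the whole charge-and-commit window, so a migration,
which takes the write side, can only run entirely before or entirely after
it.

/* Illustration only, not kernel code: the pthread rwlock stands in for
 * cgroup_threadgroup_rwsem, fork_path() for copy_process() and
 * migrate_path() for the migration path.  Compile with: gcc -pthread
 */
#include <pthread.h>
#include <stdio.h>

static pthread_rwlock_t threadgroup_rwsem = PTHREAD_RWLOCK_INITIALIZER;
static int task_cgroup;		/* "cgroup" the task belongs to */
static int charged_cgroup;	/* "cgroup" that currently holds the charge */

static void fork_path(void)
{
	/* threadgroup_change_begin(), now taken unconditionally */
	pthread_rwlock_rdlock(&threadgroup_rwsem);

	charged_cgroup = task_cgroup;	/* pids_can_fork(): charge current css */
	/* ... the rest of copy_process() ... */
	/* cgroup_post_fork(): the child is committed to the same css,
	 * so no revert/reapply of the charge is ever needed */

	pthread_rwlock_unlock(&threadgroup_rwsem);	/* threadgroup_change_end() */
}

static void *migrate_path(void *arg)
{
	/* the migration side takes the rwsem for write, so it cannot run
	 * inside the charge-and-commit window above */
	pthread_rwlock_wrlock(&threadgroup_rwsem);
	task_cgroup = 1;
	charged_cgroup = 1;	/* the charge moves exactly once */
	pthread_rwlock_unlock(&threadgroup_rwsem);
	return NULL;
}

int main(void)
{
	pthread_t t;

	pthread_create(&t, NULL, migrate_path, NULL);
	fork_path();
	pthread_join(t, NULL);
	/* task and charge always end up in the same "cgroup" */
	printf("task in %d, charge in %d\n", task_cgroup, charged_cgroup);
	return 0;
}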

Signed-off-by: Oleg Nesterov <[email protected]>
---
kernel/cgroup_pids.c | 21 ++-------------------
kernel/fork.c | 9 +++------
2 files changed, 5 insertions(+), 25 deletions(-)

diff --git a/kernel/cgroup_pids.c b/kernel/cgroup_pids.c
index cdd8df4..15ef2e4 100644
--- a/kernel/cgroup_pids.c
+++ b/kernel/cgroup_pids.c
@@ -243,27 +243,10 @@ static void pids_cancel_fork(struct task_struct *task, void *priv)

static void pids_fork(struct task_struct *task, void *priv)
{
- struct cgroup_subsys_state *css;
- struct cgroup_subsys_state *old_css = priv;
- struct pids_cgroup *pids;
- struct pids_cgroup *old_pids = css_pids(old_css);
-
- css = task_get_css(task, pids_cgrp_id);
- pids = css_pids(css);
-
- /*
- * If the association has changed, we have to revert and reapply the
- * charge/uncharge on the wrong hierarchy to the current one. Since
- * the association can only change due to an organisation event, its
- * okay for us to ignore the limit in this case.
- */
- if (pids != old_pids) {
- pids_uncharge(old_pids, 1);
- pids_charge(pids, 1);
- }
+ struct cgroup_subsys_state *css = priv;

+ WARN_ON(task_css_check(task, pids_cgrp_id, true) != css);
css_put(css);
- css_put(old_css);
}

static void pids_free(struct task_struct *task)
diff --git a/kernel/fork.c b/kernel/fork.c
index f97f2c4..fce002e 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -1368,8 +1368,7 @@ static struct task_struct *copy_process(unsigned long clone_flags,
p->real_start_time = ktime_get_boot_ns();
p->io_context = NULL;
p->audit_context = NULL;
- if (clone_flags & CLONE_THREAD)
- threadgroup_change_begin(current);
+ threadgroup_change_begin(current);
cgroup_fork(p);
#ifdef CONFIG_NUMA
p->mempolicy = mpol_dup(p->mempolicy);
@@ -1610,8 +1609,7 @@ static struct task_struct *copy_process(unsigned long clone_flags,

proc_fork_connector(p);
cgroup_post_fork(p, cgrp_ss_priv);
- if (clone_flags & CLONE_THREAD)
- threadgroup_change_end(current);
+ threadgroup_change_end(current);
perf_event_fork(p);

trace_task_newtask(p, clone_flags);
@@ -1652,8 +1650,7 @@ bad_fork_cleanup_policy:
mpol_put(p->mempolicy);
bad_fork_cleanup_threadgroup_lock:
#endif
- if (clone_flags & CLONE_THREAD)
- threadgroup_change_end(current);
+ threadgroup_change_end(current);
delayacct_tsk_free(p);
bad_fork_cleanup_count:
atomic_dec(&p->cred->user->processes);
--
1.5.5.1

2015-11-27 18:56:49

by Oleg Nesterov

Subject: [PATCH 2/3] cgroup: pids: kill pids_fork(), simplify pids_can_fork() and pids_cancel_fork()

Now that we know that the forking task can't migrate and the child is always
moved to the same cgroup by cgroup_post_fork()->css_set_move_task(), we can
change pids_can_fork() and pids_cancel_fork() to just use task_css(current).
And since we no longer need to pin this css, we can remove pids_fork().

Note: the patch uses task_css_check(true); perhaps it makes sense to add a
helper, or to change task_css_set_check() to take cgroup_threadgroup_rwsem
into account.
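
One possible shape for such a helper, sketched only to illustrate the note
above (the name and exact placement are assumptions, not part of this patch):

static inline struct cgroup_subsys_state *
task_css_fork_check(struct task_struct *task, int subsys_id)
{
	/*
	 * Hypothetical wrapper: the "true" condition is justified by
	 * copy_process() holding cgroup_threadgroup_rwsem for reading
	 * around the whole can_fork/cancel_fork/fork window.
	 */
	return task_css_check(task, subsys_id, true);
}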

Signed-off-by: Oleg Nesterov <[email protected]>
---
kernel/cgroup_pids.c | 41 ++++++++++-------------------------------
1 files changed, 10 insertions(+), 31 deletions(-)

diff --git a/kernel/cgroup_pids.c b/kernel/cgroup_pids.c
index 15ef2e4..de3359a 100644
--- a/kernel/cgroup_pids.c
+++ b/kernel/cgroup_pids.c
@@ -205,48 +205,28 @@ static void pids_cancel_attach(struct cgroup_subsys_state *css,
}
}

+/*
+ * task_css_check(true) in pids_can_fork() and pids_cancel_fork() relies
+ * on threadgroup_change_begin() held by the copy_process().
+ */
static int pids_can_fork(struct task_struct *task, void **priv_p)
{
struct cgroup_subsys_state *css;
struct pids_cgroup *pids;
- int err;

- /*
- * Use the "current" task_css for the pids subsystem as the tentative
- * css. It is possible we will charge the wrong hierarchy, in which
- * case we will forcefully revert/reapply the charge on the right
- * hierarchy after it is committed to the task proper.
- */
- css = task_get_css(current, pids_cgrp_id);
+ css = task_css_check(current, pids_cgrp_id, true);
pids = css_pids(css);
-
- err = pids_try_charge(pids, 1);
- if (err)
- goto err_css_put;
-
- *priv_p = css;
- return 0;
-
-err_css_put:
- css_put(css);
- return err;
+ return pids_try_charge(pids, 1);
}

static void pids_cancel_fork(struct task_struct *task, void *priv)
{
- struct cgroup_subsys_state *css = priv;
- struct pids_cgroup *pids = css_pids(css);
+ struct cgroup_subsys_state *css;
+ struct pids_cgroup *pids;

+ css = task_css_check(current, pids_cgrp_id, true);
+ pids = css_pids(css);
pids_uncharge(pids, 1);
- css_put(css);
-}
-
-static void pids_fork(struct task_struct *task, void *priv)
-{
- struct cgroup_subsys_state *css = priv;
-
- WARN_ON(task_css_check(task, pids_cgrp_id, true) != css);
- css_put(css);
}

static void pids_free(struct task_struct *task)
@@ -329,7 +309,6 @@ struct cgroup_subsys pids_cgrp_subsys = {
.cancel_attach = pids_cancel_attach,
.can_fork = pids_can_fork,
.cancel_fork = pids_cancel_fork,
- .fork = pids_fork,
.free = pids_free,
.legacy_cftypes = pids_files,
.dfl_cftypes = pids_files,
--
1.5.5.1

2015-11-27 18:57:10

by Oleg Nesterov

Subject: [PATCH 3/3] cgroup: kill cgrp_ss_priv[CGROUP_CANFORK_COUNT] and friends

Now that nobody uses the "priv" arg passed to can_fork/cancel_fork/fork, we can
kill CGROUP_CANFORK_COUNT/SUBSYS_TAG/etc and cgrp_ss_priv[] in copy_process().

Signed-off-by: Oleg Nesterov <[email protected]>
---
include/linux/cgroup-defs.h | 12 +++---------
include/linux/cgroup.h | 19 ++++++-------------
include/linux/cgroup_subsys.h | 18 ------------------
kernel/cgroup.c | 30 +++++++-----------------------
kernel/cgroup_freezer.c | 2 +-
kernel/cgroup_pids.c | 4 ++--
kernel/fork.c | 7 +++----
kernel/sched/core.c | 2 +-
8 files changed, 23 insertions(+), 71 deletions(-)

diff --git a/include/linux/cgroup-defs.h b/include/linux/cgroup-defs.h
index 60d44b2..ed98f99 100644
--- a/include/linux/cgroup-defs.h
+++ b/include/linux/cgroup-defs.h
@@ -34,17 +34,12 @@ struct seq_file;

/* define the enumeration of all cgroup subsystems */
#define SUBSYS(_x) _x ## _cgrp_id,
-#define SUBSYS_TAG(_t) CGROUP_ ## _t, \
- __unused_tag_ ## _t = CGROUP_ ## _t - 1,
enum cgroup_subsys_id {
#include <linux/cgroup_subsys.h>
CGROUP_SUBSYS_COUNT,
};
-#undef SUBSYS_TAG
#undef SUBSYS

-#define CGROUP_CANFORK_COUNT (CGROUP_CANFORK_END - CGROUP_CANFORK_START)
-
/* bits in struct cgroup_subsys_state flags field */
enum {
CSS_NO_REF = (1 << 0), /* no reference counting for this css */
@@ -432,9 +427,9 @@ struct cgroup_subsys {
struct cgroup_taskset *tset);
void (*attach)(struct cgroup_subsys_state *css,
struct cgroup_taskset *tset);
- int (*can_fork)(struct task_struct *task, void **priv_p);
- void (*cancel_fork)(struct task_struct *task, void *priv);
- void (*fork)(struct task_struct *task, void *priv);
+ int (*can_fork)(struct task_struct *task);
+ void (*cancel_fork)(struct task_struct *task);
+ void (*fork)(struct task_struct *task);
void (*exit)(struct task_struct *task);
void (*free)(struct task_struct *task);
void (*bind)(struct cgroup_subsys_state *root_css);
@@ -520,7 +515,6 @@ static inline void cgroup_threadgroup_change_end(struct task_struct *tsk)

#else /* CONFIG_CGROUPS */

-#define CGROUP_CANFORK_COUNT 0
#define CGROUP_SUBSYS_COUNT 0

static inline void cgroup_threadgroup_change_begin(struct task_struct *tsk) {}
diff --git a/include/linux/cgroup.h b/include/linux/cgroup.h
index 22e3754..f15e86d 100644
--- a/include/linux/cgroup.h
+++ b/include/linux/cgroup.h
@@ -95,12 +95,9 @@ int proc_cgroup_show(struct seq_file *m, struct pid_namespace *ns,
struct pid *pid, struct task_struct *tsk);

void cgroup_fork(struct task_struct *p);
-extern int cgroup_can_fork(struct task_struct *p,
- void *ss_priv[CGROUP_CANFORK_COUNT]);
-extern void cgroup_cancel_fork(struct task_struct *p,
- void *ss_priv[CGROUP_CANFORK_COUNT]);
-extern void cgroup_post_fork(struct task_struct *p,
- void *old_ss_priv[CGROUP_CANFORK_COUNT]);
+extern int cgroup_can_fork(struct task_struct *p);
+extern void cgroup_cancel_fork(struct task_struct *p);
+extern void cgroup_post_fork(struct task_struct *p);
void cgroup_exit(struct task_struct *p);
void cgroup_free(struct task_struct *p);

@@ -540,13 +537,9 @@ static inline int cgroupstats_build(struct cgroupstats *stats,
struct dentry *dentry) { return -EINVAL; }

static inline void cgroup_fork(struct task_struct *p) {}
-static inline int cgroup_can_fork(struct task_struct *p,
- void *ss_priv[CGROUP_CANFORK_COUNT])
-{ return 0; }
-static inline void cgroup_cancel_fork(struct task_struct *p,
- void *ss_priv[CGROUP_CANFORK_COUNT]) {}
-static inline void cgroup_post_fork(struct task_struct *p,
- void *ss_priv[CGROUP_CANFORK_COUNT]) {}
+static inline int cgroup_can_fork(struct task_struct *p) { return 0; }
+static inline void cgroup_cancel_fork(struct task_struct *p) {}
+static inline void cgroup_post_fork(struct task_struct *p) {}
static inline void cgroup_exit(struct task_struct *p) {}
static inline void cgroup_free(struct task_struct *p) {}

diff --git a/include/linux/cgroup_subsys.h b/include/linux/cgroup_subsys.h
index 1a96fda..0df0336 100644
--- a/include/linux/cgroup_subsys.h
+++ b/include/linux/cgroup_subsys.h
@@ -6,14 +6,8 @@

/*
* This file *must* be included with SUBSYS() defined.
- * SUBSYS_TAG() is a noop if undefined.
*/

-#ifndef SUBSYS_TAG
-#define __TMP_SUBSYS_TAG
-#define SUBSYS_TAG(_x)
-#endif
-
#if IS_ENABLED(CONFIG_CPUSETS)
SUBSYS(cpuset)
#endif
@@ -58,17 +52,10 @@ SUBSYS(net_prio)
SUBSYS(hugetlb)
#endif

-/*
- * Subsystems that implement the can_fork() family of callbacks.
- */
-SUBSYS_TAG(CANFORK_START)
-
#if IS_ENABLED(CONFIG_CGROUP_PIDS)
SUBSYS(pids)
#endif

-SUBSYS_TAG(CANFORK_END)
-
/*
* The following subsystems are not supported on the default hierarchy.
*/
@@ -76,11 +63,6 @@ SUBSYS_TAG(CANFORK_END)
SUBSYS(debug)
#endif

-#ifdef __TMP_SUBSYS_TAG
-#undef __TMP_SUBSYS_TAG
-#undef SUBSYS_TAG
-#endif
-
/*
* DO NOT ADD ANY SUBSYSTEM WITHOUT EXPLICIT ACKS FROM CGROUP MAINTAINERS.
*/
diff --git a/kernel/cgroup.c b/kernel/cgroup.c
index f1603c1..7380a37 100644
--- a/kernel/cgroup.c
+++ b/kernel/cgroup.c
@@ -5432,19 +5432,6 @@ static const struct file_operations proc_cgroupstats_operations = {
.release = single_release,
};

-static void **subsys_canfork_priv_p(void *ss_priv[CGROUP_CANFORK_COUNT], int i)
-{
- if (CGROUP_CANFORK_START <= i && i < CGROUP_CANFORK_END)
- return &ss_priv[i - CGROUP_CANFORK_START];
- return NULL;
-}
-
-static void *subsys_canfork_priv(void *ss_priv[CGROUP_CANFORK_COUNT], int i)
-{
- void **private = subsys_canfork_priv_p(ss_priv, i);
- return private ? *private : NULL;
-}
-
/**
* cgroup_fork - initialize cgroup related fields during copy_process()
* @child: pointer to task_struct of forking parent process.
@@ -5467,14 +5454,13 @@ void cgroup_fork(struct task_struct *child)
* returns an error, the fork aborts with that error code. This allows for
* a cgroup subsystem to conditionally allow or deny new forks.
*/
-int cgroup_can_fork(struct task_struct *child,
- void *ss_priv[CGROUP_CANFORK_COUNT])
+int cgroup_can_fork(struct task_struct *child)
{
struct cgroup_subsys *ss;
int i, j, ret;

for_each_subsys_which(ss, i, &have_canfork_callback) {
- ret = ss->can_fork(child, subsys_canfork_priv_p(ss_priv, i));
+ ret = ss->can_fork(child);
if (ret)
goto out_revert;
}
@@ -5486,7 +5472,7 @@ out_revert:
if (j >= i)
break;
if (ss->cancel_fork)
- ss->cancel_fork(child, subsys_canfork_priv(ss_priv, j));
+ ss->cancel_fork(child);
}

return ret;
@@ -5499,15 +5485,14 @@ out_revert:
* This calls the cancel_fork() callbacks if a fork failed *after*
* cgroup_can_fork() succeded.
*/
-void cgroup_cancel_fork(struct task_struct *child,
- void *ss_priv[CGROUP_CANFORK_COUNT])
+void cgroup_cancel_fork(struct task_struct *child)
{
struct cgroup_subsys *ss;
int i;

for_each_subsys(ss, i)
if (ss->cancel_fork)
- ss->cancel_fork(child, subsys_canfork_priv(ss_priv, i));
+ ss->cancel_fork(child);
}

/**
@@ -5520,8 +5505,7 @@ void cgroup_cancel_fork(struct task_struct *child,
* cgroup_task_iter_start() - to guarantee that the new task ends up on its
* list.
*/
-void cgroup_post_fork(struct task_struct *child,
- void *old_ss_priv[CGROUP_CANFORK_COUNT])
+void cgroup_post_fork(struct task_struct *child)
{
struct cgroup_subsys *ss;
int i;
@@ -5565,7 +5549,7 @@ void cgroup_post_fork(struct task_struct *child,
* and addition to css_set.
*/
for_each_subsys_which(ss, i, &have_fork_callback)
- ss->fork(child, subsys_canfork_priv(old_ss_priv, i));
+ ss->fork(child);
}

/**
diff --git a/kernel/cgroup_freezer.c b/kernel/cgroup_freezer.c
index f1b30ad..92b98cc 100644
--- a/kernel/cgroup_freezer.c
+++ b/kernel/cgroup_freezer.c
@@ -203,7 +203,7 @@ static void freezer_attach(struct cgroup_subsys_state *new_css,
* to do anything as freezer_attach() will put @task into the appropriate
* state.
*/
-static void freezer_fork(struct task_struct *task, void *private)
+static void freezer_fork(struct task_struct *task)
{
struct freezer *freezer;

diff --git a/kernel/cgroup_pids.c b/kernel/cgroup_pids.c
index de3359a..7f19e6c 100644
--- a/kernel/cgroup_pids.c
+++ b/kernel/cgroup_pids.c
@@ -209,7 +209,7 @@ static void pids_cancel_attach(struct cgroup_subsys_state *css,
* task_css_check(true) in pids_can_fork() and pids_cancel_fork() relies
* on threadgroup_change_begin() held by the copy_process().
*/
-static int pids_can_fork(struct task_struct *task, void **priv_p)
+static int pids_can_fork(struct task_struct *task)
{
struct cgroup_subsys_state *css;
struct pids_cgroup *pids;
@@ -219,7 +219,7 @@ static int pids_can_fork(struct task_struct *task, void **priv_p)
return pids_try_charge(pids, 1);
}

-static void pids_cancel_fork(struct task_struct *task, void *priv)
+static void pids_cancel_fork(struct task_struct *task)
{
struct cgroup_subsys_state *css;
struct pids_cgroup *pids;
diff --git a/kernel/fork.c b/kernel/fork.c
index fce002e..ba7d1c0 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -1249,7 +1249,6 @@ static struct task_struct *copy_process(unsigned long clone_flags,
{
int retval;
struct task_struct *p;
- void *cgrp_ss_priv[CGROUP_CANFORK_COUNT] = {};

if ((clone_flags & (CLONE_NEWNS|CLONE_FS)) == (CLONE_NEWNS|CLONE_FS))
return ERR_PTR(-EINVAL);
@@ -1526,7 +1525,7 @@ static struct task_struct *copy_process(unsigned long clone_flags,
* between here and cgroup_post_fork() if an organisation operation is in
* progress.
*/
- retval = cgroup_can_fork(p, cgrp_ss_priv);
+ retval = cgroup_can_fork(p);
if (retval)
goto bad_fork_free_pid;

@@ -1608,7 +1607,7 @@ static struct task_struct *copy_process(unsigned long clone_flags,
write_unlock_irq(&tasklist_lock);

proc_fork_connector(p);
- cgroup_post_fork(p, cgrp_ss_priv);
+ cgroup_post_fork(p);
threadgroup_change_end(current);
perf_event_fork(p);

@@ -1618,7 +1617,7 @@ static struct task_struct *copy_process(unsigned long clone_flags,
return p;

bad_fork_cancel_cgroup:
- cgroup_cancel_fork(p, cgrp_ss_priv);
+ cgroup_cancel_fork(p);
bad_fork_free_pid:
if (pid != &init_struct_pid)
free_pid(pid);
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 4d568ac..d6bd5eb 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -8212,7 +8212,7 @@ static void cpu_cgroup_css_offline(struct cgroup_subsys_state *css)
sched_offline_group(tg);
}

-static void cpu_cgroup_fork(struct task_struct *task, void *private)
+static void cpu_cgroup_fork(struct task_struct *task)
{
sched_move_task(task);
}
--
1.5.5.1

2015-11-28 03:14:45

by Zefan Li

Subject: Re: [PATCH 0/3] cgroup: fix race between cgroup_post_fork() and cgroup_migrate()

On 2015/11/28 2:57, Oleg Nesterov wrote:
> On 11/26, Oleg Nesterov wrote:
>>
>> OK. I do not know exactly what you mean; perhaps if you fix this problem
>> the race between fork and attach goes away, and in that case the fix I sent
>> is not needed?
>
> Otherwise please consider this series.
>
> Lightly tested, seems to work; please review.
>

Allowing tasks to migrate between cgroups while forking is problematic. I'm
more than glad to see these changes.

Acked-by: Zefan Li <[email protected]>

> Oleg.
>
> include/linux/cgroup-defs.h | 12 ++------
> include/linux/cgroup.h | 19 ++++---------
> include/linux/cgroup_subsys.h | 18 ------------
> kernel/cgroup.c | 30 +++++----------------
> kernel/cgroup_freezer.c | 2 +-
> kernel/cgroup_pids.c | 58 +++++++---------------------------------
> kernel/fork.c | 16 ++++-------
> kernel/sched/core.c | 2 +-
> 8 files changed, 34 insertions(+), 123 deletions(-)
>
>

2015-11-30 14:50:16

by Tejun Heo

Subject: Re: [PATCH 0/3] cgroup: fix race between cgroup_post_fork() and cgroup_migrate()

Hello, Oleg.

On Fri, Nov 27, 2015 at 07:57:05PM +0100, Oleg Nesterov wrote:
> On 11/26, Oleg Nesterov wrote:
> >
> > OK. I do not know exactly what you mean; perhaps if you fix this problem
> > the race between fork and attach goes away, and in that case the fix I sent
> > is not needed?

Ah, there's a different bug in the v2 hierarchy independent of this one.

> Otherwise please consider this series.

Applied 1-2 to cgroup/for-4.4-fixes. Will apply 3 to cgroup/for-4.5
after applying the fix for the other problem on for-4.4-fixes and
merging that into for-4.5.

Thanks.

--
tejun

2015-11-30 15:16:15

by Oleg Nesterov

Subject: Re: [PATCH 0/3] cgroup: fix race between cgroup_post_fork() and cgroup_migrate()

On 11/28, Zefan Li wrote:
>
> On 2015/11/28 2:57, Oleg Nesterov wrote:
>> On 11/26, Oleg Nesterov wrote:
>>>
>>> OK. I do not know exactly what you mean; perhaps if you fix this problem
>>> the race between fork and attach goes away, and in that case the fix I sent
>>> is not needed?
>>
>> Otherwise please consider this series.
>>
>> Lightly tested, seems to work; please review.
>>
>
> Allowing tasks to migrate between cgroups while forking is problematic. I'm
> more than glad to see these changes.

Yes, I think this way we can probably do other cleanups/fixes in the generic code
too.

For example, cgroup_enable_task_cg_lists() looks racy: spin_lock_irq(siglock)
can't guarantee we won't miss PF_EXITING, because exit_signals() doesn't take
this lock in the single-threaded case. AFAICS we could change it to use
cgroup_threadgroup_rwsem and avoid tasklist_lock.
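
A deliberately simplified user-space sketch of that kind of race (invented
names, not the actual kernel path): the writer sets the flag without ever
taking the lock, so checking the flag under the lock does not exclude anything.

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

/* Illustration only: "exiting" models PF_EXITING, "siglock" models
 * sighand->siglock, and exit_path() models the single-threaded
 * exit_signals() case that never takes the lock. */
static pthread_mutex_t siglock = PTHREAD_MUTEX_INITIALIZER;
static atomic_int exiting;

static void *exit_path(void *arg)
{
	atomic_store(&exiting, 1);	/* no siglock taken here */
	/* ... the rest of the exit path keeps running concurrently ... */
	return NULL;
}

static void scan_tasks(void)
{
	pthread_mutex_lock(&siglock);
	if (!atomic_load(&exiting)) {
		/* We can get here even though exit_path() is already
		 * running: the lock only serializes against other
		 * siglock holders, not against the flag setter. */
		puts("did not see the exit in progress");
	}
	pthread_mutex_unlock(&siglock);
}

int main(void)
{
	pthread_t t;

	pthread_create(&t, NULL, exit_path, NULL);
	scan_tasks();
	pthread_join(t, NULL);
	return 0;
}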

And I forgot to mention: if we apply this series, we should rename
threadgroup_change_begin/end, remove the "task_struct *tsk" argument, and
provide helpers for the write-lock/unlock side. Then we can remove
uprobe_start_dup_mmap() and change register_for_each_vma() to use the same
lock.

But I failed to invent good names for the new helpers ;)
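
Just to make the shape of that interface concrete, one possibility (the names
below are placeholders invented for illustration, which is exactly the part
that is still open):

/* Placeholder names only -- a sketch of the proposed shape, nothing more. */
static inline void cgroup_fork_lock(void)	/* replaces threadgroup_change_begin(tsk) */
{
	percpu_down_read(&cgroup_threadgroup_rwsem);
}

static inline void cgroup_fork_unlock(void)	/* replaces threadgroup_change_end(tsk) */
{
	percpu_up_read(&cgroup_threadgroup_rwsem);
}

/* write-side helpers for the attach/migrate path, and for
 * register_for_each_vma() if uprobes switch to this lock */
static inline void cgroup_fork_write_lock(void)
{
	percpu_down_write(&cgroup_threadgroup_rwsem);
}

static inline void cgroup_fork_write_unlock(void)
{
	percpu_up_write(&cgroup_threadgroup_rwsem);
}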

> Acked-by: Zefan Li <[email protected]>

Thanks!

Oleg.

2015-12-03 15:18:56

by Tejun Heo

Subject: Re: [PATCH 3/3] cgroup: kill cgrp_ss_priv[CGROUP_CANFORK_COUNT] and friends

On Fri, Nov 27, 2015 at 07:57:25PM +0100, Oleg Nesterov wrote:
> Now that nobody uses the "priv" arg passed to can_fork/cancel_fork/fork, we can
> kill CGROUP_CANFORK_COUNT/SUBSYS_TAG/etc and cgrp_ss_priv[] in copy_process().
>
> Signed-off-by: Oleg Nesterov <[email protected]>

Applied to cgroup/for-4.5.

Thanks.

--
tejun