(Resending because the previous posting went out with patches from v1)
Hello, guys.
Changes from the last posting[L] are,
* cgroup_call_pre_destroy() removal moved from 0001 to 0004 per
Michal.
* Comment and commit message updates per Glauber and Michal.
Original head message follows.
The cgroup removal path is quite ugly. A lot of the ugliness comes from
the weird design which allows ->pre_destroy() to fail and from the
feature which drains existing CSS reference counts before committing to
removal. Both mean that it should be possible to roll back cgroup
destruction after some or all ->pre_destroy() invocations.
This weird design has never really worked. To list a couple of examples:
* Some ->pre_destroy() implementations aren't side-effect free.
Rollback happens after a lot of state is already lost.
* Some ->pre_destroy() implementations (naturally) assume that the
cgroup being destroyed will stay quiescent between a successful
->pre_destroy() and its destruction. Unfortunately, any operation
can happen in between and the cgroup could be in a very different
state by the time it actually gets destroyed.
It's just such an unusual design which unnecessarily creates weird
code path combinations that are tricky to hit, reproduce and
anticipate. Moreover, the design's deficiencies attract kludges on top
as workarounds and we end up with stuff like cgroup_exclude_rmdir() and
cgroup_release_and_wakeup_rmdir() which really make me want to cry.
Now that memcg has moved away from failable ->pre_destroy(), we can do
away with all these. I tested some basic operations and some corner
cases but am still a bit scared. Would love to get acks from Li and
memcg people.
This patchset contains the following eight patches.
0001-cgroup-kill-cgroup_subsys-__DEPRECATED_clear_css_ref.patch
0002-cgroup-kill-CSS_REMOVED.patch
0003-cgroup-use-cgroup_lock_live_group-parent-in-cgroup_c.patch
0004-cgroup-deactivate-CSS-s-and-mark-cgroup-dead-before-.patch
0005-cgroup-remove-CGRP_WAIT_ON_RMDIR-cgroup_exclude_rmdi.patch
0006-memcg-make-mem_cgroup_reparent_charges-non-failing.patch
0007-hugetlb-do-not-fail-in-hugetlb_cgroup_pre_destroy.patch
0008-cgroup-make-pre_destroy-return-void.patch
0001-0002 remove the now-unused ->pre_destroy() failure handling and do
follow-up simplification.
0003-0004 update the removal path such that each ->pre_destroy() is
guaranteed to be invoked once per removal and the cgroup being
destroyed stays quiescent until destruction is complete.
0005 removes the scary CGRP_WAIT_ON_RMDIR mechanism.
0006-0008 are follow-up clean-ups. 0006 and 0007 are from Michal's
patchset[1].
This patchset is on top of
v3.6 (a0d271cbfe)
+ [1] the first three patches of
"memcg/cgroup: do not fail fail on pre_destroy callbacks" patchset
and available in the following git branch.
git://git.kernel.org/pub/scm/linux/kernel/git/tj/cgroup.git review-cgroup-rmdir-updates
Thanks.
block/blk-cgroup.c | 3
include/linux/cgroup.h | 41 -------
kernel/cgroup.c | 256 +++++++++++--------------------------------------
mm/hugetlb_cgroup.c | 11 --
mm/memcontrol.c | 51 +--------
5 files changed, 75 insertions(+), 287 deletions(-)
--
tejun
[L] http://www.spinics.net/lists/linux-containers/msg26157.html
[1] http://thread.gmane.org/gmane.linux.kernel.cgroups/4757
2ef37d3fe4 ("memcg: Simplify mem_cgroup_force_empty_list error
handling") removed the last user of __DEPRECATED_clear_css_refs. This
patch removes __DEPRECATED_clear_css_refs and mechanisms to support
it.
* Conditionals dependent on __DEPRECATED_clear_css_refs removed.
* cgroup_clear_css_refs() can no longer fail. All that needs to be
done is deactivating refcnts, setting CSS_REMOVED and putting the
base reference on each css. Remove cgroup_clear_css_refs() and the
failure path, and open-code the loops into cgroup_rmdir().
This patch keeps the two for_each_subsys() loops separate while
open-coding them. They can be merged now but there are scheduled
changes which need them to be separate, so keep them separate to
reduce the amount of churn.
local_irq_save/restore() from cgroup_clear_css_refs() are replaced
with local_irq_disable/enable() for simplicity. This is safe as
cgroup_rmdir() is always called with IRQ enabled. Note that this IRQ
switching is necessary to make CSS deactivation and CSS_REMOVED
assertion atomic against css_tryget() and will be removed by future
changes.
v2: cgroup_call_pre_destroy() removal dropped per Michal. Commit
message updated to explain local_irq_disable/enable() conversion.
Signed-off-by: Tejun Heo <[email protected]>
Reviewed-by: Michal Hocko <[email protected]>
---
include/linux/cgroup.h | 12 -----
kernel/cgroup.c | 131 ++++++++++++++-----------------------------------
2 files changed, 36 insertions(+), 107 deletions(-)
diff --git a/include/linux/cgroup.h b/include/linux/cgroup.h
index c90eaa8..02e09c0 100644
--- a/include/linux/cgroup.h
+++ b/include/linux/cgroup.h
@@ -86,7 +86,6 @@ struct cgroup_subsys_state {
enum {
CSS_ROOT, /* This CSS is the root of the subsystem */
CSS_REMOVED, /* This CSS is dead */
- CSS_CLEAR_CSS_REFS, /* @ss->__DEPRECATED_clear_css_refs */
};
/* Caller must verify that the css is not for root cgroup */
@@ -485,17 +484,6 @@ struct cgroup_subsys {
*/
bool use_id;
- /*
- * If %true, cgroup removal will try to clear css refs by retrying
- * ss->pre_destroy() until there's no css ref left. This behavior
- * is strictly for backward compatibility and will be removed as
- * soon as the current user (memcg) is updated.
- *
- * If %false, ss->pre_destroy() can't fail and cgroup removal won't
- * wait for css refs to drop to zero before proceeding.
- */
- bool __DEPRECATED_clear_css_refs;
-
#define MAX_CGROUP_TYPE_NAMELEN 32
const char *name;
diff --git a/kernel/cgroup.c b/kernel/cgroup.c
index 7981850..8c605e2 100644
--- a/kernel/cgroup.c
+++ b/kernel/cgroup.c
@@ -865,11 +865,8 @@ static int cgroup_call_pre_destroy(struct cgroup *cgrp)
continue;
ret = ss->pre_destroy(cgrp);
- if (ret) {
- /* ->pre_destroy() failure is being deprecated */
- WARN_ON_ONCE(!ss->__DEPRECATED_clear_css_refs);
+ if (WARN_ON_ONCE(ret))
break;
- }
}
return ret;
@@ -3901,14 +3898,12 @@ static void init_cgroup_css(struct cgroup_subsys_state *css,
cgrp->subsys[ss->subsys_id] = css;
/*
- * If !clear_css_refs, css holds an extra ref to @cgrp->dentry
- * which is put on the last css_put(). dput() requires process
- * context, which css_put() may be called without. @css->dput_work
- * will be used to invoke dput() asynchronously from css_put().
+ * css holds an extra ref to @cgrp->dentry which is put on the last
+ * css_put(). dput() requires process context, which css_put() may
+ * be called without. @css->dput_work will be used to invoke
+ * dput() asynchronously from css_put().
*/
INIT_WORK(&css->dput_work, css_dput_fn);
- if (ss->__DEPRECATED_clear_css_refs)
- set_bit(CSS_CLEAR_CSS_REFS, &css->flags);
}
/*
@@ -3978,10 +3973,9 @@ static long cgroup_create(struct cgroup *parent, struct dentry *dentry,
if (err < 0)
goto err_remove;
- /* If !clear_css_refs, each css holds a ref to the cgroup's dentry */
+ /* each css holds a ref to the cgroup's dentry */
for_each_subsys(root, ss)
- if (!ss->__DEPRECATED_clear_css_refs)
- dget(dentry);
+ dget(dentry);
/* The cgroup directory was pre-locked for us */
BUG_ON(!mutex_is_locked(&cgrp->dentry->d_inode->i_mutex));
@@ -4066,71 +4060,6 @@ static int cgroup_has_css_refs(struct cgroup *cgrp)
return 0;
}
-/*
- * Atomically mark all (or else none) of the cgroup's CSS objects as
- * CSS_REMOVED. Return true on success, or false if the cgroup has
- * busy subsystems. Call with cgroup_mutex held
- *
- * Depending on whether a subsys has __DEPRECATED_clear_css_refs set or
- * not, cgroup removal behaves differently.
- *
- * If clear is set, css refcnt for the subsystem should be zero before
- * cgroup removal can be committed. This is implemented by
- * CGRP_WAIT_ON_RMDIR and retry logic around ->pre_destroy(), which may be
- * called multiple times until all css refcnts reach zero and is allowed to
- * veto removal on any invocation. This behavior is deprecated and will be
- * removed as soon as the existing user (memcg) is updated.
- *
- * If clear is not set, each css holds an extra reference to the cgroup's
- * dentry and cgroup removal proceeds regardless of css refs.
- * ->pre_destroy() will be called at least once and is not allowed to fail.
- * On the last put of each css, whenever that may be, the extra dentry ref
- * is put so that dentry destruction happens only after all css's are
- * released.
- */
-static int cgroup_clear_css_refs(struct cgroup *cgrp)
-{
- struct cgroup_subsys *ss;
- unsigned long flags;
- bool failed = false;
-
- local_irq_save(flags);
-
- /*
- * Block new css_tryget() by deactivating refcnt. If all refcnts
- * for subsystems w/ clear_css_refs set were 1 at the moment of
- * deactivation, we succeeded.
- */
- for_each_subsys(cgrp->root, ss) {
- struct cgroup_subsys_state *css = cgrp->subsys[ss->subsys_id];
-
- WARN_ON(atomic_read(&css->refcnt) < 0);
- atomic_add(CSS_DEACT_BIAS, &css->refcnt);
-
- if (ss->__DEPRECATED_clear_css_refs)
- failed |= css_refcnt(css) != 1;
- }
-
- /*
- * If succeeded, set REMOVED and put all the base refs; otherwise,
- * restore refcnts to positive values. Either way, all in-progress
- * css_tryget() will be released.
- */
- for_each_subsys(cgrp->root, ss) {
- struct cgroup_subsys_state *css = cgrp->subsys[ss->subsys_id];
-
- if (!failed) {
- set_bit(CSS_REMOVED, &css->flags);
- css_put(css);
- } else {
- atomic_sub(CSS_DEACT_BIAS, &css->refcnt);
- }
- }
-
- local_irq_restore(flags);
- return !failed;
-}
-
static int cgroup_rmdir(struct inode *unused_dir, struct dentry *dentry)
{
struct cgroup *cgrp = dentry->d_fsdata;
@@ -4138,10 +4067,10 @@ static int cgroup_rmdir(struct inode *unused_dir, struct dentry *dentry)
struct cgroup *parent;
DEFINE_WAIT(wait);
struct cgroup_event *event, *tmp;
+ struct cgroup_subsys *ss;
int ret;
/* the vfs holds both inode->i_mutex already */
-again:
mutex_lock(&cgroup_mutex);
if (atomic_read(&cgrp->count) != 0) {
mutex_unlock(&cgroup_mutex);
@@ -4182,21 +4111,34 @@ again:
return -EBUSY;
}
prepare_to_wait(&cgroup_rmdir_waitq, &wait, TASK_INTERRUPTIBLE);
- if (!cgroup_clear_css_refs(cgrp)) {
- mutex_unlock(&cgroup_mutex);
- /*
- * Because someone may call cgroup_wakeup_rmdir_waiter() before
- * prepare_to_wait(), we need to check this flag.
- */
- if (test_bit(CGRP_WAIT_ON_RMDIR, &cgrp->flags))
- schedule();
- finish_wait(&cgroup_rmdir_waitq, &wait);
- clear_bit(CGRP_WAIT_ON_RMDIR, &cgrp->flags);
- if (signal_pending(current))
- return -EINTR;
- goto again;
+
+ local_irq_disable();
+
+ /* block new css_tryget() by deactivating refcnt */
+ for_each_subsys(cgrp->root, ss) {
+ struct cgroup_subsys_state *css = cgrp->subsys[ss->subsys_id];
+
+ WARN_ON(atomic_read(&css->refcnt) < 0);
+ atomic_add(CSS_DEACT_BIAS, &css->refcnt);
+ }
+
+ /*
+ * Set REMOVED. All in-progress css_tryget() will be released.
+ * Put all the base refs. Each css holds an extra reference to the
+ * cgroup's dentry and cgroup removal proceeds regardless of css
+ * refs. On the last put of each css, whenever that may be, the
+ * extra dentry ref is put so that dentry destruction happens only
+ * after all css's are released.
+ */
+ for_each_subsys(cgrp->root, ss) {
+ struct cgroup_subsys_state *css = cgrp->subsys[ss->subsys_id];
+
+ set_bit(CSS_REMOVED, &css->flags);
+ css_put(css);
}
- /* NO css_tryget() can success after here. */
+
+ local_irq_enable();
+
finish_wait(&cgroup_rmdir_waitq, &wait);
clear_bit(CGRP_WAIT_ON_RMDIR, &cgrp->flags);
@@ -4949,8 +4891,7 @@ void __css_put(struct cgroup_subsys_state *css)
cgroup_wakeup_rmdir_waiter(cgrp);
break;
case 0:
- if (!test_bit(CSS_CLEAR_CSS_REFS, &css->flags))
- schedule_work(&css->dput_work);
+ schedule_work(&css->dput_work);
break;
}
rcu_read_unlock();
--
1.7.11.7
CSS_REMOVED is one of the several contortions which were necessary to
support css reference draining on cgroup removal. All css->refcnts
which needed draining had to be deactivated and verified to equal zero
atomically w.r.t. css_tryget(). If any one wasn't zero, all refcnts
had to be re-activated and css_tryget() wasn't allowed to fail in the
process.
This was achieved by letting css_tryget() busy-loop until either the
refcnt is reactivated (failed removal attempt) or CSS_REMOVED is set
(committing to removal).
Now that css refcnt draining is no longer used, there's no need for
the atomic rollback mechanism. css_tryget() can simply look at the
reference count and fail if it's deactivated - it will never be
re-activated.
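As an illustration, here is a minimal user-space sketch of the
resulting semantics using C11 atomics. The names, the bias value and
the helpers are assumptions made for the example, not the actual
kernel code:
#include <limits.h>
#include <stdatomic.h>
#include <stdbool.h>

#define DEACT_BIAS	(INT_MIN / 2)	/* large negative bias, like CSS_DEACT_BIAS */

struct ref {
	atomic_int refcnt;		/* holds a base reference of 1 while live */
};

/* tryget: take a reference unless the count has been biased negative */
static bool ref_tryget(struct ref *r)
{
	int v = atomic_load(&r->refcnt);

	while (v > 0) {
		/* on failure the CAS reloads v and the loop re-checks it */
		if (atomic_compare_exchange_weak(&r->refcnt, &v, v + 1))
			return true;
	}
	return false;			/* deactivated - never comes back */
}

/* removal: bias the count so all future trygets fail, then drop the base ref */
static void ref_kill(struct ref *r)
{
	atomic_fetch_add(&r->refcnt, DEACT_BIAS);
	atomic_fetch_sub(&r->refcnt, 1);
}
The point is that deactivation alone is enough for tryget to fail
immediately, so no separate REMOVED flag and no busy-waiting are
needed.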
This patch removes CSS_REMOVED and updates __css_tryget() to fail if
the refcnt is deactivated. As deactivation and removal are a single
step now, they no longer need to be protected against css_tryget()
happening from irq context. Remove local_irq_disable/enable() from
cgroup_rmdir().
Note that this removes css_is_removed() whose only user is VM_BUG_ON()
in memcontrol.c. We can replace it with a check on the refcnt but
given that the only use case is a debug assert, I think it's better to
simply unexport it.
v2: Comment updated and explanation on local_irq_disable/enable()
added per Michal Hocko.
Signed-off-by: Tejun Heo <[email protected]>
Reviewed-by: Michal Hocko <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Balbir Singh <[email protected]>
Cc: KAMEZAWA Hiroyuki <[email protected]>
---
include/linux/cgroup.h | 6 ------
kernel/cgroup.c | 31 ++++++++++++-------------------
mm/memcontrol.c | 7 +++----
3 files changed, 15 insertions(+), 29 deletions(-)
diff --git a/include/linux/cgroup.h b/include/linux/cgroup.h
index 02e09c0..a309804 100644
--- a/include/linux/cgroup.h
+++ b/include/linux/cgroup.h
@@ -85,7 +85,6 @@ struct cgroup_subsys_state {
/* bits in struct cgroup_subsys_state flags field */
enum {
CSS_ROOT, /* This CSS is the root of the subsystem */
- CSS_REMOVED, /* This CSS is dead */
};
/* Caller must verify that the css is not for root cgroup */
@@ -108,11 +107,6 @@ static inline void css_get(struct cgroup_subsys_state *css)
__css_get(css, 1);
}
-static inline bool css_is_removed(struct cgroup_subsys_state *css)
-{
- return test_bit(CSS_REMOVED, &css->flags);
-}
-
/*
* Call css_tryget() to take a reference on a css if your existing
* (known-valid) reference isn't already ref-counted. Returns false if
diff --git a/kernel/cgroup.c b/kernel/cgroup.c
index 8c605e2..c194f9e 100644
--- a/kernel/cgroup.c
+++ b/kernel/cgroup.c
@@ -170,8 +170,8 @@ struct css_id {
* The css to which this ID points. This pointer is set to valid value
* after cgroup is populated. If cgroup is removed, this will be NULL.
* This pointer is expected to be RCU-safe because destroy()
- * is called after synchronize_rcu(). But for safe use, css_is_removed()
- * css_tryget() should be used for avoiding race.
+ * is called after synchronize_rcu(). But for safe use, css_tryget()
+ * should be used for avoiding race.
*/
struct cgroup_subsys_state __rcu *css;
/*
@@ -4112,8 +4112,6 @@ static int cgroup_rmdir(struct inode *unused_dir, struct dentry *dentry)
}
prepare_to_wait(&cgroup_rmdir_waitq, &wait, TASK_INTERRUPTIBLE);
- local_irq_disable();
-
/* block new css_tryget() by deactivating refcnt */
for_each_subsys(cgrp->root, ss) {
struct cgroup_subsys_state *css = cgrp->subsys[ss->subsys_id];
@@ -4123,21 +4121,14 @@ static int cgroup_rmdir(struct inode *unused_dir, struct dentry *dentry)
}
/*
- * Set REMOVED. All in-progress css_tryget() will be released.
* Put all the base refs. Each css holds an extra reference to the
* cgroup's dentry and cgroup removal proceeds regardless of css
* refs. On the last put of each css, whenever that may be, the
* extra dentry ref is put so that dentry destruction happens only
* after all css's are released.
*/
- for_each_subsys(cgrp->root, ss) {
- struct cgroup_subsys_state *css = cgrp->subsys[ss->subsys_id];
-
- set_bit(CSS_REMOVED, &css->flags);
- css_put(css);
- }
-
- local_irq_enable();
+ for_each_subsys(cgrp->root, ss)
+ css_put(cgrp->subsys[ss->subsys_id]);
finish_wait(&cgroup_rmdir_waitq, &wait);
clear_bit(CGRP_WAIT_ON_RMDIR, &cgrp->flags);
@@ -4861,15 +4852,17 @@ static void check_for_release(struct cgroup *cgrp)
/* Caller must verify that the css is not for root cgroup */
bool __css_tryget(struct cgroup_subsys_state *css)
{
- do {
- int v = css_refcnt(css);
+ while (true) {
+ int t, v;
- if (atomic_cmpxchg(&css->refcnt, v, v + 1) == v)
+ v = css_refcnt(css);
+ t = atomic_cmpxchg(&css->refcnt, v, v + 1);
+ if (likely(t == v))
return true;
+ else if (t < 0)
+ return false;
cpu_relax();
- } while (!test_bit(CSS_REMOVED, &css->flags));
-
- return false;
+ }
}
EXPORT_SYMBOL_GPL(__css_tryget);
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 5a1d584..37c3566 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -2343,7 +2343,6 @@ static int __mem_cgroup_try_charge(struct mm_struct *mm,
again:
if (*ptr) { /* css should be a valid one */
memcg = *ptr;
- VM_BUG_ON(css_is_removed(&memcg->css));
if (mem_cgroup_is_root(memcg))
goto done;
if (nr_pages == 1 && consume_stock(memcg))
@@ -2483,9 +2482,9 @@ static void __mem_cgroup_cancel_local_charge(struct mem_cgroup *memcg,
/*
* A helper function to get mem_cgroup from ID. must be called under
- * rcu_read_lock(). The caller must check css_is_removed() or some if
- * it's concern. (dropping refcnt from swap can be called against removed
- * memcg.)
+ * rcu_read_lock(). The caller is responsible for calling css_tryget if
+ * the mem_cgroup is used for charging. (dropping refcnt from swap can be
+ * called against removed memcg.)
*/
static struct mem_cgroup *mem_cgroup_lookup(unsigned short id)
{
--
1.7.11.7
Because ->pre_destroy() could fail and can't be called under
cgroup_mutex, cgroup destruction did something very ugly.
1. Grab cgroup_mutex and verify it can be destroyed; fail otherwise.
2. Release cgroup_mutex and call ->pre_destroy().
3. Re-grab cgroup_mutex and verify it can still be destroyed; fail
otherwise.
4. Continue destroying.
In addition to being ugly, it has always been broken in various ways.
For example, memcg's ->pre_destroy() expects the cgroup to be inactive
after it's done, but tasks can be attached and detached between #2 and
#3, and the conditions that memcg verified in ->pre_destroy() might no
longer hold by the time control reaches #3.
Now that ->pre_destroy() is no longer allowed to fail, we can switch
to the following:
1. Grab cgroup_mutex and verify it can be destroyed; fail otherwise.
2. Deactivate CSS's and mark the cgroup removed thus preventing any
further operations which can invalidate the verification from #1.
3. Release cgroup_mutex and call ->pre_destroy().
4. Re-grab cgroup_mutex and continue destroying.
After this change, controllers can safely assume that ->pre_destroy()
will be called only once for a given cgroup and that, once
->pre_destroy() is called, the cgroup will stay dormant till it's
destroyed.
This removes the only reason ->pre_destroy() could fail - a new task
being attached or a child cgroup being created in between. The
error-out path is removed and the ->pre_destroy() invocation is
open-coded in cgroup_rmdir().
v2: cgroup_call_pre_destroy() removal moved to this patch per Michal.
Commit message updated per Glauber.
Signed-off-by: Tejun Heo <[email protected]>
Reviewed-by: Michal Hocko <[email protected]>
Cc: Glauber Costa <[email protected]>
---
kernel/cgroup.c | 65 +++++++++++++++++----------------------------------------
1 file changed, 19 insertions(+), 46 deletions(-)
diff --git a/kernel/cgroup.c b/kernel/cgroup.c
index f22e3cd..66204a6 100644
--- a/kernel/cgroup.c
+++ b/kernel/cgroup.c
@@ -851,27 +851,6 @@ static struct inode *cgroup_new_inode(umode_t mode, struct super_block *sb)
return inode;
}
-/*
- * Call subsys's pre_destroy handler.
- * This is called before css refcnt check.
- */
-static int cgroup_call_pre_destroy(struct cgroup *cgrp)
-{
- struct cgroup_subsys *ss;
- int ret = 0;
-
- for_each_subsys(cgrp->root, ss) {
- if (!ss->pre_destroy)
- continue;
-
- ret = ss->pre_destroy(cgrp);
- if (WARN_ON_ONCE(ret))
- break;
- }
-
- return ret;
-}
-
static void cgroup_diput(struct dentry *dentry, struct inode *inode)
{
/* is dentry a directory ? if so, kfree() associated cgroup */
@@ -4078,19 +4057,6 @@ static int cgroup_rmdir(struct inode *unused_dir, struct dentry *dentry)
DEFINE_WAIT(wait);
struct cgroup_event *event, *tmp;
struct cgroup_subsys *ss;
- int ret;
-
- /* the vfs holds both inode->i_mutex already */
- mutex_lock(&cgroup_mutex);
- if (atomic_read(&cgrp->count) != 0) {
- mutex_unlock(&cgroup_mutex);
- return -EBUSY;
- }
- if (!list_empty(&cgrp->children)) {
- mutex_unlock(&cgroup_mutex);
- return -EBUSY;
- }
- mutex_unlock(&cgroup_mutex);
/*
* In general, subsystem has no css->refcnt after pre_destroy(). But
@@ -4103,16 +4069,7 @@ static int cgroup_rmdir(struct inode *unused_dir, struct dentry *dentry)
*/
set_bit(CGRP_WAIT_ON_RMDIR, &cgrp->flags);
- /*
- * Call pre_destroy handlers of subsys. Notify subsystems
- * that rmdir() request comes.
- */
- ret = cgroup_call_pre_destroy(cgrp);
- if (ret) {
- clear_bit(CGRP_WAIT_ON_RMDIR, &cgrp->flags);
- return ret;
- }
-
+ /* the vfs holds both inode->i_mutex already */
mutex_lock(&cgroup_mutex);
parent = cgrp->parent;
if (atomic_read(&cgrp->count) || !list_empty(&cgrp->children)) {
@@ -4122,13 +4079,30 @@ static int cgroup_rmdir(struct inode *unused_dir, struct dentry *dentry)
}
prepare_to_wait(&cgroup_rmdir_waitq, &wait, TASK_INTERRUPTIBLE);
- /* block new css_tryget() by deactivating refcnt */
+ /*
+ * Block new css_tryget() by deactivating refcnt and mark @cgrp
+ * removed. This makes future css_tryget() and child creation
+ * attempts fail thus maintaining the removal conditions verified
+ * above.
+ */
for_each_subsys(cgrp->root, ss) {
struct cgroup_subsys_state *css = cgrp->subsys[ss->subsys_id];
WARN_ON(atomic_read(&css->refcnt) < 0);
atomic_add(CSS_DEACT_BIAS, &css->refcnt);
}
+ set_bit(CGRP_REMOVED, &cgrp->flags);
+
+ /*
+ * Tell subsystems to initate destruction. pre_destroy() should be
+ * called with cgroup_mutex unlocked. See 3fa59dfbc3 ("cgroup: fix
+ * potential deadlock in pre_destroy") for details.
+ */
+ mutex_unlock(&cgroup_mutex);
+ for_each_subsys(cgrp->root, ss)
+ if (ss->pre_destroy)
+ WARN_ON_ONCE(ss->pre_destroy(cgrp));
+ mutex_lock(&cgroup_mutex);
/*
* Put all the base refs. Each css holds an extra reference to the
@@ -4144,7 +4118,6 @@ static int cgroup_rmdir(struct inode *unused_dir, struct dentry *dentry)
clear_bit(CGRP_WAIT_ON_RMDIR, &cgrp->flags);
raw_spin_lock(&release_list_lock);
- set_bit(CGRP_REMOVED, &cgrp->flags);
if (!list_empty(&cgrp->release_list))
list_del_init(&cgrp->release_list);
raw_spin_unlock(&release_list_lock);
--
1.7.11.7
From: Michal Hocko <[email protected]>
Now that pre_destroy callbacks are called from a context where no task
can be attached to the group and no child group can be added, there is
no way left for mem_cgroup_pre_destroy to fail.
mem_cgroup_pre_destroy doesn't have to take a reference to memcg's css
because all css's are already marked dead.
tj: Remove now unused local variable @cgrp from
mem_cgroup_reparent_charges().
Signed-off-by: Michal Hocko <[email protected]>
Reviewed-by: Glauber Costa <[email protected]>
Signed-off-by: Tejun Heo <[email protected]>
---
mm/memcontrol.c | 19 ++++++-------------
1 file changed, 6 insertions(+), 13 deletions(-)
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 930edfa..6678f99 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -3740,14 +3740,11 @@ static void mem_cgroup_force_empty_list(struct mem_cgroup *memcg,
*
* Caller is responsible for holding css reference on the memcg.
*/
-static int mem_cgroup_reparent_charges(struct mem_cgroup *memcg)
+static void mem_cgroup_reparent_charges(struct mem_cgroup *memcg)
{
- struct cgroup *cgrp = memcg->css.cgroup;
int node, zid;
do {
- if (cgroup_task_count(cgrp) || !list_empty(&cgrp->children))
- return -EBUSY;
/* This is for making all *used* pages to be on LRU. */
lru_add_drain_all();
drain_all_stock_sync(memcg);
@@ -3773,8 +3770,6 @@ static int mem_cgroup_reparent_charges(struct mem_cgroup *memcg)
* charge before adding to the LRU.
*/
} while (res_counter_read_u64(&memcg->res, RES_USAGE) > 0);
-
- return 0;
}
/*
@@ -3811,7 +3806,9 @@ static int mem_cgroup_force_empty(struct mem_cgroup *memcg)
}
lru_add_drain();
- return mem_cgroup_reparent_charges(memcg);
+ mem_cgroup_reparent_charges(memcg);
+
+ return 0;
}
static int mem_cgroup_force_empty_write(struct cgroup *cont, unsigned int event)
@@ -5008,13 +5005,9 @@ free_out:
static int mem_cgroup_pre_destroy(struct cgroup *cont)
{
struct mem_cgroup *memcg = mem_cgroup_from_cont(cont);
- int ret;
- css_get(&memcg->css);
- ret = mem_cgroup_reparent_charges(memcg);
- css_put(&memcg->css);
-
- return ret;
+ mem_cgroup_reparent_charges(memcg);
+ return 0;
}
static void mem_cgroup_destroy(struct cgroup *cont)
--
1.7.11.7
All ->pre_destroy() implementations return 0 now, which is the only
allowed return value. Make it return void.
Signed-off-by: Tejun Heo <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Balbir Singh <[email protected]>
Cc: KAMEZAWA Hiroyuki <[email protected]>
Cc: Vivek Goyal <[email protected]>
---
block/blk-cgroup.c | 3 +--
include/linux/cgroup.h | 2 +-
kernel/cgroup.c | 2 +-
mm/hugetlb_cgroup.c | 4 +---
mm/memcontrol.c | 3 +--
5 files changed, 5 insertions(+), 9 deletions(-)
diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
index f3b44a6..a7816f3 100644
--- a/block/blk-cgroup.c
+++ b/block/blk-cgroup.c
@@ -600,7 +600,7 @@ struct cftype blkcg_files[] = {
*
* This is the blkcg counterpart of ioc_release_fn().
*/
-static int blkcg_pre_destroy(struct cgroup *cgroup)
+static void blkcg_pre_destroy(struct cgroup *cgroup)
{
struct blkcg *blkcg = cgroup_to_blkcg(cgroup);
@@ -622,7 +622,6 @@ static int blkcg_pre_destroy(struct cgroup *cgroup)
}
spin_unlock_irq(&blkcg->lock);
- return 0;
}
static void blkcg_destroy(struct cgroup *cgroup)
diff --git a/include/linux/cgroup.h b/include/linux/cgroup.h
index 47868a8..adb2adc 100644
--- a/include/linux/cgroup.h
+++ b/include/linux/cgroup.h
@@ -436,7 +436,7 @@ int cgroup_taskset_size(struct cgroup_taskset *tset);
struct cgroup_subsys {
struct cgroup_subsys_state *(*create)(struct cgroup *cgrp);
- int (*pre_destroy)(struct cgroup *cgrp);
+ void (*pre_destroy)(struct cgroup *cgrp);
void (*destroy)(struct cgroup *cgrp);
int (*can_attach)(struct cgroup *cgrp, struct cgroup_taskset *tset);
void (*cancel_attach)(struct cgroup *cgrp, struct cgroup_taskset *tset);
diff --git a/kernel/cgroup.c b/kernel/cgroup.c
index c5f6fb2..83cd7d0 100644
--- a/kernel/cgroup.c
+++ b/kernel/cgroup.c
@@ -4054,7 +4054,7 @@ static int cgroup_rmdir(struct inode *unused_dir, struct dentry *dentry)
mutex_unlock(&cgroup_mutex);
for_each_subsys(cgrp->root, ss)
if (ss->pre_destroy)
- WARN_ON_ONCE(ss->pre_destroy(cgrp));
+ ss->pre_destroy(cgrp);
mutex_lock(&cgroup_mutex);
/*
diff --git a/mm/hugetlb_cgroup.c b/mm/hugetlb_cgroup.c
index dc595c6..0d3a1a3 100644
--- a/mm/hugetlb_cgroup.c
+++ b/mm/hugetlb_cgroup.c
@@ -155,7 +155,7 @@ out:
* Force the hugetlb cgroup to empty the hugetlb resources by moving them to
* the parent cgroup.
*/
-static int hugetlb_cgroup_pre_destroy(struct cgroup *cgroup)
+static void hugetlb_cgroup_pre_destroy(struct cgroup *cgroup)
{
struct hstate *h;
struct page *page;
@@ -172,8 +172,6 @@ static int hugetlb_cgroup_pre_destroy(struct cgroup *cgroup)
}
cond_resched();
} while (hugetlb_cgroup_have_usage(cgroup));
-
- return 0;
}
int hugetlb_cgroup_charge_cgroup(int idx, unsigned long nr_pages,
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 6678f99..a1811ce 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -5002,12 +5002,11 @@ free_out:
return ERR_PTR(error);
}
-static int mem_cgroup_pre_destroy(struct cgroup *cont)
+static void mem_cgroup_pre_destroy(struct cgroup *cont)
{
struct mem_cgroup *memcg = mem_cgroup_from_cont(cont);
mem_cgroup_reparent_charges(memcg);
- return 0;
}
static void mem_cgroup_destroy(struct cgroup *cont)
--
1.7.11.7
From: Michal Hocko <[email protected]>
Now that pre_destroy callbacks are called from a context where no task
can be attached to the group and no child group can be added, there is
no way left for hugetlb_pre_destroy to fail.
Signed-off-by: Michal Hocko <[email protected]>
Reviewed-by: Tejun Heo <[email protected]>
Reviewed-by: Glauber Costa <[email protected]>
Signed-off-by: Tejun Heo <[email protected]>
---
mm/hugetlb_cgroup.c | 11 +++--------
1 file changed, 3 insertions(+), 8 deletions(-)
diff --git a/mm/hugetlb_cgroup.c b/mm/hugetlb_cgroup.c
index a3f358f..dc595c6 100644
--- a/mm/hugetlb_cgroup.c
+++ b/mm/hugetlb_cgroup.c
@@ -159,14 +159,9 @@ static int hugetlb_cgroup_pre_destroy(struct cgroup *cgroup)
{
struct hstate *h;
struct page *page;
- int ret = 0, idx = 0;
+ int idx = 0;
do {
- if (cgroup_task_count(cgroup) ||
- !list_empty(&cgroup->children)) {
- ret = -EBUSY;
- goto out;
- }
for_each_hstate(h) {
spin_lock(&hugetlb_lock);
list_for_each_entry(page, &h->hugepage_activelist, lru)
@@ -177,8 +172,8 @@ static int hugetlb_cgroup_pre_destroy(struct cgroup *cgroup)
}
cond_resched();
} while (hugetlb_cgroup_have_usage(cgroup));
-out:
- return ret;
+
+ return 0;
}
int hugetlb_cgroup_charge_cgroup(int idx, unsigned long nr_pages,
--
1.7.11.7
CGRP_WAIT_ON_RMDIR is another kludge which was added to make cgroup
destruction rollback somewhat workable. cgroup_rmdir() used to drain
CSS references, and CGRP_WAIT_ON_RMDIR, the associated waitqueue and
the helpers were used to allow the task performing rmdir to wait for
the next relevant event.
Unfortunately, the wait is visible to controllers too and the
mechanism got exposed to memcg by 887032670d ("cgroup avoid permanent
sleep at rmdir").
Now that the draining and retries are gone, CGRP_WAIT_ON_RMDIR is
unnecessary. Remove it and all the mechanisms supporting it. Note
that the memcontrol.c changes are essentially a revert of 887032670d
("cgroup avoid permanent sleep at rmdir").
Signed-off-by: Tejun Heo <[email protected]>
Reviewed-by: Michal Hocko <[email protected]>
Cc: Balbir Singh <[email protected]>
Cc: KAMEZAWA Hiroyuki <[email protected]>
---
include/linux/cgroup.h | 21 ---------------------
kernel/cgroup.c | 51 --------------------------------------------------
mm/memcontrol.c | 24 +-----------------------
3 files changed, 1 insertion(+), 95 deletions(-)
diff --git a/include/linux/cgroup.h b/include/linux/cgroup.h
index a309804..47868a8 100644
--- a/include/linux/cgroup.h
+++ b/include/linux/cgroup.h
@@ -145,10 +145,6 @@ enum {
/* Control Group requires release notifications to userspace */
CGRP_NOTIFY_ON_RELEASE,
/*
- * A thread in rmdir() is wating for this cgroup.
- */
- CGRP_WAIT_ON_RMDIR,
- /*
* Clone cgroup values when creating a new child cgroup
*/
CGRP_CLONE_CHILDREN,
@@ -412,23 +408,6 @@ int cgroup_task_count(const struct cgroup *cgrp);
int cgroup_is_descendant(const struct cgroup *cgrp, struct task_struct *task);
/*
- * When the subsys has to access css and may add permanent refcnt to css,
- * it should take care of racy conditions with rmdir(). Following set of
- * functions, is for stop/restart rmdir if necessary.
- * Because these will call css_get/put, "css" should be alive css.
- *
- * cgroup_exclude_rmdir();
- * ...do some jobs which may access arbitrary empty cgroup
- * cgroup_release_and_wakeup_rmdir();
- *
- * When someone removes a cgroup while cgroup_exclude_rmdir() holds it,
- * it sleeps and cgroup_release_and_wakeup_rmdir() will wake him up.
- */
-
-void cgroup_exclude_rmdir(struct cgroup_subsys_state *css);
-void cgroup_release_and_wakeup_rmdir(struct cgroup_subsys_state *css);
-
-/*
* Control Group taskset, used to pass around set of tasks to cgroup_subsys
* methods.
*/
diff --git a/kernel/cgroup.c b/kernel/cgroup.c
index 66204a6..c5f6fb2 100644
--- a/kernel/cgroup.c
+++ b/kernel/cgroup.c
@@ -966,33 +966,6 @@ static void cgroup_d_remove_dir(struct dentry *dentry)
}
/*
- * A queue for waiters to do rmdir() cgroup. A tasks will sleep when
- * cgroup->count == 0 && list_empty(&cgroup->children) && subsys has some
- * reference to css->refcnt. In general, this refcnt is expected to goes down
- * to zero, soon.
- *
- * CGRP_WAIT_ON_RMDIR flag is set under cgroup's inode->i_mutex;
- */
-static DECLARE_WAIT_QUEUE_HEAD(cgroup_rmdir_waitq);
-
-static void cgroup_wakeup_rmdir_waiter(struct cgroup *cgrp)
-{
- if (unlikely(test_and_clear_bit(CGRP_WAIT_ON_RMDIR, &cgrp->flags)))
- wake_up_all(&cgroup_rmdir_waitq);
-}
-
-void cgroup_exclude_rmdir(struct cgroup_subsys_state *css)
-{
- css_get(css);
-}
-
-void cgroup_release_and_wakeup_rmdir(struct cgroup_subsys_state *css)
-{
- cgroup_wakeup_rmdir_waiter(css->cgroup);
- css_put(css);
-}
-
-/*
* Call with cgroup_mutex held. Drops reference counts on modules, including
* any duplicate ones that parse_cgroupfs_options took. If this function
* returns an error, no reference counts are touched.
@@ -1963,12 +1936,6 @@ int cgroup_attach_task(struct cgroup *cgrp, struct task_struct *tsk)
}
synchronize_rcu();
-
- /*
- * wake up rmdir() waiter. the rmdir should fail since the cgroup
- * is no longer empty.
- */
- cgroup_wakeup_rmdir_waiter(cgrp);
out:
if (retval) {
for_each_subsys(root, ss) {
@@ -2138,7 +2105,6 @@ static int cgroup_attach_proc(struct cgroup *cgrp, struct task_struct *leader)
* step 5: success! and cleanup
*/
synchronize_rcu();
- cgroup_wakeup_rmdir_waiter(cgrp);
retval = 0;
out_put_css_set_refs:
if (retval) {
@@ -4058,26 +4024,13 @@ static int cgroup_rmdir(struct inode *unused_dir, struct dentry *dentry)
struct cgroup_event *event, *tmp;
struct cgroup_subsys *ss;
- /*
- * In general, subsystem has no css->refcnt after pre_destroy(). But
- * in racy cases, subsystem may have to get css->refcnt after
- * pre_destroy() and it makes rmdir return with -EBUSY. This sometimes
- * make rmdir return -EBUSY too often. To avoid that, we use waitqueue
- * for cgroup's rmdir. CGRP_WAIT_ON_RMDIR is for synchronizing rmdir
- * and subsystem's reference count handling. Please see css_get/put
- * and css_tryget() and cgroup_wakeup_rmdir_waiter() implementation.
- */
- set_bit(CGRP_WAIT_ON_RMDIR, &cgrp->flags);
-
/* the vfs holds both inode->i_mutex already */
mutex_lock(&cgroup_mutex);
parent = cgrp->parent;
if (atomic_read(&cgrp->count) || !list_empty(&cgrp->children)) {
- clear_bit(CGRP_WAIT_ON_RMDIR, &cgrp->flags);
mutex_unlock(&cgroup_mutex);
return -EBUSY;
}
- prepare_to_wait(&cgroup_rmdir_waitq, &wait, TASK_INTERRUPTIBLE);
/*
* Block new css_tryget() by deactivating refcnt and mark @cgrp
@@ -4114,9 +4067,6 @@ static int cgroup_rmdir(struct inode *unused_dir, struct dentry *dentry)
for_each_subsys(cgrp->root, ss)
css_put(cgrp->subsys[ss->subsys_id]);
- finish_wait(&cgroup_rmdir_waitq, &wait);
- clear_bit(CGRP_WAIT_ON_RMDIR, &cgrp->flags);
-
raw_spin_lock(&release_list_lock);
if (!list_empty(&cgrp->release_list))
list_del_init(&cgrp->release_list);
@@ -4864,7 +4814,6 @@ void __css_put(struct cgroup_subsys_state *css)
set_bit(CGRP_RELEASABLE, &cgrp->flags);
check_for_release(cgrp);
}
- cgroup_wakeup_rmdir_waiter(cgrp);
break;
case 0:
schedule_work(&css->dput_work);
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 37c3566..930edfa 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -2681,13 +2681,6 @@ static int mem_cgroup_move_account(struct page *page,
/* caller should have done css_get */
pc->mem_cgroup = to;
mem_cgroup_charge_statistics(to, anon, nr_pages);
- /*
- * We charges against "to" which may not have any tasks. Then, "to"
- * can be under rmdir(). But in current implementation, caller of
- * this function is just force_empty() and move charge, so it's
- * guaranteed that "to" is never removed. So, we don't check rmdir
- * status here.
- */
move_unlock_mem_cgroup(from, &flags);
ret = 0;
unlock:
@@ -2893,7 +2886,6 @@ __mem_cgroup_commit_charge_swapin(struct page *page, struct mem_cgroup *memcg,
return;
if (!memcg)
return;
- cgroup_exclude_rmdir(&memcg->css);
__mem_cgroup_commit_charge(memcg, page, 1, ctype, true);
/*
@@ -2907,12 +2899,6 @@ __mem_cgroup_commit_charge_swapin(struct page *page, struct mem_cgroup *memcg,
swp_entry_t ent = {.val = page_private(page)};
mem_cgroup_uncharge_swap(ent);
}
- /*
- * At swapin, we may charge account against cgroup which has no tasks.
- * So, rmdir()->pre_destroy() can be called while we do this charge.
- * In that case, we need to call pre_destroy() again. check it here.
- */
- cgroup_release_and_wakeup_rmdir(&memcg->css);
}
void mem_cgroup_commit_charge_swapin(struct page *page,
@@ -3360,8 +3346,7 @@ void mem_cgroup_end_migration(struct mem_cgroup *memcg,
if (!memcg)
return;
- /* blocks rmdir() */
- cgroup_exclude_rmdir(&memcg->css);
+
if (!migration_ok) {
used = oldpage;
unused = newpage;
@@ -3395,13 +3380,6 @@ void mem_cgroup_end_migration(struct mem_cgroup *memcg,
*/
if (anon)
mem_cgroup_uncharge_page(used);
- /*
- * At migration, we may charge account against cgroup which has no
- * tasks.
- * So, rmdir()->pre_destroy() can be called while we do this charge.
- * In that case, we need to call pre_destroy() again. check it here.
- */
- cgroup_release_and_wakeup_rmdir(&memcg->css);
}
/*
--
1.7.11.7
This patch makes cgroup_create() fail if @parent is marked removed.
This is to prepare for further updates to the cgroup_rmdir() path.
Note that this change isn't strictly necessary. A cgroup can only be
created via mkdir, and the removed marking and dentry removal happen
without releasing cgroup_mutex, so cgroup_create() can never race with
cgroup_rmdir(). Even after the scheduled updates to cgroup_rmdir(),
cgroup_mkdir() and cgroup_rmdir() are synchronized by i_mutex,
rendering the added liveliness check unnecessary.
Do it anyway such that locking is contained inside cgroup proper and
we don't get nasty surprises if we ever grow another caller of
cgroup_create().
Signed-off-by: Tejun Heo <[email protected]>
Reviewed-by: Michal Hocko <[email protected]>
---
kernel/cgroup.c | 16 +++++++++++++---
1 file changed, 13 insertions(+), 3 deletions(-)
diff --git a/kernel/cgroup.c b/kernel/cgroup.c
index c194f9e..f22e3cd 100644
--- a/kernel/cgroup.c
+++ b/kernel/cgroup.c
@@ -3927,6 +3927,18 @@ static long cgroup_create(struct cgroup *parent, struct dentry *dentry,
if (!cgrp)
return -ENOMEM;
+ /*
+ * Only live parents can have children. Note that the liveliness
+ * check isn't strictly necessary because cgroup_mkdir() and
+ * cgroup_rmdir() are fully synchronized by i_mutex; however, do it
+ * anyway so that locking is contained inside cgroup proper and we
+ * don't get nasty surprises if we ever grow another caller.
+ */
+ if (!cgroup_lock_live_group(parent)) {
+ err = -ENODEV;
+ goto err_free;
+ }
+
/* Grab a reference on the superblock so the hierarchy doesn't
* get deleted on unmount if there are child cgroups. This
* can be done outside cgroup_mutex, since the sb can't
@@ -3934,8 +3946,6 @@ static long cgroup_create(struct cgroup *parent, struct dentry *dentry,
* fs */
atomic_inc(&sb->s_active);
- mutex_lock(&cgroup_mutex);
-
init_cgroup_housekeeping(cgrp);
cgrp->parent = parent;
@@ -4006,7 +4016,7 @@ static long cgroup_create(struct cgroup *parent, struct dentry *dentry,
/* Release the reference count that we took on the superblock */
deactivate_super(sb);
-
+err_free:
kfree(cgrp);
return err;
}
--
1.7.11.7
On Wed 31-10-12 12:44:03, Tejun Heo wrote:
[...]
> local_irq_save/restore() from cgroup_clear_css_refs() are replaced
> with local_irq_disable/enable() for simplicity. This is safe as
> cgroup_rmdir() is always called with IRQ enabled. Note that this IRQ
> switching is necessary to make CSS deactivation and CSS_REMOVED
> assertion atomic against css_tryget() and will be removed by future
> changes.
local_irq_disable doesn't guarantee atomicity, because other CPUs will
see the change in steps so this is a bit misleading. The real reason
AFAICS is that we do not want to hang in css_tryget from IRQ context
(does this really happen btw.?) which would interrupt cgroup_rmdir
between we add CSS_DEACT_BIAS and the group is marked CGRP_REMOVED.
Or am I still missing the point?
[...]
--
Michal Hocko
SUSE Labs
Hello,
On Wed, Oct 31, 2012 at 1:08 PM, Michal Hocko <[email protected]> wrote:
> local_irq_disable doesn't guarantee atomicity, because other CPUs will
Maybe it should say atomicity on the local CPU.
> see the change in steps so this is a bit misleading. The real reason
> AFAICS is that we do not want to hang in css_tryget from IRQ context
> (does this really happen btw.?) which would interrupt cgroup_rmdir
> between we add CSS_DEACT_BIAS and the group is marked CGRP_REMOVED.
> Or am I still missing the point?
Yeah, that's the correct one. We don't want tryget happening between
DEACT_BIAS and REMOVED as it can hang forever there.
Thanks.
--
tejun
On Wed 31-10-12 12:44:03, Tejun Heo wrote:
> diff --git a/kernel/cgroup.c b/kernel/cgroup.c
> index 7981850..8c605e2 100644
> --- a/kernel/cgroup.c
> +++ b/kernel/cgroup.c
> @@ -865,11 +865,8 @@ static int cgroup_call_pre_destroy(struct cgroup *cgrp)
> continue;
>
> ret = ss->pre_destroy(cgrp);
> - if (ret) {
> - /* ->pre_destroy() failure is being deprecated */
> - WARN_ON_ONCE(!ss->__DEPRECATED_clear_css_refs);
> + if (WARN_ON_ONCE(ret))
> break;
> - }
> }
>
> return ret;
And sorry for being so annoying about this WARN_ON_ONCE but I really
don't see any reason for it. pre_destroy can still fail and now it is
even reduced to a reasonable condition (somebody showed up). So I would
put it out of the way as it doesn't give us anything and, as I've
already mentioned, one can trigger it really easily.
--
Michal Hocko
SUSE Labs
Hey, Michal.
On Wed, Oct 31, 2012 at 1:12 PM, Michal Hocko <[email protected]> wrote:
> And sorry for being so annoying about this WARN_ON_ONCE but I really
> don't see any reason for it. pre_destroy can still fail and now it is
> even reduced to a reasonable condition (somebody showed up). So I would
> put it out of the way as it doesn't give us anything and, as I've
> already mentioned, one can trigger it really easily.
It was there before and goes away in several commits. I have to
explain all that to remove it in this patch. Why do that when the
whole thing is just gonna disappear anyway?
Thanks.
--
tejun
On Wed 31-10-12 13:11:02, Tejun Heo wrote:
> Hello,
>
> On Wed, Oct 31, 2012 at 1:08 PM, Michal Hocko <[email protected]> wrote:
> > local_irq_disable doesn't guarantee atomicity, because other CPUs will
>
> Maybe it should say atomicity on the local CPU.
That would be more clear but being more verbose doesn't hurt either :P
> > see the change in steps so this is a bit misleading. The real reason
> > AFAICS is that we do not want to hang in css_tryget from IRQ context
> > (does this really happen btw.?) which would interrupt cgroup_rmdir
> > between we add CSS_DEACT_BIAS and the group is marked CGRP_REMOVED.
> > Or am I still missing the point?
>
> Yeah, that's the correct one. We don't want tryget happening between
> DEACT_BIAS and REMOVED as it can hang forever there.
--
Michal Hocko
SUSE Labs
On Wed, Oct 31, 2012 at 09:14:16PM +0100, Michal Hocko wrote:
> On Wed 31-10-12 13:11:02, Tejun Heo wrote:
> > Hello,
> >
> > On Wed, Oct 31, 2012 at 1:08 PM, Michal Hocko <[email protected]> wrote:
> > > local_irq_disable doesn't guarantee atomicity, because other CPUs will
> >
> > Maybe it should say atomicity on the local CPU.
>
> That would be more clear but being more verbose doesn't hurt either :P
>
> > > see the change in steps so this is a bit misleading. The real reason
> > > AFAICS is that we do not want to hang in css_tryget from IRQ context
> > > (does this really happen btw.?) which would interrupt cgroup_rmdir
> > > between we add CSS_DEACT_BIAS and the group is marked CGRP_REMOVED.
> > > Or am I still missing the point?
> >
> > Yeah, that's the correct one. We don't want tryget happening between
> > DEACT_BIAS and REMOVED as it can hang forever there.
Here's the updated description. git branch updated accordingly.
Thanks.
From: Tejun Heo <[email protected]>
Subject: cgroup: kill cgroup_subsys->__DEPRECATED_clear_css_refs
2ef37d3fe4 ("memcg: Simplify mem_cgroup_force_empty_list error
handling") removed the last user of __DEPRECATED_clear_css_refs. This
patch removes __DEPRECATED_clear_css_refs and mechanisms to support
it.
* Conditionals dependent on __DEPRECATED_clear_css_refs removed.
* cgroup_clear_css_refs() can no longer fail. All that needs to be
done is deactivating refcnts, setting CSS_REMOVED and putting the
base reference on each css. Remove cgroup_clear_css_refs() and the
failure path, and open-code the loops into cgroup_rmdir().
This patch keeps the two for_each_subsys() loops separate while
open-coding them. They can be merged now but there are scheduled
changes which need them to be separate, so keep them separate to
reduce the amount of churn.
local_irq_save/restore() from cgroup_clear_css_refs() are replaced
with local_irq_disable/enable() for simplicity. This is safe as
cgroup_rmdir() is always called with IRQ enabled. Note that this IRQ
switching is necessary to ensure that css_tryget() isn't called from
IRQ context on the same CPU while lower context is between CSS
deactivation and setting CSS_REMOVED as css_tryget() would hang
forever in such cases waiting for CSS to be re-activated or
CSS_REMOVED set. This will go away soon.
v2: cgroup_call_pre_destroy() removal dropped per Michal. Commit
message updated to explain local_irq_disable/enable() conversion.
Signed-off-by: Tejun Heo <[email protected]>
Reviewed-by: Michal Hocko <[email protected]>
---
include/linux/cgroup.h | 12 ----
kernel/cgroup.c | 131 +++++++++++++------------------------------------
2 files changed, 36 insertions(+), 107 deletions(-)
--- a/include/linux/cgroup.h
+++ b/include/linux/cgroup.h
@@ -86,7 +86,6 @@ struct cgroup_subsys_state {
enum {
CSS_ROOT, /* This CSS is the root of the subsystem */
CSS_REMOVED, /* This CSS is dead */
- CSS_CLEAR_CSS_REFS, /* @ss->__DEPRECATED_clear_css_refs */
};
/* Caller must verify that the css is not for root cgroup */
@@ -485,17 +484,6 @@ struct cgroup_subsys {
*/
bool use_id;
- /*
- * If %true, cgroup removal will try to clear css refs by retrying
- * ss->pre_destroy() until there's no css ref left. This behavior
- * is strictly for backward compatibility and will be removed as
- * soon as the current user (memcg) is updated.
- *
- * If %false, ss->pre_destroy() can't fail and cgroup removal won't
- * wait for css refs to drop to zero before proceeding.
- */
- bool __DEPRECATED_clear_css_refs;
-
#define MAX_CGROUP_TYPE_NAMELEN 32
const char *name;
--- a/kernel/cgroup.c
+++ b/kernel/cgroup.c
@@ -865,11 +865,8 @@ static int cgroup_call_pre_destroy(struc
continue;
ret = ss->pre_destroy(cgrp);
- if (ret) {
- /* ->pre_destroy() failure is being deprecated */
- WARN_ON_ONCE(!ss->__DEPRECATED_clear_css_refs);
+ if (WARN_ON_ONCE(ret))
break;
- }
}
return ret;
@@ -3901,14 +3898,12 @@ static void init_cgroup_css(struct cgrou
cgrp->subsys[ss->subsys_id] = css;
/*
- * If !clear_css_refs, css holds an extra ref to @cgrp->dentry
- * which is put on the last css_put(). dput() requires process
- * context, which css_put() may be called without. @css->dput_work
- * will be used to invoke dput() asynchronously from css_put().
+ * css holds an extra ref to @cgrp->dentry which is put on the last
+ * css_put(). dput() requires process context, which css_put() may
+ * be called without. @css->dput_work will be used to invoke
+ * dput() asynchronously from css_put().
*/
INIT_WORK(&css->dput_work, css_dput_fn);
- if (ss->__DEPRECATED_clear_css_refs)
- set_bit(CSS_CLEAR_CSS_REFS, &css->flags);
}
/*
@@ -3978,10 +3973,9 @@ static long cgroup_create(struct cgroup
if (err < 0)
goto err_remove;
- /* If !clear_css_refs, each css holds a ref to the cgroup's dentry */
+ /* each css holds a ref to the cgroup's dentry */
for_each_subsys(root, ss)
- if (!ss->__DEPRECATED_clear_css_refs)
- dget(dentry);
+ dget(dentry);
/* The cgroup directory was pre-locked for us */
BUG_ON(!mutex_is_locked(&cgrp->dentry->d_inode->i_mutex));
@@ -4066,71 +4060,6 @@ static int cgroup_has_css_refs(struct cg
return 0;
}
-/*
- * Atomically mark all (or else none) of the cgroup's CSS objects as
- * CSS_REMOVED. Return true on success, or false if the cgroup has
- * busy subsystems. Call with cgroup_mutex held
- *
- * Depending on whether a subsys has __DEPRECATED_clear_css_refs set or
- * not, cgroup removal behaves differently.
- *
- * If clear is set, css refcnt for the subsystem should be zero before
- * cgroup removal can be committed. This is implemented by
- * CGRP_WAIT_ON_RMDIR and retry logic around ->pre_destroy(), which may be
- * called multiple times until all css refcnts reach zero and is allowed to
- * veto removal on any invocation. This behavior is deprecated and will be
- * removed as soon as the existing user (memcg) is updated.
- *
- * If clear is not set, each css holds an extra reference to the cgroup's
- * dentry and cgroup removal proceeds regardless of css refs.
- * ->pre_destroy() will be called at least once and is not allowed to fail.
- * On the last put of each css, whenever that may be, the extra dentry ref
- * is put so that dentry destruction happens only after all css's are
- * released.
- */
-static int cgroup_clear_css_refs(struct cgroup *cgrp)
-{
- struct cgroup_subsys *ss;
- unsigned long flags;
- bool failed = false;
-
- local_irq_save(flags);
-
- /*
- * Block new css_tryget() by deactivating refcnt. If all refcnts
- * for subsystems w/ clear_css_refs set were 1 at the moment of
- * deactivation, we succeeded.
- */
- for_each_subsys(cgrp->root, ss) {
- struct cgroup_subsys_state *css = cgrp->subsys[ss->subsys_id];
-
- WARN_ON(atomic_read(&css->refcnt) < 0);
- atomic_add(CSS_DEACT_BIAS, &css->refcnt);
-
- if (ss->__DEPRECATED_clear_css_refs)
- failed |= css_refcnt(css) != 1;
- }
-
- /*
- * If succeeded, set REMOVED and put all the base refs; otherwise,
- * restore refcnts to positive values. Either way, all in-progress
- * css_tryget() will be released.
- */
- for_each_subsys(cgrp->root, ss) {
- struct cgroup_subsys_state *css = cgrp->subsys[ss->subsys_id];
-
- if (!failed) {
- set_bit(CSS_REMOVED, &css->flags);
- css_put(css);
- } else {
- atomic_sub(CSS_DEACT_BIAS, &css->refcnt);
- }
- }
-
- local_irq_restore(flags);
- return !failed;
-}
-
static int cgroup_rmdir(struct inode *unused_dir, struct dentry *dentry)
{
struct cgroup *cgrp = dentry->d_fsdata;
@@ -4138,10 +4067,10 @@ static int cgroup_rmdir(struct inode *un
struct cgroup *parent;
DEFINE_WAIT(wait);
struct cgroup_event *event, *tmp;
+ struct cgroup_subsys *ss;
int ret;
/* the vfs holds both inode->i_mutex already */
-again:
mutex_lock(&cgroup_mutex);
if (atomic_read(&cgrp->count) != 0) {
mutex_unlock(&cgroup_mutex);
@@ -4182,21 +4111,34 @@ again:
return -EBUSY;
}
prepare_to_wait(&cgroup_rmdir_waitq, &wait, TASK_INTERRUPTIBLE);
- if (!cgroup_clear_css_refs(cgrp)) {
- mutex_unlock(&cgroup_mutex);
- /*
- * Because someone may call cgroup_wakeup_rmdir_waiter() before
- * prepare_to_wait(), we need to check this flag.
- */
- if (test_bit(CGRP_WAIT_ON_RMDIR, &cgrp->flags))
- schedule();
- finish_wait(&cgroup_rmdir_waitq, &wait);
- clear_bit(CGRP_WAIT_ON_RMDIR, &cgrp->flags);
- if (signal_pending(current))
- return -EINTR;
- goto again;
+
+ local_irq_disable();
+
+ /* block new css_tryget() by deactivating refcnt */
+ for_each_subsys(cgrp->root, ss) {
+ struct cgroup_subsys_state *css = cgrp->subsys[ss->subsys_id];
+
+ WARN_ON(atomic_read(&css->refcnt) < 0);
+ atomic_add(CSS_DEACT_BIAS, &css->refcnt);
}
- /* NO css_tryget() can success after here. */
+
+ /*
+ * Set REMOVED. All in-progress css_tryget() will be released.
+ * Put all the base refs. Each css holds an extra reference to the
+ * cgroup's dentry and cgroup removal proceeds regardless of css
+ * refs. On the last put of each css, whenever that may be, the
+ * extra dentry ref is put so that dentry destruction happens only
+ * after all css's are released.
+ */
+ for_each_subsys(cgrp->root, ss) {
+ struct cgroup_subsys_state *css = cgrp->subsys[ss->subsys_id];
+
+ set_bit(CSS_REMOVED, &css->flags);
+ css_put(css);
+ }
+
+ local_irq_enable();
+
finish_wait(&cgroup_rmdir_waitq, &wait);
clear_bit(CGRP_WAIT_ON_RMDIR, &cgrp->flags);
@@ -4949,8 +4891,7 @@ void __css_put(struct cgroup_subsys_stat
cgroup_wakeup_rmdir_waiter(cgrp);
break;
case 0:
- if (!test_bit(CSS_CLEAR_CSS_REFS, &css->flags))
- schedule_work(&css->dput_work);
+ schedule_work(&css->dput_work);
break;
}
rcu_read_unlock();
On Wed 31-10-12 13:24:00, Tejun Heo wrote:
[...]
> local_irq_save/restore() from cgroup_clear_css_refs() are replaced
> with local_irq_disable/enable() for simplicity. This is safe as
> cgroup_rmdir() is always called with IRQ enabled. Note that this IRQ
> switching is necessary to ensure that css_tryget() isn't called from
> IRQ context on the same CPU while lower context is between CSS
> deactivation and setting CSS_REMOVED as css_tryget() would hang
> forever in such cases waiting for CSS to be re-activated or
> CSS_REMOVED set. This will go away soon.
Much better. Thanks a lot!
--
Michal Hocko
SUSE Labs
On Wed 31-10-12 13:14:07, Tejun Heo wrote:
> Hey, Michal.
>
> On Wed, Oct 31, 2012 at 1:12 PM, Michal Hocko <[email protected]> wrote:
> > And sorry for being so annoying about this WARN_ON_ONCE but I really
> > don't see any reason for it. pre_destroy can still fail and now it is
> > even reduced to a reasonable condition (somebody showed up). So I would
> > put it out of the way as it doesn't give us anything and, as I've
> > already mentioned, one can trigger it really easily.
>
> It was there before and goes away in several commits. I have to
> explain all that to remove it in this patch. Why do that when the
> whole thing is just gonna disappear anyway?
I would consider it nicer but if you feel it is not worth it then don't
worry. It is not a big deal. The important part is that we do not ignore
the return value now.
Thanks
--
Michal Hocko
SUSE Labs
On Wed 31-10-12 12:44:10, Tejun Heo wrote:
> All ->pre_destroy() implementations return 0 now, which is the only
> allowed return value. Make it return void.
>
> Signed-off-by: Tejun Heo <[email protected]>
> Cc: Michal Hocko <[email protected]>
> Cc: Balbir Singh <[email protected]>
> Cc: KAMEZAWA Hiroyuki <[email protected]>
> Cc: Vivek Goyal <[email protected]>
I thought I already gave my r-b but anyway
Reviewed-by: Michal Hocko <[email protected]>
> ---
> block/blk-cgroup.c | 3 +--
> include/linux/cgroup.h | 2 +-
> kernel/cgroup.c | 2 +-
> mm/hugetlb_cgroup.c | 4 +---
> mm/memcontrol.c | 3 +--
> 5 files changed, 5 insertions(+), 9 deletions(-)
>
> diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
> index f3b44a6..a7816f3 100644
> --- a/block/blk-cgroup.c
> +++ b/block/blk-cgroup.c
> @@ -600,7 +600,7 @@ struct cftype blkcg_files[] = {
> *
> * This is the blkcg counterpart of ioc_release_fn().
> */
> -static int blkcg_pre_destroy(struct cgroup *cgroup)
> +static void blkcg_pre_destroy(struct cgroup *cgroup)
> {
> struct blkcg *blkcg = cgroup_to_blkcg(cgroup);
>
> @@ -622,7 +622,6 @@ static int blkcg_pre_destroy(struct cgroup *cgroup)
> }
>
> spin_unlock_irq(&blkcg->lock);
> - return 0;
> }
>
> static void blkcg_destroy(struct cgroup *cgroup)
> diff --git a/include/linux/cgroup.h b/include/linux/cgroup.h
> index 47868a8..adb2adc 100644
> --- a/include/linux/cgroup.h
> +++ b/include/linux/cgroup.h
> @@ -436,7 +436,7 @@ int cgroup_taskset_size(struct cgroup_taskset *tset);
>
> struct cgroup_subsys {
> struct cgroup_subsys_state *(*create)(struct cgroup *cgrp);
> - int (*pre_destroy)(struct cgroup *cgrp);
> + void (*pre_destroy)(struct cgroup *cgrp);
> void (*destroy)(struct cgroup *cgrp);
> int (*can_attach)(struct cgroup *cgrp, struct cgroup_taskset *tset);
> void (*cancel_attach)(struct cgroup *cgrp, struct cgroup_taskset *tset);
> diff --git a/kernel/cgroup.c b/kernel/cgroup.c
> index c5f6fb2..83cd7d0 100644
> --- a/kernel/cgroup.c
> +++ b/kernel/cgroup.c
> @@ -4054,7 +4054,7 @@ static int cgroup_rmdir(struct inode *unused_dir, struct dentry *dentry)
> mutex_unlock(&cgroup_mutex);
> for_each_subsys(cgrp->root, ss)
> if (ss->pre_destroy)
> - WARN_ON_ONCE(ss->pre_destroy(cgrp));
> + ss->pre_destroy(cgrp);
> mutex_lock(&cgroup_mutex);
>
> /*
> diff --git a/mm/hugetlb_cgroup.c b/mm/hugetlb_cgroup.c
> index dc595c6..0d3a1a3 100644
> --- a/mm/hugetlb_cgroup.c
> +++ b/mm/hugetlb_cgroup.c
> @@ -155,7 +155,7 @@ out:
> * Force the hugetlb cgroup to empty the hugetlb resources by moving them to
> * the parent cgroup.
> */
> -static int hugetlb_cgroup_pre_destroy(struct cgroup *cgroup)
> +static void hugetlb_cgroup_pre_destroy(struct cgroup *cgroup)
> {
> struct hstate *h;
> struct page *page;
> @@ -172,8 +172,6 @@ static int hugetlb_cgroup_pre_destroy(struct cgroup *cgroup)
> }
> cond_resched();
> } while (hugetlb_cgroup_have_usage(cgroup));
> -
> - return 0;
> }
>
> int hugetlb_cgroup_charge_cgroup(int idx, unsigned long nr_pages,
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 6678f99..a1811ce 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -5002,12 +5002,11 @@ free_out:
> return ERR_PTR(error);
> }
>
> -static int mem_cgroup_pre_destroy(struct cgroup *cont)
> +static void mem_cgroup_pre_destroy(struct cgroup *cont)
> {
> struct mem_cgroup *memcg = mem_cgroup_from_cont(cont);
>
> mem_cgroup_reparent_charges(memcg);
> - return 0;
> }
>
> static void mem_cgroup_destroy(struct cgroup *cont)
> --
> 1.7.11.7
>
--
Michal Hocko
SUSE Labs
On Wed 31-10-12 12:44:06, Tejun Heo wrote:
[...]
> diff --git a/kernel/cgroup.c b/kernel/cgroup.c
> index f22e3cd..66204a6 100644
> --- a/kernel/cgroup.c
> +++ b/kernel/cgroup.c
[...]
> @@ -4122,13 +4079,30 @@ static int cgroup_rmdir(struct inode *unused_dir, struct dentry *dentry)
> }
> prepare_to_wait(&cgroup_rmdir_waitq, &wait, TASK_INTERRUPTIBLE);
>
> - /* block new css_tryget() by deactivating refcnt */
> + /*
> + * Block new css_tryget() by deactivating refcnt and mark @cgrp
> + * removed. This makes future css_tryget() and child creation
> + * attempts fail thus maintaining the removal conditions verified
> + * above.
> + */
> for_each_subsys(cgrp->root, ss) {
> struct cgroup_subsys_state *css = cgrp->subsys[ss->subsys_id];
>
> WARN_ON(atomic_read(&css->refcnt) < 0);
> atomic_add(CSS_DEACT_BIAS, &css->refcnt);
> }
> + set_bit(CGRP_REMOVED, &cgrp->flags);
> +
> + /*
> + * Tell subsystems to initiate destruction. pre_destroy() should be
> + * called with cgroup_mutex unlocked. See 3fa59dfbc3 ("cgroup: fix
> + * potential deadlock in pre_destroy") for details.
> + */
> + mutex_unlock(&cgroup_mutex);
> + for_each_subsys(cgrp->root, ss)
> + if (ss->pre_destroy)
> + WARN_ON_ONCE(ss->pre_destroy(cgrp));
Do you think that BUG_ON would be too harsh?
--
Michal Hocko
SUSE Labs
Hey, Michal.
On Wed, Oct 31, 2012 at 10:23:59PM +0100, Michal Hocko wrote:
> > + for_each_subsys(cgrp->root, ss)
> > + if (ss->pre_destroy)
> > + WARN_ON_ONCE(ss->pre_destroy(cgrp));
>
> Do you think that BUG_ON would be too harsh?
Yeah, I do think so. In general, I think the consensus now is to
prefer WARN_ON[_ONCE]() over BUG_ON() whenever possible. It's not
like we can get more information from BUG_ON()s (more likely to get
less reporting actually by taking down the machine). Limping machines
are better than dead ones and there just isn't much to gain here by
killing it.
Thanks.
--
tejun
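To make the WARN_ON_ONCE() vs BUG_ON() trade-off above concrete, here is a
minimal illustration. Only the two macros themselves (from <linux/bug.h>)
are real kernel interfaces; the helper functions and their names are
hypothetical and exist purely for contrast.

#include <linux/bug.h>
#include <linux/types.h>

/* Hypothetical helpers, shown only to contrast the two assertions. */
static void handle_unexpected_warn(bool bad)
{
        /*
         * Prints a warning plus a backtrace the first time @bad is true,
         * then lets the system keep running (and keep reporting).
         */
        WARN_ON_ONCE(bad);
}

static void handle_unexpected_bug(bool bad)
{
        /*
         * Oopses the kernel when @bad is true; depending on configuration
         * this takes the whole machine down, so no further reports arrive.
         */
        BUG_ON(bad);
}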
On Wed 31-10-12 14:27:25, Tejun Heo wrote:
> Hey, Michal.
>
> On Wed, Oct 31, 2012 at 10:23:59PM +0100, Michal Hocko wrote:
> > > + for_each_subsys(cgrp->root, ss)
> > > + if (ss->pre_destroy)
> > > + WARN_ON_ONCE(ss->pre_destroy(cgrp));
> >
> > Do you think that BUG_ON would be too harsh?
>
> Yeah, I do think so. In general, I think the consensus now is to
> prefer WARN_ON[_ONCE]() over BUG_ON() whenever possible. It's not
> like we can get more information from BUG_ON()s (more likely to get
> less reporting actually by taking down the machine). Limping machines
> are better than dead ones and there just isn't much to gain here by
> killing it.
Fair enough
--
Michal Hocko
SUSE Labs
(2012/11/01 5:24), Tejun Heo wrote:
> On Wed, Oct 31, 2012 at 09:14:16PM +0100, Michal Hocko wrote:
>> On Wed 31-10-12 13:11:02, Tejun Heo wrote:
>>> Hello,
>>>
>>> On Wed, Oct 31, 2012 at 1:08 PM, Michal Hocko <[email protected]> wrote:
>>>> local_irq_disable doesn't guarantee atomicity, because other CPUs will
>>>
>>> Maybe it should say atomicity on the local CPU.
>>
>> That would be more clear but being more verbose doesn't hurt either :P
>>
>>>> see the change in steps, so this is a bit misleading. The real reason,
>>>> AFAICS, is that we do not want to hang in css_tryget from IRQ context
>>>> (does this really happen btw.?), which would interrupt cgroup_rmdir
>>>> between the point where we add CSS_DEACT_BIAS and the point where the
>>>> group is marked CGRP_REMOVED. Or am I still missing the point?
>>>
>>> Yeah, that's the correct one. We don't want tryget happening between
>>> DEACT_BIAS and REMOVED as it can hang forever there.
>
> Here's the updated description. git branch updated accordingly.
>
> Thanks.
>
> From: Tejun Heo <[email protected]>
> Subject: cgroup: kill cgroup_subsys->__DEPRECATED_clear_css_refs
>
> 2ef37d3fe4 ("memcg: Simplify mem_cgroup_force_empty_list error
> handling") removed the last user of __DEPRECATED_clear_css_refs. This
> patch removes __DEPRECATED_clear_css_refs and mechanisms to support
> it.
>
> * Conditionals dependent on __DEPRECATED_clear_css_refs removed.
>
> * cgroup_clear_css_refs() can no longer fail. All that needs to be
> done are deactivating refcnts, setting CSS_REMOVED and putting the
> base reference on each css. Remove cgroup_clear_css_refs() and the
> failure path, and open-code the loops into cgroup_rmdir().
>
> This patch keeps the two for_each_subsys() loops separate while open
> coding them. They can be merged now but there are scheduled changes
> which need them to be separate, so keep them separate to reduce the
> amount of churn.
>
> local_irq_save/restore() from cgroup_clear_css_refs() are replaced
> with local_irq_disable/enable() for simplicity. This is safe as
> cgroup_rmdir() is always called with IRQ enabled. Note that this IRQ
> switching is necessary to ensure that css_tryget() isn't called from
> IRQ context on the same CPU while the non-IRQ context is between CSS
> deactivation and setting CSS_REMOVED, as css_tryget() would hang
> forever in that window waiting for the CSS to be re-activated or for
> CSS_REMOVED to be set. This will go away soon.
>
> v2: cgroup_call_pre_destroy() removal dropped per Michal. Commit
> message updated to explain local_irq_disable/enable() conversion.
>
> Signed-off-by: Tejun Heo <[email protected]>
> Reviewed-by: Michal Hocko <[email protected]>
I should have seen this v2 thread first...
Reviewed-by: KAMEZAWA Hiroyuki <[email protected]>
(2012/11/01 4:44), Tejun Heo wrote:
> CSS_REMOVED is one of the several contortions which were necessary to
> support css reference draining on cgroup removal. All css->refcnts
> which need draining should be deactivated and verified to equal zero
> atomically w.r.t. css_tryget(). If any one isn't zero, all refcnts
> need to be re-activated, and css_tryget() must not fail in the
> process.
>
> This was achieved by letting css_tryget() busy-loop until either the
> refcnt is reactivated (failed removal attempt) or CSS_REMOVED is set
> (committing to removal).
>
> Now that css refcnt draining is no longer used, there's no need for
> the atomic rollback mechanism. css_tryget() can simply look at the
> reference count and fail if it's deactivated - it will never be
> re-activated.
>
> This patch removes CSS_REMOVED and updates __css_tryget() to fail if
> the refcnt is deactivated. As deactivation and removal are a single
> step now, they no longer need to be protected against css_tryget()
> happening from irq context. Remove local_irq_disable/enable() from
> cgroup_rmdir().
>
> Note that this removes css_is_removed() whose only user is VM_BUG_ON()
> in memcontrol.c. We can replace it with a check on the refcnt but
> given that the only use case is a debug assert, I think it's better to
> simply unexport it.
>
> v2: Comment updated and explanation on local_irq_disable/enable()
> added per Michal Hocko.
>
> Signed-off-by: Tejun Heo <[email protected]>
> Reviewed-by: Michal Hocko <[email protected]>
> Cc: Johannes Weiner <[email protected]>
> Cc: Balbir Singh <[email protected]>
> Cc: KAMEZAWA Hiroyuki <[email protected]>
Thank you.
Reviewed-by: KAMEZAWA Hiroyuki <[email protected]>
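For a code-level picture of the before/after behavior described in the
commit message above, here is a conceptual paraphrase. It is a sketch
based only on that description, not a copy of the kernel's __css_tryget();
the *_sketch names are placeholders, and "deactivated" below means the
refcnt has been pushed negative by the deactivation bias.

#include <linux/atomic.h>
#include <linux/bitops.h>
#include <linux/cgroup.h>

/* Old scheme: spin until the rmdir attempt either rolls back or commits. */
static bool css_tryget_sketch_old(struct cgroup_subsys_state *css)
{
        while (true) {
                int v = atomic_read(&css->refcnt);

                /* Not deactivated: try to take a reference. */
                if (v >= 0 && atomic_cmpxchg(&css->refcnt, v, v + 1) == v)
                        return true;
                /* Deactivated and committed to removal: fail. */
                if (test_bit(CSS_REMOVED, &css->flags))
                        return false;
                /* Deactivated, but removal may still be rolled back: wait. */
                cpu_relax();
        }
}

/* New scheme: deactivation is final, so a negative refcnt means fail now. */
static bool css_tryget_sketch_new(struct cgroup_subsys_state *css)
{
        while (true) {
                int v = atomic_read(&css->refcnt);

                if (v < 0)
                        return false;   /* being removed, never coming back */
                if (atomic_cmpxchg(&css->refcnt, v, v + 1) == v)
                        return true;
                cpu_relax();            /* lost a race with another getter */
        }
}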
(2012/11/01 4:44), Tejun Heo wrote:
> This patch makes cgroup_create() fail if @parent is marked removed.
> This is to prepare for further updates to cgroup_rmdir() path.
>
> Note that this change isn't strictly necessary. cgroup can only be
> created via mkdir and the removed marking and dentry removal happen
> without releasing cgroup_mutex, so cgroup_create() can never race with
> cgroup_rmdir(). Even after the scheduled updates to cgroup_rmdir(),
> cgroup_mkdir() and cgroup_rmdir() are synchronized by i_mutex
> rendering the added liveliness check unnecessary.
>
> Do it anyway such that locking is contained inside cgroup proper and
> we don't get nasty surprises if we ever grow another caller of
> cgroup_create().
>
> Signed-off-by: Tejun Heo <[email protected]>
> Reviewed-by: Michal Hocko <[email protected]>
Reviewed-by: KAMEZAWA Hiroyuki <[email protected]>
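A rough sketch of the check being added, paraphrased from the commit
message rather than copied from the diff: cgroup_lock_live_group() (the
helper named in the patch title) takes cgroup_mutex and fails if the group
is already marked removed, so cgroup_create() bails out instead of adding
a child under a condemned parent. The -ENODEV value and the _sketch name
are illustrative assumptions, and the sketch is written as if it lived
inside kernel/cgroup.c.

#include <linux/cgroup.h>
#include <linux/errno.h>

static long cgroup_create_sketch(struct cgroup *parent)
{
        /* Locks cgroup_mutex only if @parent is still alive. */
        if (!cgroup_lock_live_group(parent))
                return -ENODEV;

        /* ... allocate the child and link it in under cgroup_mutex ... */

        cgroup_unlock();
        return 0;
}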
(2012/11/01 4:44), Tejun Heo wrote:
> Because ->pre_destroy() could fail and can't be called under
> cgroup_mutex, cgroup destruction did something very ugly.
>
> 1. Grab cgroup_mutex and verify it can be destroyed; fail otherwise.
>
> 2. Release cgroup_mutex and call ->pre_destroy().
>
> 3. Re-grab cgroup_mutex and verify it can still be destroyed; fail
> otherwise.
>
> 4. Continue destroying.
>
> In addition to being ugly, it has always been broken in various ways.
> For example, memcg ->pre_destroy() expects the cgroup to be inactive
> after it's done but tasks can be attached and detached between #2 and
> #3 and the conditions that memcg verified in ->pre_destroy() might no
> longer hold by the time control reaches #3.
>
> Now that ->pre_destroy() is no longer allowed to fail, we can switch
> to the following.
>
> 1. Grab cgroup_mutex and verify it can be destroyed; fail otherwise.
>
> 2. Deactivate CSS's and mark the cgroup removed thus preventing any
> further operations which can invalidate the verification from #1.
>
> 3. Release cgroup_mutex and call ->pre_destroy().
>
> 4. Re-grab cgroup_mutex and continue destroying.
>
> After this change, controllers can safely assume that ->pre_destroy()
> will be called only once for a given cgroup and, once
> ->pre_destroy() is called, the cgroup will stay dormant till it's
> destroyed.
>
> This removes the only reason ->pre_destroy() can fail - new task being
> attached or child cgroup being created in between. The error-out path is
> removed and the ->pre_destroy() invocation is open-coded in
> cgroup_rmdir().
>
> v2: cgroup_call_pre_destroy() removal moved to this patch per Michal.
> Commit message updated per Glauber.
>
> Signed-off-by: Tejun Heo <[email protected]>
> Reviewed-by: Michal Hocko <[email protected]>
> Cc: Glauber Costa <[email protected]>
Reviewed-by: KAMEZAWA Hiroyuki <[email protected]>
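Putting the four steps above together with the hunk quoted earlier in the
thread, the new removal flow looks roughly like the sketch below. It is a
condensed paraphrase written as if inside kernel/cgroup.c, not the complete
cgroup_rmdir(): the real code also checks css reference counts, deals with
the waitqueue and releases the dentry, and this sketch assumes the
end-of-series void ->pre_destroy().

static int cgroup_rmdir_sketch(struct cgroup *cgrp)
{
        struct cgroup_subsys *ss;

        mutex_lock(&cgroup_mutex);

        /* 1. Verify the cgroup can be destroyed; fail otherwise. */
        if (atomic_read(&cgrp->count) || !list_empty(&cgrp->children)) {
                mutex_unlock(&cgroup_mutex);
                return -EBUSY;
        }

        /*
         * 2. Deactivate the CSSes and mark the cgroup removed so that
         *    css_tryget(), task attach and child creation all fail from
         *    here on, keeping the verification above valid.
         */
        for_each_subsys(cgrp->root, ss)
                atomic_add(CSS_DEACT_BIAS, &cgrp->subsys[ss->subsys_id]->refcnt);
        set_bit(CGRP_REMOVED, &cgrp->flags);

        /* 3. Call ->pre_destroy() with cgroup_mutex released. */
        mutex_unlock(&cgroup_mutex);
        for_each_subsys(cgrp->root, ss)
                if (ss->pre_destroy)
                        ss->pre_destroy(cgrp);

        /* 4. Re-grab the mutex and finish tearing the cgroup down. */
        mutex_lock(&cgroup_mutex);
        /* ... put base css refs, unlink from the hierarchy, d_delete() ... */
        mutex_unlock(&cgroup_mutex);
        return 0;
}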
(2012/11/01 4:44), Tejun Heo wrote:
> CGRP_WAIT_ON_RMDIR is another kludge which was added to make cgroup
> destruction rollback somewhat work. cgroup_rmdir() used to drain
> CSS references, and CGRP_WAIT_ON_RMDIR and the associated waitqueue and
> helpers were used to allow the task performing rmdir to wait for the
> next relevant event.
>
> Unfortunately, the wait is visible to controllers too and the
> mechanism got exposed to memcg by 887032670d ("cgroup avoid permanent
> sleep at rmdir").
>
> Now that the draining and retries are gone, CGRP_WAIT_ON_RMDIR is
> unnecessary. Remove it and all the mechanisms supporting it. Note
> that the memcontrol.c changes are essentially a revert of 887032670d
> ("cgroup avoid permanent sleep at rmdir").
>
> Signed-off-by: Tejun Heo <[email protected]>
> Reviewed-by: Michal Hocko <[email protected]>
> Cc: Balbir Singh <[email protected]>
> Cc: KAMEZAWA Hiroyuki <[email protected]>
Please forget my comments on v1... But:
If swap-in makes a charge before the css is deactivated, pre_destroy()->force_empty()
will keep looping until it finds the newly charged page on the LRU. I think
force_empty() will find it via lru_drain() and can bring the res_counter back to 0.
I think we need some testing with swap-in handling. Anyway,
Reviewed-by: KAMEZAWA Hiroyuki <[email protected]>
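The loop shape being referred to is roughly the following; this is a
paraphrased sketch of the idea (written as if inside mm/memcontrol.c), not
the actual source, and reparent_charges_sketch is a placeholder name. The
point is that lru_add_drain_all() makes a freshly charged page visible on
the LRU so the next reparenting pass can find it, and the loop only exits
once the usage counter drops to zero.

static void reparent_charges_sketch(struct mem_cgroup *memcg)
{
        do {
                /*
                 * Flush per-cpu pagevecs so a page charged just before
                 * the css was deactivated shows up on the LRU lists.
                 */
                lru_add_drain_all();

                /* ... walk each node/zone/LRU and move charges to the parent ... */

                cond_resched();
        } while (res_counter_read_u64(&memcg->res, RES_USAGE) > 0);
}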
(2012/11/01 4:44), Tejun Heo wrote:
> From: Michal Hocko <[email protected]>
>
> Now that pre_destroy callbacks are called from a context where no task
> can be attached to the group and no child group can be added, there is
> no way left for mem_cgroup_pre_destroy to fail.
> mem_cgroup_pre_destroy also doesn't have to take a reference to memcg's
> css because all css's are already marked dead.
>
> tj: Remove now unused local variable @cgrp from
> mem_cgroup_reparent_charges().
>
> Signed-off-by: Michal Hocko <[email protected]>
> Reviewed-by: Glauber Costa <[email protected]>
> Signed-off-by: Tejun Heo <[email protected]>
Acked-by: KAMEZAWA Hiroyuki <[email protected]>
(2012/11/01 4:44), Tejun Heo wrote:
> From: Michal Hocko <[email protected]>
>
> Now that pre_destroy callbacks are called from a context where no task
> can be attached to the group and no child group can be added, there is
> no way left for hugetlb_pre_destroy to fail.
>
> Signed-off-by: Michal Hocko <[email protected]>
> Reviewed-by: Tejun Heo <[email protected]>
> Reviewed-by: Glauber Costa <[email protected]>
> Signed-off-by: Tejun Heo <[email protected]>
Acked-by: KAMEZAWA Hiroyuki <[email protected]>
(2012/11/01 4:44), Tejun Heo wrote:
> All ->pre_destroy() implementations return 0 now, which is the only
> allowed return value. Make it return void.
>
> Signed-off-by: Tejun Heo <[email protected]>
> Cc: Michal Hocko <[email protected]>
> Cc: Balbir Singh <[email protected]>
> Cc: KAMEZAWA Hiroyuki <[email protected]>
> Cc: Vivek Goyal <[email protected]>
Thank you !
Acked-by: KAMEZAWA Hiroyuki <[email protected]>
On 2012/11/1 3:44, Tejun Heo wrote:
> CSS_REMOVED is one of the several contortions which were necessary to
> support css reference draining on cgroup removal. All css->refcnts
> which need draining should be deactivated and verified to equal zero
> atomically w.r.t. css_tryget(). If any one isn't zero, all refcnts
> need to be re-activated, and css_tryget() must not fail in the
> process.
>
> This was achieved by letting css_tryget() busy-loop until either the
> refcnt is reactivated (failed removal attempt) or CSS_REMOVED is set
> (committing to removal).
>
> Now that css refcnt draining is no longer used, there's no need for
> the atomic rollback mechanism. css_tryget() can simply look at the
> reference count and fail if it's deactivated - it will never be
> re-activated.
>
> This patch removes CSS_REMOVED and updates __css_tryget() to fail if
> the refcnt is deactivated. As deactivation and removal are a single
> step now, they no longer need to be protected against css_tryget()
> happening from irq context. Remove local_irq_disable/enable() from
> cgroup_rmdir().
>
> Note that this removes css_is_removed() whose only user is VM_BUG_ON()
> in memcontrol.c. We can replace it with a check on the refcnt but
> given that the only use case is a debug assert, I think it's better to
> simply unexport it.
>
> v2: Comment updated and explanation on local_irq_disable/enable()
> added per Michal Hocko.
>
> Signed-off-by: Tejun Heo <[email protected]>
> Reviewed-by: Michal Hocko <[email protected]>
> Cc: Johannes Weiner <[email protected]>
> Cc: Balbir Singh <[email protected]>
> Cc: KAMEZAWA Hiroyuki <[email protected]>
Acked-by: Li Zefan <[email protected]>
On 2012/11/1 3:44, Tejun Heo wrote:
> 2ef37d3fe4 ("memcg: Simplify mem_cgroup_force_empty_list error
> handling") removed the last user of __DEPRECATED_clear_css_refs. This
> patch removes __DEPRECATED_clear_css_refs and mechanisms to support
> it.
>
> * Conditionals dependent on __DEPRECATED_clear_css_refs removed.
>
> * cgroup_clear_css_refs() can no longer fail. All that needs to be
> done are deactivating refcnts, setting CSS_REMOVED and putting the
> base reference on each css. Remove cgroup_clear_css_refs() and the
> failure path, and open-code the loops into cgroup_rmdir().
>
> This patch keeps the two for_each_subsys() loops separate while open
> coding them. They can be merged now but there are scheduled changes
> which need them to be separate, so keep them separate to reduce the
> amount of churn.
>
> local_irq_save/restore() from cgroup_clear_css_refs() are replaced
> with local_irq_disable/enable() for simplicity. This is safe as
> cgroup_rmdir() is always called with IRQ enabled. Note that this IRQ
> switching is necessary to make CSS deactivation and CSS_REMOVED
> assertion atomic against css_tryget() and will be removed by future
> changes.
>
> v2: cgroup_call_pre_destroy() removal dropped per Michal. Commit
> message updated to explain local_irq_disable/enable() conversion.
>
> Signed-off-by: Tejun Heo <[email protected]>
> Reviewed-by: Michal Hocko <[email protected]>
Acked-by: Li Zefan <[email protected]>
On 2012/11/1 3:44, Tejun Heo wrote:
> This patch makes cgroup_create() fail if @parent is marked removed.
> This is to prepare for further updates to cgroup_rmdir() path.
>
> Note that this change isn't strictly necessary. cgroup can only be
> created via mkdir and the removed marking and dentry removal happen
> without releasing cgroup_mutex, so cgroup_create() can never race with
> cgroup_rmdir(). Even after the scheduled updates to cgroup_rmdir(),
> cgroup_mkdir() and cgroup_rmdir() are synchronized by i_mutex
> rendering the added liveliness check unnecessary.
>
> Do it anyway such that locking is contained inside cgroup proper and
> we don't get nasty surprises if we ever grow another caller of
> cgroup_create().
>
> Signed-off-by: Tejun Heo <[email protected]>
> Reviewed-by: Michal Hocko <[email protected]>
Acked-by: Li Zefan <[email protected]>
On 2012/11/1 3:44, Tejun Heo wrote:
> Because ->pre_destroy() could fail and can't be called under
> cgroup_mutex, cgroup destruction did something very ugly.
>
> 1. Grab cgroup_mutex and verify it can be destroyed; fail otherwise.
>
> 2. Release cgroup_mutex and call ->pre_destroy().
>
> 3. Re-grab cgroup_mutex and verify it can still be destroyed; fail
> otherwise.
>
> 4. Continue destroying.
>
> In addition to being ugly, it has always been broken in various ways.
> For example, memcg ->pre_destroy() expects the cgroup to be inactive
> after it's done but tasks can be attached and detached between #2 and
> #3 and the conditions that memcg verified in ->pre_destroy() might no
> longer hold by the time control reaches #3.
>
> Now that ->pre_destroy() is no longer allowed to fail, we can switch
> to the following.
>
> 1. Grab cgroup_mutex and verify it can be destroyed; fail otherwise.
>
> 2. Deactivate CSS's and mark the cgroup removed thus preventing any
> further operations which can invalidate the verification from #1.
>
> 3. Release cgroup_mutex and call ->pre_destroy().
>
> 4. Re-grab cgroup_mutex and continue destroying.
>
> After this change, controllers can safely assume that ->pre_destroy()
> will be called only once for a given cgroup and, once
> ->pre_destroy() is called, the cgroup will stay dormant till it's
> destroyed.
>
> This removes the only reason ->pre_destroy() can fail - new task being
> attached or child cgroup being created in between. The error-out path is
> removed and the ->pre_destroy() invocation is open-coded in
> cgroup_rmdir().
>
> v2: cgroup_call_pre_destroy() removal moved to this patch per Michal.
> Commit message updated per Glauber.
>
> Signed-off-by: Tejun Heo <[email protected]>
> Reviewed-by: Michal Hocko <[email protected]>
> Cc: Glauber Costa <[email protected]>
Acked-by: Li Zefan <[email protected]>
On 2012/11/1 3:44, Tejun Heo wrote:
> CGRP_WAIT_ON_RMDIR is another kludge which was added to make cgroup
> destruction rollback somewhat work. cgroup_rmdir() used to drain
> CSS references, and CGRP_WAIT_ON_RMDIR and the associated waitqueue and
> helpers were used to allow the task performing rmdir to wait for the
> next relevant event.
>
> Unfortunately, the wait is visible to controllers too and the
> mechanism got exposed to memcg by 887032670d ("cgroup avoid permanent
> sleep at rmdir").
>
> Now that the draining and retries are gone, CGRP_WAIT_ON_RMDIR is
> unnecessary. Remove it and all the mechanisms supporting it. Note
> that the memcontrol.c changes are essentially a revert of 887032670d
> ("cgroup avoid permanent sleep at rmdir").
>
> Signed-off-by: Tejun Heo <[email protected]>
> Reviewed-by: Michal Hocko <[email protected]>
> Cc: Balbir Singh <[email protected]>
> Cc: KAMEZAWA Hiroyuki <[email protected]>
> ---
> include/linux/cgroup.h | 21 ---------------------
> kernel/cgroup.c | 51 --------------------------------------------------
> mm/memcontrol.c | 24 +-----------------------
> 3 files changed, 1 insertion(+), 95 deletions(-)
Acked-by: Li Zefan <[email protected]>
On 2012/11/1 3:44, Tejun Heo wrote:
> All ->pre_destroy() implementations return 0 now, which is the only
> allowed return value. Make it return void.
>
> Signed-off-by: Tejun Heo <[email protected]>
> Cc: Michal Hocko <[email protected]>
> Cc: Balbir Singh <[email protected]>
> Cc: KAMEZAWA Hiroyuki <[email protected]>
> Cc: Vivek Goyal <[email protected]>
Acked-by: Li Zefan <[email protected]>
Applied to cgroup/cgroup-rmdir-updates and then pulled into
cgroup/for-3.8 w/ Li's acks added.
Thanks.
--
tejun
On Mon 05-11-12 09:30:24, Tejun Heo wrote:
> Applied to cgroup/cgroup-rmdir-updates and then pulled into
> cgroup/for-3.8 w/ Li's acks added.
OK, I have merged it into -mm git tree.
Thanks
--
Michal Hocko
SUSE Labs