When an inode gets dirty for the first time, it's associated
with a wb structure (see __inode_attach_wb()). It can later be
switched to another wb (if e.g. some other cgroup is writing a lot of
data to the same inode), but otherwise stays attached to the original
wb until being reclaimed.
The problem is that the wb structure holds a reference to the original
memory and blkcg cgroups. So if an inode was dirty once and is later
actively used in read-only mode, it has a good chance of pinning down
the original memory and blkcg cgroups forever. This is often the case with
services that bring in data for other services, e.g. updating some rpm
packages.
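Schematically, the pinning chain looks like this (a simplified sketch; the
field names exist in the kernel sources, but the refcounting details are
omitted):

/* illustrative sketch only: the pointer chain that pins dying cgroups */
static void show_pinning_chain(struct inode *inode)
{
	/* set on first dirtying by __inode_attach_wb() */
	struct bdi_writeback *wb = inode->i_wb;

	/* the cgwb holds css references for its whole lifetime */
	struct cgroup_subsys_state *memcg_css = wb->memcg_css;
	struct cgroup_subsys_state *blkcg_css = wb->blkcg_css;

	/*
	 * As long as the inode stays attached to wb, neither the memcg
	 * nor the blkcg behind these css pointers can be fully released.
	 */
}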
In real life it becomes a problem due to the large size of the memcg
structure, which can easily be 1000x larger than an inode. Also, a
really large number of dying cgroups can raise different scalability
issues, e.g. making memory reclaim costly and less effective.
To solve the problem, inodes should eventually be detached from the
corresponding writeback structure. It's inefficient to do it after
every writeback completion. Instead it can be done whenever the
original memory cgroup is offlined and the writeback structure is getting
killed. Scanning over a (potentially long) list of inodes and detaching
them from the writeback structure can take quite some time. To avoid
scanning all inodes, attached inodes are kept on a new list (b_attached).
To make it less noticeable to the user, the scanning and switching are performed
from a work context.
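Condensed, the mechanism looks roughly like this (an illustrative sketch;
the real code, with proper locking and batching, is in the patches below):

/* a condensed sketch; cgwb_kill() puts the dying wb on offline_cgwbs
 * and queues cleanup_offline_cgwbs_work on the system_unbound_wq */
static void cleanup_offline_cgwb_sketch(struct bdi_writeback *wb)
{
	struct inode *inode;

	/* from the work context: walk the attached inodes */
	list_for_each_entry(inode, &wb->b_attached, i_io_list) {
		/*
		 * Switch the inode to the bdi's root wb, dropping the
		 * references that pin the dying memcg and blkcg.
		 */
	}
}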
Big thanks to Jan Kara, Dennis Zhou and Hillf Danton for their ideas and
contribution to this patchset.
v7:
- shared locking for multiple inode switching
- introduced inode_prepare_wbs_switch() helper
- extended the pre-switch inode check for I_WILL_FREE
- added comments here and there
v6:
- extended and reused wbs switching functionality to switch inodes
on cgwb cleanup
- fixed offline_list handling
- switched to the unbound_wq
- other minor fixes
v5:
- switch inodes to bdi->wb instead of zeroing inode->i_wb
- split the single patch into two
- only cgwbs maintain lists of attached inodes
- added cond_resched()
- fixed !CONFIG_CGROUP_WRITEBACK handling
- extended the list of prohibited inode flags
- other small fixes
Roman Gushchin (6):
writeback, cgroup: do not switch inodes with I_WILL_FREE flag
writeback, cgroup: switch to rcu_work API in inode_switch_wbs()
writeback, cgroup: keep list of inodes attached to bdi_writeback
writeback, cgroup: split out the functional part of
inode_switch_wbs_work_fn()
writeback, cgroup: support switching multiple inodes at once
writeback, cgroup: release dying cgwbs by switching attached inodes
fs/fs-writeback.c | 302 +++++++++++++++++++++----------
include/linux/backing-dev-defs.h | 20 +-
include/linux/writeback.h | 1 +
mm/backing-dev.c | 69 ++++++-
4 files changed, 293 insertions(+), 99 deletions(-)
--
2.31.1
Split out the functional part of the inode_switch_wbs_work_fn()
function as inode_do_switch_wbs() to reuse it later for switching
inodes attached to dying cgwbs.
This commit doesn't bring any functional changes.
Signed-off-by: Roman Gushchin <[email protected]>
Reviewed-by: Jan Kara <[email protected]>
---
fs/fs-writeback.c | 19 +++++++++++--------
1 file changed, 11 insertions(+), 8 deletions(-)
diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
index f0dfcd08073e..d46cdeeb6797 100644
--- a/fs/fs-writeback.c
+++ b/fs/fs-writeback.c
@@ -351,15 +351,12 @@ static void bdi_up_write_wb_switch_rwsem(struct backing_dev_info *bdi)
up_write(&bdi->wb_switch_rwsem);
}
-static void inode_switch_wbs_work_fn(struct work_struct *work)
+static void inode_do_switch_wbs(struct inode *inode,
+ struct bdi_writeback *new_wb)
{
- struct inode_switch_wbs_context *isw =
- container_of(to_rcu_work(work), struct inode_switch_wbs_context, work);
- struct inode *inode = isw->inode;
struct backing_dev_info *bdi = inode_to_bdi(inode);
struct address_space *mapping = inode->i_mapping;
struct bdi_writeback *old_wb = inode->i_wb;
- struct bdi_writeback *new_wb = isw->new_wb;
XA_STATE(xas, &mapping->i_pages, 0);
struct page *page;
bool switched = false;
@@ -470,11 +467,17 @@ static void inode_switch_wbs_work_fn(struct work_struct *work)
wb_wakeup(new_wb);
wb_put(old_wb);
}
- wb_put(new_wb);
+}
- iput(inode);
- kfree(isw);
+static void inode_switch_wbs_work_fn(struct work_struct *work)
+{
+ struct inode_switch_wbs_context *isw =
+ container_of(to_rcu_work(work), struct inode_switch_wbs_context, work);
+ inode_do_switch_wbs(isw->inode, isw->new_wb);
+ wb_put(isw->new_wb);
+ iput(isw->inode);
+ kfree(isw);
atomic_dec(&isw_nr_in_flight);
}
--
2.31.1
Currently there is no way to iterate over inodes attached to a
specific cgwb structure. This limits the ability to efficiently
reclaim the writeback structure itself and the associated memory and
block cgroup structures without scanning all inodes belonging to a sb,
which can be prohibitively expensive.
While an inode is dirty or in active writeback, it belongs to one of the
bdi_writeback's io lists: b_dirty, b_io, b_more_io or b_dirty_time.
Once cleaned up, it's removed from all io lists. So inode->i_io_list
can be reused to maintain the list of inodes attached to a
bdi_writeback structure.
This patch introduces a new wb->b_attached list, which contains all
inodes that were dirty at least once and are attached to the given
cgwb. Inodes attached to the root bdi_writeback structure are never
placed on such a list. The following patch will use this list to try to
release cgwb structures more efficiently.
Suggested-by: Jan Kara <[email protected]>
Signed-off-by: Roman Gushchin <[email protected]>
Reviewed-by: Jan Kara <[email protected]>
---
fs/fs-writeback.c | 93 ++++++++++++++++++++------------
include/linux/backing-dev-defs.h | 1 +
mm/backing-dev.c | 2 +
3 files changed, 62 insertions(+), 34 deletions(-)
diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
index 9f378a670db4..f0dfcd08073e 100644
--- a/fs/fs-writeback.c
+++ b/fs/fs-writeback.c
@@ -131,25 +131,6 @@ static bool inode_io_list_move_locked(struct inode *inode,
return false;
}
-/**
- * inode_io_list_del_locked - remove an inode from its bdi_writeback IO list
- * @inode: inode to be removed
- * @wb: bdi_writeback @inode is being removed from
- *
- * Remove @inode which may be on one of @wb->b_{dirty|io|more_io} lists and
- * clear %WB_has_dirty_io if all are empty afterwards.
- */
-static void inode_io_list_del_locked(struct inode *inode,
- struct bdi_writeback *wb)
-{
- assert_spin_locked(&wb->list_lock);
- assert_spin_locked(&inode->i_lock);
-
- inode->i_state &= ~I_SYNC_QUEUED;
- list_del_init(&inode->i_io_list);
- wb_io_lists_depopulated(wb);
-}
-
static void wb_wakeup(struct bdi_writeback *wb)
{
spin_lock_bh(&wb->work_lock);
@@ -278,6 +259,28 @@ void __inode_attach_wb(struct inode *inode, struct page *page)
}
EXPORT_SYMBOL_GPL(__inode_attach_wb);
+/**
+ * inode_cgwb_move_to_attached - put the inode onto wb->b_attached list
+ * @inode: inode of interest with i_lock held
+ * @wb: target bdi_writeback
+ *
+ * Remove the inode from wb's io lists and if necessary put onto b_attached
+ * list. Only inodes attached to cgwb's are kept on this list.
+ */
+static void inode_cgwb_move_to_attached(struct inode *inode,
+ struct bdi_writeback *wb)
+{
+ assert_spin_locked(&wb->list_lock);
+ assert_spin_locked(&inode->i_lock);
+
+ inode->i_state &= ~I_SYNC_QUEUED;
+ if (wb != &wb->bdi->wb)
+ list_move(&inode->i_io_list, &wb->b_attached);
+ else
+ list_del_init(&inode->i_io_list);
+ wb_io_lists_depopulated(wb);
+}
+
/**
* locked_inode_to_wb_and_lock_list - determine a locked inode's wb and lock it
* @inode: inode of interest with i_lock held
@@ -418,21 +421,28 @@ static void inode_switch_wbs_work_fn(struct work_struct *work)
wb_get(new_wb);
/*
- * Transfer to @new_wb's IO list if necessary. The specific list
- * @inode was on is ignored and the inode is put on ->b_dirty which
- * is always correct including from ->b_dirty_time. The transfer
- * preserves @inode->dirtied_when ordering.
+ * Transfer to @new_wb's IO list if necessary. If the @inode is dirty,
+ * the specific list @inode was on is ignored and the @inode is put on
+ * ->b_dirty which is always correct including from ->b_dirty_time.
+ * The transfer preserves @inode->dirtied_when ordering. If the @inode
+ * was clean, it means it was on the b_attached list, so move it onto
+ * the b_attached list of @new_wb.
*/
if (!list_empty(&inode->i_io_list)) {
- struct inode *pos;
-
- inode_io_list_del_locked(inode, old_wb);
inode->i_wb = new_wb;
- list_for_each_entry(pos, &new_wb->b_dirty, i_io_list)
- if (time_after_eq(inode->dirtied_when,
- pos->dirtied_when))
- break;
- inode_io_list_move_locked(inode, new_wb, pos->i_io_list.prev);
+
+ if (inode->i_state & I_DIRTY_ALL) {
+ struct inode *pos;
+
+ list_for_each_entry(pos, &new_wb->b_dirty, i_io_list)
+ if (time_after_eq(inode->dirtied_when,
+ pos->dirtied_when))
+ break;
+ inode_io_list_move_locked(inode, new_wb,
+ pos->i_io_list.prev);
+ } else {
+ inode_cgwb_move_to_attached(inode, new_wb);
+ }
} else {
inode->i_wb = new_wb;
}
@@ -1014,6 +1024,17 @@ fs_initcall(cgroup_writeback_init);
static void bdi_down_write_wb_switch_rwsem(struct backing_dev_info *bdi) { }
static void bdi_up_write_wb_switch_rwsem(struct backing_dev_info *bdi) { }
+static void inode_cgwb_move_to_attached(struct inode *inode,
+ struct bdi_writeback *wb)
+{
+ assert_spin_locked(&wb->list_lock);
+ assert_spin_locked(&inode->i_lock);
+
+ inode->i_state &= ~I_SYNC_QUEUED;
+ list_del_init(&inode->i_io_list);
+ wb_io_lists_depopulated(wb);
+}
+
static struct bdi_writeback *
locked_inode_to_wb_and_lock_list(struct inode *inode)
__releases(&inode->i_lock)
@@ -1114,7 +1135,11 @@ void inode_io_list_del(struct inode *inode)
wb = inode_to_wb_and_lock_list(inode);
spin_lock(&inode->i_lock);
- inode_io_list_del_locked(inode, wb);
+
+ inode->i_state &= ~I_SYNC_QUEUED;
+ list_del_init(&inode->i_io_list);
+ wb_io_lists_depopulated(wb);
+
spin_unlock(&inode->i_lock);
spin_unlock(&wb->list_lock);
}
@@ -1427,7 +1452,7 @@ static void requeue_inode(struct inode *inode, struct bdi_writeback *wb,
inode->i_state &= ~I_SYNC_QUEUED;
} else {
/* The inode is clean. Remove from writeback lists. */
- inode_io_list_del_locked(inode, wb);
+ inode_cgwb_move_to_attached(inode, wb);
}
}
@@ -1579,7 +1604,7 @@ static int writeback_single_inode(struct inode *inode,
* responsible for the writeback lists.
*/
if (!(inode->i_state & I_DIRTY_ALL))
- inode_io_list_del_locked(inode, wb);
+ inode_cgwb_move_to_attached(inode, wb);
spin_unlock(&wb->list_lock);
inode_sync_complete(inode);
out:
diff --git a/include/linux/backing-dev-defs.h b/include/linux/backing-dev-defs.h
index fff9367a6348..e5dc238ebe4f 100644
--- a/include/linux/backing-dev-defs.h
+++ b/include/linux/backing-dev-defs.h
@@ -154,6 +154,7 @@ struct bdi_writeback {
struct cgroup_subsys_state *blkcg_css; /* and blkcg */
struct list_head memcg_node; /* anchored at memcg->cgwb_list */
struct list_head blkcg_node; /* anchored at blkcg->cgwb_list */
+ struct list_head b_attached; /* attached inodes, protected by list_lock */
union {
struct work_struct release_work;
diff --git a/mm/backing-dev.c b/mm/backing-dev.c
index 576220acd686..54c5dc4b8c24 100644
--- a/mm/backing-dev.c
+++ b/mm/backing-dev.c
@@ -396,6 +396,7 @@ static void cgwb_release_workfn(struct work_struct *work)
fprop_local_destroy_percpu(&wb->memcg_completions);
percpu_ref_exit(&wb->refcnt);
wb_exit(wb);
+ WARN_ON_ONCE(!list_empty(&wb->b_attached));
kfree_rcu(wb, rcu);
}
@@ -472,6 +473,7 @@ static int cgwb_create(struct backing_dev_info *bdi,
wb->memcg_css = memcg_css;
wb->blkcg_css = blkcg_css;
+ INIT_LIST_HEAD(&wb->b_attached);
INIT_WORK(&wb->release_work, cgwb_release_workfn);
set_bit(WB_registered, &wb->state);
--
2.31.1
If an inode's state has the I_WILL_FREE flag set, the inode will be
freed soon, so there is no point in trying to switch the inode
to a different cgwb.
I_WILL_FREE has been ignored since the introduction of the inode switching,
so it apparently doesn't lead to any noticeable issues for users. This is
why the patch is not intended for a stable backport.
Suggested-by: Jan Kara <[email protected]>
Signed-off-by: Roman Gushchin <[email protected]>
---
fs/fs-writeback.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
index e91980f49388..bd99890599e0 100644
--- a/fs/fs-writeback.c
+++ b/fs/fs-writeback.c
@@ -389,10 +389,10 @@ static void inode_switch_wbs_work_fn(struct work_struct *work)
xa_lock_irq(&mapping->i_pages);
/*
- * Once I_FREEING is visible under i_lock, the eviction path owns
- * the inode and we shouldn't modify ->i_io_list.
+ * Once I_FREEING or I_WILL_FREE are visible under i_lock, the eviction
+ * path owns the inode and we shouldn't modify ->i_io_list.
*/
- if (unlikely(inode->i_state & I_FREEING))
+ if (unlikely(inode->i_state & (I_FREEING | I_WILL_FREE)))
goto skip_switch;
trace_inode_switch_wbs(inode, old_wb, new_wb);
@@ -517,7 +517,7 @@ static void inode_switch_wbs(struct inode *inode, int new_wb_id)
/* while holding I_WB_SWITCH, no one else can update the association */
spin_lock(&inode->i_lock);
if (!(inode->i_sb->s_flags & SB_ACTIVE) ||
- inode->i_state & (I_WB_SWITCH | I_FREEING) ||
+ inode->i_state & (I_WB_SWITCH | I_FREEING | I_WILL_FREE) ||
inode_to_wb(inode) == isw->new_wb) {
spin_unlock(&inode->i_lock);
goto out_free;
--
2.31.1
An inode's wb switching requires two steps divided by an RCU grace
period. It's currently implemented as an RCU callback,
inode_switch_wbs_rcu_fn(), which schedules inode_switch_wbs_work_fn()
as a work item.
Switching to the rcu_work API allows doing the same in a cleaner and
slightly shorter form.
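For reference, a minimal usage sketch of the rcu_work API (illustrative
names, not taken from this patch):

#include <linux/slab.h>
#include <linux/workqueue.h>

struct my_ctx {
	struct rcu_work work;
	/* ... payload ... */
};

static void my_work_fn(struct work_struct *work)
{
	struct my_ctx *ctx =
		container_of(to_rcu_work(work), struct my_ctx, work);

	/* runs in process context, after an RCU grace period has elapsed */
	kfree(ctx);
}

static void my_schedule(struct my_ctx *ctx, struct workqueue_struct *wq)
{
	/* replaces the separate call_rcu() + INIT_WORK() + queue_work() */
	INIT_RCU_WORK(&ctx->work, my_work_fn);
	queue_rcu_work(wq, &ctx->work);
}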
Signed-off-by: Roman Gushchin <[email protected]>
Reviewed-by: Jan Kara <[email protected]>
---
fs/fs-writeback.c | 18 ++++--------------
1 file changed, 4 insertions(+), 14 deletions(-)
diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
index bd99890599e0..9f378a670db4 100644
--- a/fs/fs-writeback.c
+++ b/fs/fs-writeback.c
@@ -335,8 +335,7 @@ struct inode_switch_wbs_context {
struct inode *inode;
struct bdi_writeback *new_wb;
- struct rcu_head rcu_head;
- struct work_struct work;
+ struct rcu_work work;
};
static void bdi_down_write_wb_switch_rwsem(struct backing_dev_info *bdi)
@@ -352,7 +351,7 @@ static void bdi_up_write_wb_switch_rwsem(struct backing_dev_info *bdi)
static void inode_switch_wbs_work_fn(struct work_struct *work)
{
struct inode_switch_wbs_context *isw =
- container_of(work, struct inode_switch_wbs_context, work);
+ container_of(to_rcu_work(work), struct inode_switch_wbs_context, work);
struct inode *inode = isw->inode;
struct backing_dev_info *bdi = inode_to_bdi(inode);
struct address_space *mapping = inode->i_mapping;
@@ -469,16 +468,6 @@ static void inode_switch_wbs_work_fn(struct work_struct *work)
atomic_dec(&isw_nr_in_flight);
}
-static void inode_switch_wbs_rcu_fn(struct rcu_head *rcu_head)
-{
- struct inode_switch_wbs_context *isw = container_of(rcu_head,
- struct inode_switch_wbs_context, rcu_head);
-
- /* needs to grab bh-unsafe locks, bounce to work item */
- INIT_WORK(&isw->work, inode_switch_wbs_work_fn);
- queue_work(isw_wq, &isw->work);
-}
-
/**
* inode_switch_wbs - change the wb association of an inode
* @inode: target inode
@@ -534,7 +523,8 @@ static void inode_switch_wbs(struct inode *inode, int new_wb_id)
* lock so that stat transfer can synchronize against them.
* Let's continue after I_WB_SWITCH is guaranteed to be visible.
*/
- call_rcu(&isw->rcu_head, inode_switch_wbs_rcu_fn);
+ INIT_RCU_WORK(&isw->work, inode_switch_wbs_work_fn);
+ queue_rcu_work(isw_wq, &isw->work);
atomic_inc(&isw_nr_in_flight);
return;
--
2.31.1
Asynchronously try to release dying cgwbs by switching attached inodes
to the bdi's wb. It helps to get rid of the per-cgroup writeback
structures themselves and of the pinned memory and block cgroups, which
are significantly larger structures (mostly due to large per-cpu
statistics data). This prevents memory waste and helps to avoid
different scalability problems caused by large piles of dying cgroups.
Reuse the existing mechanism of inode switching used for foreign inode
detection. To speed things up, batch up to 115 inode switches in a
single operation (the maximum number is selected so that the resulting
struct inode_switch_wbs_context can fit into 1024 bytes). Because
every switch consists of two steps divided by an RCU grace period,
it would be too slow without batching. Please note that the whole
batch counts as a single operation (when increasing/decreasing
isw_nr_in_flight). This allows umounting to keep working (by flushing the
switching queue), while preventing cleanups from consuming the whole
switching quota and effectively blocking the frn switching.
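As a rough check of the limit (assuming 64-bit pointers and no debug
options; exact struct sizes are config-dependent): sizeof(struct rcu_work)
plus the new_wb pointer is about 64 bytes, and 115 * 8 = 920 bytes for the
inode array, so roughly 984 <= 1024 bytes, i.e. the kmalloc-1k slab. A
hypothetical build-time assertion, placed in function scope:

	/* hypothetical check; sizes assume a typical 64-bit config */
	BUILD_BUG_ON(sizeof(struct inode_switch_wbs_context) +
		     WB_MAX_INODES_PER_ISW * sizeof(struct inode *) > 1024);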
A cgwb cleanup operation can fail for various reasons (e.g. not
enough memory, the cgwb having in-flight/pending io, an attached inode
in the wrong state, etc.). In this case the next scheduled cleanup will
make a new attempt. An attempt is made each time a new cgwb is offlined
(in other words, a memcg and/or a blkcg is deleted by the user). In the
future an additional attempt scheduled by a timer can be implemented.
Signed-off-by: Roman Gushchin <[email protected]>
---
fs/fs-writeback.c | 93 ++++++++++++++++++++++++++++----
include/linux/backing-dev-defs.h | 1 +
include/linux/writeback.h | 1 +
mm/backing-dev.c | 67 ++++++++++++++++++++++-
4 files changed, 150 insertions(+), 12 deletions(-)
diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
index 5f5502238bf0..b63420c9cf41 100644
--- a/fs/fs-writeback.c
+++ b/fs/fs-writeback.c
@@ -225,6 +225,12 @@ void wb_wait_for_completion(struct wb_completion *done)
/* one round can affect upto 5 slots */
#define WB_FRN_MAX_IN_FLIGHT 1024 /* don't queue too many concurrently */
+/*
+ * Maximum inodes per isw. A specific value has been chosen to make
+ * struct inode_switch_wbs_context fit into 1024 bytes kmalloc.
+ */
+#define WB_MAX_INODES_PER_ISW 115
+
static atomic_t isw_nr_in_flight = ATOMIC_INIT(0);
static struct workqueue_struct *isw_wq;
@@ -502,6 +508,24 @@ static void inode_switch_wbs_work_fn(struct work_struct *work)
atomic_dec(&isw_nr_in_flight);
}
+static bool inode_prepare_wbs_switch(struct inode *inode,
+ struct bdi_writeback *new_wb)
+{
+ /* while holding I_WB_SWITCH, no one else can update the association */
+ spin_lock(&inode->i_lock);
+ if (!(inode->i_sb->s_flags & SB_ACTIVE) ||
+ inode->i_state & (I_WB_SWITCH | I_FREEING | I_WILL_FREE) ||
+ inode_to_wb(inode) == new_wb) {
+ spin_unlock(&inode->i_lock);
+ return false;
+ }
+ inode->i_state |= I_WB_SWITCH;
+ __iget(inode);
+ spin_unlock(&inode->i_lock);
+
+ return true;
+}
+
/**
* inode_switch_wbs - change the wb association of an inode
* @inode: target inode
@@ -537,17 +561,8 @@ static void inode_switch_wbs(struct inode *inode, int new_wb_id)
if (!isw->new_wb)
goto out_free;
- /* while holding I_WB_SWITCH, no one else can update the association */
- spin_lock(&inode->i_lock);
- if (!(inode->i_sb->s_flags & SB_ACTIVE) ||
- inode->i_state & (I_WB_SWITCH | I_FREEING | I_WILL_FREE) ||
- inode_to_wb(inode) == isw->new_wb) {
- spin_unlock(&inode->i_lock);
+ if (!inode_prepare_wbs_switch(inode, isw->new_wb))
goto out_free;
- }
- inode->i_state |= I_WB_SWITCH;
- __iget(inode);
- spin_unlock(&inode->i_lock);
isw->inodes[0] = inode;
@@ -569,6 +584,64 @@ static void inode_switch_wbs(struct inode *inode, int new_wb_id)
kfree(isw);
}
+/**
+ * cleanup_offline_cgwb - detach associated inodes
+ * @wb: target wb
+ *
+ * Switch all inodes attached to @wb to the bdi's root wb in order to eventually
+ * release the dying @wb. Returns %true if not all inodes were switched and
+ * the function has to be restarted.
+ */
+bool cleanup_offline_cgwb(struct bdi_writeback *wb)
+{
+ struct inode_switch_wbs_context *isw;
+ struct inode *inode;
+ int nr;
+ bool restart = false;
+
+ isw = kzalloc(sizeof(*isw) + WB_MAX_INODES_PER_ISW *
+ sizeof(struct inode *), GFP_KERNEL);
+ if (!isw)
+ return restart;
+
+ /* no need to call wb_get() here: bdi's root wb is not refcounted */
+ isw->new_wb = &wb->bdi->wb;
+
+ nr = 0;
+ spin_lock(&wb->list_lock);
+ list_for_each_entry(inode, &wb->b_attached, i_io_list) {
+ if (!inode_prepare_wbs_switch(inode, isw->new_wb))
+ continue;
+
+ isw->inodes[nr++] = inode;
+
+ if (nr >= WB_MAX_INODES_PER_ISW - 1) {
+ restart = true;
+ break;
+ }
+ }
+ spin_unlock(&wb->list_lock);
+
+ /* no attached inodes? bail out */
+ if (nr == 0) {
+ kfree(isw);
+ return restart;
+ }
+
+ /*
+ * In addition to synchronizing among switchers, I_WB_SWITCH tells
+ * the RCU protected stat update paths to grab the i_page
+ * lock so that stat transfer can synchronize against them.
+ * Let's continue after I_WB_SWITCH is guaranteed to be visible.
+ */
+ INIT_RCU_WORK(&isw->work, inode_switch_wbs_work_fn);
+ queue_rcu_work(isw_wq, &isw->work);
+
+ atomic_inc(&isw_nr_in_flight);
+
+ return restart;
+}
+
/**
* wbc_attach_and_unlock_inode - associate wbc with target inode and unlock it
* @wbc: writeback_control of interest
diff --git a/include/linux/backing-dev-defs.h b/include/linux/backing-dev-defs.h
index 63f52ad2ce7a..1d7edad9914f 100644
--- a/include/linux/backing-dev-defs.h
+++ b/include/linux/backing-dev-defs.h
@@ -155,6 +155,7 @@ struct bdi_writeback {
struct list_head memcg_node; /* anchored at memcg->cgwb_list */
struct list_head blkcg_node; /* anchored at blkcg->cgwb_list */
struct list_head b_attached; /* attached inodes, protected by list_lock */
+ struct list_head offline_node; /* anchored at offline_cgwbs */
union {
struct work_struct release_work;
diff --git a/include/linux/writeback.h b/include/linux/writeback.h
index 8e5c5bb16e2d..95de51c10248 100644
--- a/include/linux/writeback.h
+++ b/include/linux/writeback.h
@@ -221,6 +221,7 @@ void wbc_account_cgroup_owner(struct writeback_control *wbc, struct page *page,
int cgroup_writeback_by_id(u64 bdi_id, int memcg_id, unsigned long nr_pages,
enum wb_reason reason, struct wb_completion *done);
void cgroup_writeback_umount(void);
+bool cleanup_offline_cgwb(struct bdi_writeback *wb);
/**
* inode_attach_wb - associate an inode with its wb
diff --git a/mm/backing-dev.c b/mm/backing-dev.c
index 54c5dc4b8c24..53aee015dc49 100644
--- a/mm/backing-dev.c
+++ b/mm/backing-dev.c
@@ -371,12 +371,16 @@ static void wb_exit(struct bdi_writeback *wb)
#include <linux/memcontrol.h>
/*
- * cgwb_lock protects bdi->cgwb_tree, blkcg->cgwb_list, and memcg->cgwb_list.
- * bdi->cgwb_tree is also RCU protected.
+ * cgwb_lock protects bdi->cgwb_tree, blkcg->cgwb_list, offline_cgwbs and
+ * memcg->cgwb_list. bdi->cgwb_tree is also RCU protected.
*/
static DEFINE_SPINLOCK(cgwb_lock);
static struct workqueue_struct *cgwb_release_wq;
+static LIST_HEAD(offline_cgwbs);
+static void cleanup_offline_cgwbs_workfn(struct work_struct *work);
+static DECLARE_WORK(cleanup_offline_cgwbs_work, cleanup_offline_cgwbs_workfn);
+
static void cgwb_release_workfn(struct work_struct *work)
{
struct bdi_writeback *wb = container_of(work, struct bdi_writeback,
@@ -395,6 +399,11 @@ static void cgwb_release_workfn(struct work_struct *work)
fprop_local_destroy_percpu(&wb->memcg_completions);
percpu_ref_exit(&wb->refcnt);
+
+ spin_lock_irq(&cgwb_lock);
+ list_del(&wb->offline_node);
+ spin_unlock_irq(&cgwb_lock);
+
wb_exit(wb);
WARN_ON_ONCE(!list_empty(&wb->b_attached));
kfree_rcu(wb, rcu);
@@ -414,6 +423,7 @@ static void cgwb_kill(struct bdi_writeback *wb)
WARN_ON(!radix_tree_delete(&wb->bdi->cgwb_tree, wb->memcg_css->id));
list_del(&wb->memcg_node);
list_del(&wb->blkcg_node);
+ list_add(&wb->offline_node, &offline_cgwbs);
percpu_ref_kill(&wb->refcnt);
}
@@ -635,6 +645,57 @@ static void cgwb_bdi_unregister(struct backing_dev_info *bdi)
mutex_unlock(&bdi->cgwb_release_mutex);
}
+/**
+ * cleanup_offline_cgwbs - try to release dying cgwbs
+ *
+ * Try to release dying cgwbs by switching attached inodes to the wb
+ * belonging to the root memory cgroup. Processed wbs are placed at the
+ * end of the list to guarantee the forward progress.
+ *
+ * Should be called with the acquired cgwb_lock lock, which might
+ * be released and re-acquired in the process.
+ */
+static void cleanup_offline_cgwbs_workfn(struct work_struct *work)
+{
+ struct bdi_writeback *wb;
+ LIST_HEAD(processed);
+
+ spin_lock_irq(&cgwb_lock);
+
+ while (!list_empty(&offline_cgwbs)) {
+ wb = list_first_entry(&offline_cgwbs, struct bdi_writeback,
+ offline_node);
+ list_move(&wb->offline_node, &processed);
+
+ /*
+ * If wb is dirty, cleaning up the writeback by switching
+ * attached inodes will result in an effective removal of any
+ * bandwidth restrictions, which isn't the goal. Instead,
+ * it can be postponed until the next time, when all io
+ * will be likely completed. If in the meantime some inodes
+ * will get re-dirtied, they should be eventually switched to
+ * a new cgwb.
+ */
+ if (wb_has_dirty_io(wb))
+ continue;
+
+ if (!wb_tryget(wb))
+ continue;
+
+ spin_unlock_irq(&cgwb_lock);
+ while ((cleanup_offline_cgwb(wb)))
+ cond_resched();
+ spin_lock_irq(&cgwb_lock);
+
+ wb_put(wb);
+ }
+
+ if (!list_empty(&processed))
+ list_splice_tail(&processed, &offline_cgwbs);
+
+ spin_unlock_irq(&cgwb_lock);
+}
+
/**
* wb_memcg_offline - kill all wb's associated with a memcg being offlined
* @memcg: memcg being offlined
@@ -651,6 +712,8 @@ void wb_memcg_offline(struct mem_cgroup *memcg)
cgwb_kill(wb);
memcg_cgwb_list->next = NULL; /* prevent new wb's */
spin_unlock_irq(&cgwb_lock);
+
+ queue_work(system_unbound_wq, &cleanup_offline_cgwbs_work);
}
/**
--
2.31.1
Currently only a single inode can be switched to another writeback
structure at once. That means that to switch an inode, a separate
inode_switch_wbs_context structure must be allocated, and a separate
rcu callback and work must be scheduled.
It's fine for the existing ad-hoc switching, which is not happening
that often, but sub-optimal for the massive switching required to
release a writeback structure. To prepare for it, let's add support
for switching multiple inodes at once.
Instead of containing a single inode pointer, inode_switch_wbs_context
will contain a NULL-terminated array of inode pointers.
inode_do_switch_wbs() will be called for each inode.
To optimize the locking, bdi->wb_switch_rwsem and the old_wb's and
new_wb's list_locks will be acquired and released only once for all
inodes. wb_wakeup() will also be called only once. Instead of
calling wb_put(old_wb) after each successful switch, wb_put_many()
is introduced and used.
Signed-off-by: Roman Gushchin <[email protected]>
---
fs/fs-writeback.c | 105 ++++++++++++++++++-------------
include/linux/backing-dev-defs.h | 18 +++++-
2 files changed, 79 insertions(+), 44 deletions(-)
diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
index d46cdeeb6797..5f5502238bf0 100644
--- a/fs/fs-writeback.c
+++ b/fs/fs-writeback.c
@@ -335,10 +335,18 @@ static struct bdi_writeback *inode_to_wb_and_lock_list(struct inode *inode)
}
struct inode_switch_wbs_context {
- struct inode *inode;
- struct bdi_writeback *new_wb;
-
struct rcu_work work;
+
+ /*
+ * Multiple inodes can be switched at once. The switching procedure
+ * consists of two parts, separated by a RCU grace period. To make
+ * sure that the second part is executed for each inode gone through
+ * the first part, all inode pointers are placed into a NULL-terminated
+ * array embedded into struct inode_switch_wbs_context. Otherwise
+ * an inode could be left in a non-consistent state.
+ */
+ struct bdi_writeback *new_wb;
+ struct inode *inodes[];
};
static void bdi_down_write_wb_switch_rwsem(struct backing_dev_info *bdi)
@@ -351,39 +359,15 @@ static void bdi_up_write_wb_switch_rwsem(struct backing_dev_info *bdi)
up_write(&bdi->wb_switch_rwsem);
}
-static void inode_do_switch_wbs(struct inode *inode,
+static bool inode_do_switch_wbs(struct inode *inode,
+ struct bdi_writeback *old_wb,
struct bdi_writeback *new_wb)
{
- struct backing_dev_info *bdi = inode_to_bdi(inode);
struct address_space *mapping = inode->i_mapping;
- struct bdi_writeback *old_wb = inode->i_wb;
XA_STATE(xas, &mapping->i_pages, 0);
struct page *page;
bool switched = false;
- /*
- * If @inode switches cgwb membership while sync_inodes_sb() is
- * being issued, sync_inodes_sb() might miss it. Synchronize.
- */
- down_read(&bdi->wb_switch_rwsem);
-
- /*
- * By the time control reaches here, RCU grace period has passed
- * since I_WB_SWITCH assertion and all wb stat update transactions
- * between unlocked_inode_to_wb_begin/end() are guaranteed to be
- * synchronizing against the i_pages lock.
- *
- * Grabbing old_wb->list_lock, inode->i_lock and the i_pages lock
- * gives us exclusion against all wb related operations on @inode
- * including IO list manipulations and stat updates.
- */
- if (old_wb < new_wb) {
- spin_lock(&old_wb->list_lock);
- spin_lock_nested(&new_wb->list_lock, SINGLE_DEPTH_NESTING);
- } else {
- spin_lock(&new_wb->list_lock);
- spin_lock_nested(&old_wb->list_lock, SINGLE_DEPTH_NESTING);
- }
spin_lock(&inode->i_lock);
xa_lock_irq(&mapping->i_pages);
@@ -458,25 +442,62 @@ static void inode_do_switch_wbs(struct inode *inode,
xa_unlock_irq(&mapping->i_pages);
spin_unlock(&inode->i_lock);
- spin_unlock(&new_wb->list_lock);
- spin_unlock(&old_wb->list_lock);
-
- up_read(&bdi->wb_switch_rwsem);
- if (switched) {
- wb_wakeup(new_wb);
- wb_put(old_wb);
- }
+ return switched;
}
static void inode_switch_wbs_work_fn(struct work_struct *work)
{
struct inode_switch_wbs_context *isw =
container_of(to_rcu_work(work), struct inode_switch_wbs_context, work);
+ struct backing_dev_info *bdi = inode_to_bdi(isw->inodes[0]);
+ struct bdi_writeback *old_wb = isw->inodes[0]->i_wb;
+ struct bdi_writeback *new_wb = isw->new_wb;
+ unsigned long nr_switched = 0;
+ struct inode **inodep;
+
+ /*
+ * If @inode switches cgwb membership while sync_inodes_sb() is
+ * being issued, sync_inodes_sb() might miss it. Synchronize.
+ */
+ down_read(&bdi->wb_switch_rwsem);
+
+ /*
+ * By the time control reaches here, RCU grace period has passed
+ * since I_WB_SWITCH assertion and all wb stat update transactions
+ * between unlocked_inode_to_wb_begin/end() are guaranteed to be
+ * synchronizing against the i_pages lock.
+ *
+ * Grabbing old_wb->list_lock, inode->i_lock and the i_pages lock
+ * gives us exclusion against all wb related operations on @inode
+ * including IO list manipulations and stat updates.
+ */
+ if (old_wb < new_wb) {
+ spin_lock(&old_wb->list_lock);
+ spin_lock_nested(&new_wb->list_lock, SINGLE_DEPTH_NESTING);
+ } else {
+ spin_lock(&new_wb->list_lock);
+ spin_lock_nested(&old_wb->list_lock, SINGLE_DEPTH_NESTING);
+ }
+
+ for (inodep = isw->inodes; *inodep; inodep++) {
+ WARN_ON_ONCE((*inodep)->i_wb != old_wb);
+ if (inode_do_switch_wbs(*inodep, old_wb, new_wb))
+ nr_switched++;
+ iput(*inodep);
+ }
+
+ spin_unlock(&new_wb->list_lock);
+ spin_unlock(&old_wb->list_lock);
+
+ up_read(&bdi->wb_switch_rwsem);
+
+ if (nr_switched) {
+ wb_wakeup(new_wb);
+ wb_put_many(old_wb, nr_switched);
+ }
- inode_do_switch_wbs(isw->inode, isw->new_wb);
- wb_put(isw->new_wb);
- iput(isw->inode);
+ wb_put(new_wb);
kfree(isw);
atomic_dec(&isw_nr_in_flight);
}
@@ -503,7 +524,7 @@ static void inode_switch_wbs(struct inode *inode, int new_wb_id)
if (atomic_read(&isw_nr_in_flight) > WB_FRN_MAX_IN_FLIGHT)
return;
- isw = kzalloc(sizeof(*isw), GFP_ATOMIC);
+ isw = kzalloc(sizeof(*isw) + 2 * sizeof(struct inode *), GFP_ATOMIC);
if (!isw)
return;
@@ -528,7 +549,7 @@ static void inode_switch_wbs(struct inode *inode, int new_wb_id)
__iget(inode);
spin_unlock(&inode->i_lock);
- isw->inode = inode;
+ isw->inodes[0] = inode;
/*
* In addition to synchronizing among switchers, I_WB_SWITCH tells
diff --git a/include/linux/backing-dev-defs.h b/include/linux/backing-dev-defs.h
index e5dc238ebe4f..63f52ad2ce7a 100644
--- a/include/linux/backing-dev-defs.h
+++ b/include/linux/backing-dev-defs.h
@@ -240,8 +240,9 @@ static inline void wb_get(struct bdi_writeback *wb)
/**
* wb_put - decrement a wb's refcount
* @wb: bdi_writeback to put
+ * @nr: number of references to put
*/
-static inline void wb_put(struct bdi_writeback *wb)
+static inline void wb_put_many(struct bdi_writeback *wb, unsigned long nr)
{
if (WARN_ON_ONCE(!wb->bdi)) {
/*
@@ -252,7 +253,16 @@ static inline void wb_put(struct bdi_writeback *wb)
}
if (wb != &wb->bdi->wb)
- percpu_ref_put(&wb->refcnt);
+ percpu_ref_put_many(&wb->refcnt, nr);
+}
+
+/**
+ * wb_put - decrement a wb's refcount
+ * @wb: bdi_writeback to put
+ */
+static inline void wb_put(struct bdi_writeback *wb)
+{
+ wb_put_many(wb, 1);
}
/**
@@ -281,6 +291,10 @@ static inline void wb_put(struct bdi_writeback *wb)
{
}
+static inline void wb_put_many(struct bdi_writeback *wb, unsigned long nr)
+{
+}
+
static inline bool wb_dying(struct bdi_writeback *wb)
{
return false;
--
2.31.1
Hello,
On Thu, Jun 03, 2021 at 06:31:59PM -0700, Roman Gushchin wrote:
> +bool cleanup_offline_cgwb(struct bdi_writeback *wb)
> +{
> + struct inode_switch_wbs_context *isw;
> + struct inode *inode;
> + int nr;
> + bool restart = false;
> +
> + isw = kzalloc(sizeof(*isw) + WB_MAX_INODES_PER_ISW *
> + sizeof(struct inode *), GFP_KERNEL);
> + if (!isw)
> + return restart;
> +
> + /* no need to call wb_get() here: bdi's root wb is not refcounted */
> + isw->new_wb = &wb->bdi->wb;
Not a deal breaker, but I wonder whether it'd be safer to migrate it to the
nearest live ancestor rather than directly to the root. As adaptive
migration isn't something guaranteed, there's some chance that this can
behave as an escape-to-root path in pathological cases, especially for
inodes which may be written to by multiple cgroups.
Thanks.
--
tejun
On Thu, Jun 03, 2021 at 06:31:53PM -0700, Roman Gushchin wrote:
> To solve the problem, inodes should eventually be detached from the
> corresponding writeback structure. It's inefficient to do it after
> every writeback completion. Instead it can be done whenever the
> original memory cgroup is offlined and the writeback structure is getting
> killed. Scanning over a (potentially long) list of inodes and detaching
> them from the writeback structure can take quite some time. To avoid
> scanning all inodes, attached inodes are kept on a new list (b_attached).
> To make it less noticeable to the user, the scanning and switching are performed
> from a work context.
Sorry for chiming in late but the series looks great to me and the only
comment I have is the migration target on the last patch, which isn't a
critical issue. Please feel free to add
Acked-by: Tejun Heo <[email protected]>
Thanks.
--
tejun
On Fri, Jun 04, 2021 at 11:53:02AM -0400, Tejun Heo wrote:
> On Thu, Jun 03, 2021 at 06:31:53PM -0700, Roman Gushchin wrote:
> > To solve the problem, inodes should eventually be detached from the
> > corresponding writeback structure. It's inefficient to do it after
> > every writeback completion. Instead it can be done whenever the
> > original memory cgroup is offlined and the writeback structure is getting
> > killed. Scanning over a (potentially long) list of inodes and detaching
> > them from the writeback structure can take quite some time. To avoid
> > scanning all inodes, attached inodes are kept on a new list (b_attached).
> > To make it less noticeable to the user, the scanning and switching are performed
> > from a work context.
>
> Sorry for chiming in late but the series looks great to me and the only
> comment I have is the migration target on the last patch, which isn't a
> critical issue. Please feel free to add
>
> Acked-by: Tejun Heo <[email protected]>
Thank you for taking a look and for acking the series!
I agree that switching to the nearest ancestor makes sense. If I remember
correctly, I was doing this in v1 (or at least planned to), but then
switched to zeroing the pointer and then to bdi's wb.
I fixed it in v8 and pushed it here: https://github.com/rgushchin/linux/tree/cgwb.8 .
I'll wait a bit for Jan's and others' feedback and will post v8 on Monday.
Hopefully, it will be the final version.
Btw, how are such patches usually routed? Through Jens's tree?
Thanks!
Hello,
On Fri, Jun 04, 2021 at 03:24:38PM -0700, Roman Gushchin wrote:
> I agree that switching to the nearest ancestor makes sense. If I remember
> correctly, I was doing this in v1 (or at least planned to), but then
> switched to zeroing the pointer and then to bdi's wb.
>
> I fixed it in v8 and pushed it here: https://github.com/rgushchin/linux/tree/cgwb.8 .
> I'll wait a bit for Jan's and others feedback and will post v8 on Monday.
> Hopefully, it will be the final version.
Sounds great.
> Btw, how are such patches usually routed? Through Jens's tree?
I think the past writeback patches went through -mm.
Thanks.
--
tejun
Hello,
On Thu, Jun 03, 2021 at 06:31:53PM -0700, Roman Gushchin wrote:
> When an inode gets dirty for the first time, it's associated
> with a wb structure (see __inode_attach_wb()). It can later be
> switched to another wb (if e.g. some other cgroup is writing a lot of
> data to the same inode), but otherwise stays attached to the original
> wb until being reclaimed.
>
> The problem is that the wb structure holds a reference to the original
> memory and blkcg cgroups. So if an inode was dirty once and is later
> actively used in read-only mode, it has a good chance of pinning down
> the original memory and blkcg cgroups forever. This is often the case with
> services that bring in data for other services, e.g. updating some rpm
> packages.
>
> In real life it becomes a problem due to the large size of the memcg
> structure, which can easily be 1000x larger than an inode. Also, a
> really large number of dying cgroups can raise different scalability
> issues, e.g. making memory reclaim costly and less effective.
>
> To solve the problem, inodes should eventually be detached from the
> corresponding writeback structure. It's inefficient to do it after
> every writeback completion. Instead it can be done whenever the
> original memory cgroup is offlined and the writeback structure is getting
> killed. Scanning over a (potentially long) list of inodes and detaching
> them from the writeback structure can take quite some time. To avoid
> scanning all inodes, attached inodes are kept on a new list (b_attached).
> To make it less noticeable to the user, the scanning and switching are performed
> from a work context.
>
> Big thanks to Jan Kara, Dennis Zhou and Hillf Danton for their ideas and
> contribution to this patchset.
>
> v7:
> - shared locking for multiple inode switching
> - introduced inode_prepare_wbs_switch() helper
> - extended the pre-switch inode check for I_WILL_FREE
> - added comments here and there
>
> v6:
> - extended and reused wbs switching functionality to switch inodes
> on cgwb cleanup
> - fixed offline_list handling
> - switched to the unbound_wq
> - other minor fixes
>
> v5:
> - switch inodes to bdi->wb instead of zeroing inode->i_wb
> - split the single patch into two
> - only cgwbs maintain lists of attached inodes
> - added cond_resched()
> - fixed !CONFIG_CGROUP_WRITEBACK handling
> - extended the list of prohibited inode flags
> - other small fixes
>
>
> Roman Gushchin (6):
> writeback, cgroup: do not switch inodes with I_WILL_FREE flag
> writeback, cgroup: switch to rcu_work API in inode_switch_wbs()
> writeback, cgroup: keep list of inodes attached to bdi_writeback
> writeback, cgroup: split out the functional part of
> inode_switch_wbs_work_fn()
> writeback, cgroup: support switching multiple inodes at once
> writeback, cgroup: release dying cgwbs by switching attached inodes
>
> fs/fs-writeback.c | 302 +++++++++++++++++++++----------
> include/linux/backing-dev-defs.h | 20 +-
> include/linux/writeback.h | 1 +
> mm/backing-dev.c | 69 ++++++-
> 4 files changed, 293 insertions(+), 99 deletions(-)
>
> --
> 2.31.1
>
I too am a bit late to the party. Feel free to add mine as well to the
series.
Acked-by: Dennis Zhou <[email protected]>
I left my one comment on the last patch regarding a possible future
extension.
Thanks,
Dennis
Hello,
On Thu, Jun 03, 2021 at 06:31:59PM -0700, Roman Gushchin wrote:
> Asynchronously try to release dying cgwbs by switching attached inodes
> to the bdi's wb. It helps to get rid of the per-cgroup writeback
> structures themselves and of the pinned memory and block cgroups, which
> are significantly larger structures (mostly due to large per-cpu
> statistics data). This prevents memory waste and helps to avoid
> different scalability problems caused by large piles of dying cgroups.
>
> Reuse the existing mechanism of inode switching used for foreign inode
> detection. To speed things up, batch up to 115 inode switches in a
> single operation (the maximum number is selected so that the resulting
> struct inode_switch_wbs_context can fit into 1024 bytes). Because
> every switch consists of two steps divided by an RCU grace period,
> it would be too slow without batching. Please note that the whole
> batch counts as a single operation (when increasing/decreasing
> isw_nr_in_flight). This allows umounting to keep working (by flushing the
> switching queue), while preventing cleanups from consuming the whole
> switching quota and effectively blocking the frn switching.
>
> A cgwb cleanup operation can fail for various reasons (e.g. not
> enough memory, the cgwb having in-flight/pending io, an attached inode
> in the wrong state, etc.). In this case the next scheduled cleanup will
> make a new attempt. An attempt is made each time a new cgwb is offlined
> (in other words, a memcg and/or a blkcg is deleted by the user). In the
> future an additional attempt scheduled by a timer can be implemented.
I've been thinking about this for a little while and the only thing I'm
not super thrilled by is that the subsequent cleanup work trigger isn't
due to forward progress.
As future work, we could tag the inodes to switch when writeback
completes instead of using a timer. This would be nice because then we
would only have to make a single (successful) pass switching the inodes we
can, and then mark the others to switch. Once a cgwb is killed, no one else
can attach to it, so we should be good there.
I don't think this is a blocker or even necessary; I just wanted to put
it out there as a possible future direction instead of a timer.
Thanks,
Dennis
>
> Signed-off-by: Roman Gushchin <[email protected]>
> ---
> fs/fs-writeback.c | 93 ++++++++++++++++++++++++++++----
> include/linux/backing-dev-defs.h | 1 +
> include/linux/writeback.h | 1 +
> mm/backing-dev.c | 67 ++++++++++++++++++++++-
> 4 files changed, 150 insertions(+), 12 deletions(-)
>
> diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
> index 5f5502238bf0..b63420c9cf41 100644
> --- a/fs/fs-writeback.c
> +++ b/fs/fs-writeback.c
> @@ -225,6 +225,12 @@ void wb_wait_for_completion(struct wb_completion *done)
> /* one round can affect upto 5 slots */
> #define WB_FRN_MAX_IN_FLIGHT 1024 /* don't queue too many concurrently */
>
> +/*
> + * Maximum inodes per isw. A specific value has been chosen to make
> + * struct inode_switch_wbs_context fit into 1024 bytes kmalloc.
> + */
> +#define WB_MAX_INODES_PER_ISW 115
> +
> static atomic_t isw_nr_in_flight = ATOMIC_INIT(0);
> static struct workqueue_struct *isw_wq;
>
> @@ -502,6 +508,24 @@ static void inode_switch_wbs_work_fn(struct work_struct *work)
> atomic_dec(&isw_nr_in_flight);
> }
>
> +static bool inode_prepare_wbs_switch(struct inode *inode,
> + struct bdi_writeback *new_wb)
> +{
> + /* while holding I_WB_SWITCH, no one else can update the association */
> + spin_lock(&inode->i_lock);
> + if (!(inode->i_sb->s_flags & SB_ACTIVE) ||
> + inode->i_state & (I_WB_SWITCH | I_FREEING | I_WILL_FREE) ||
> + inode_to_wb(inode) == new_wb) {
> + spin_unlock(&inode->i_lock);
> + return false;
> + }
> + inode->i_state |= I_WB_SWITCH;
> + __iget(inode);
> + spin_unlock(&inode->i_lock);
> +
> + return true;
> +}
> +
> /**
> * inode_switch_wbs - change the wb association of an inode
> * @inode: target inode
> @@ -537,17 +561,8 @@ static void inode_switch_wbs(struct inode *inode, int new_wb_id)
> if (!isw->new_wb)
> goto out_free;
>
> - /* while holding I_WB_SWITCH, no one else can update the association */
> - spin_lock(&inode->i_lock);
> - if (!(inode->i_sb->s_flags & SB_ACTIVE) ||
> - inode->i_state & (I_WB_SWITCH | I_FREEING | I_WILL_FREE) ||
> - inode_to_wb(inode) == isw->new_wb) {
> - spin_unlock(&inode->i_lock);
> + if (!inode_prepare_wbs_switch(inode, isw->new_wb))
> goto out_free;
> - }
> - inode->i_state |= I_WB_SWITCH;
> - __iget(inode);
> - spin_unlock(&inode->i_lock);
>
> isw->inodes[0] = inode;
>
> @@ -569,6 +584,64 @@ static void inode_switch_wbs(struct inode *inode, int new_wb_id)
> kfree(isw);
> }
>
> +/**
> + * cleanup_offline_cgwb - detach associated inodes
> + * @wb: target wb
> + *
> + * Switch all inodes attached to @wb to the bdi's root wb in order to eventually
> + * release the dying @wb. Returns %true if not all inodes were switched and
> + * the function has to be restarted.
> + */
> +bool cleanup_offline_cgwb(struct bdi_writeback *wb)
> +{
> + struct inode_switch_wbs_context *isw;
> + struct inode *inode;
> + int nr;
> + bool restart = false;
> +
> + isw = kzalloc(sizeof(*isw) + WB_MAX_INODES_PER_ISW *
> + sizeof(struct inode *), GFP_KERNEL);
> + if (!isw)
> + return restart;
> +
> + /* no need to call wb_get() here: bdi's root wb is not refcounted */
> + isw->new_wb = &wb->bdi->wb;
> +
> + nr = 0;
> + spin_lock(&wb->list_lock);
> + list_for_each_entry(inode, &wb->b_attached, i_io_list) {
> + if (!inode_prepare_wbs_switch(inode, isw->new_wb))
> + continue;
> +
> + isw->inodes[nr++] = inode;
> +
> + if (nr >= WB_MAX_INODES_PER_ISW - 1) {
> + restart = true;
> + break;
> + }
> + }
> + spin_unlock(&wb->list_lock);
> +
> + /* no attached inodes? bail out */
> + if (nr == 0) {
> + kfree(isw);
> + return restart;
> + }
> +
> + /*
> + * In addition to synchronizing among switchers, I_WB_SWITCH tells
> + * the RCU protected stat update paths to grab the i_page
> + * lock so that stat transfer can synchronize against them.
> + * Let's continue after I_WB_SWITCH is guaranteed to be visible.
> + */
> + INIT_RCU_WORK(&isw->work, inode_switch_wbs_work_fn);
> + queue_rcu_work(isw_wq, &isw->work);
> +
> + atomic_inc(&isw_nr_in_flight);
> +
> + return restart;
> +}
> +
> /**
> * wbc_attach_and_unlock_inode - associate wbc with target inode and unlock it
> * @wbc: writeback_control of interest
> diff --git a/include/linux/backing-dev-defs.h b/include/linux/backing-dev-defs.h
> index 63f52ad2ce7a..1d7edad9914f 100644
> --- a/include/linux/backing-dev-defs.h
> +++ b/include/linux/backing-dev-defs.h
> @@ -155,6 +155,7 @@ struct bdi_writeback {
> struct list_head memcg_node; /* anchored at memcg->cgwb_list */
> struct list_head blkcg_node; /* anchored at blkcg->cgwb_list */
> struct list_head b_attached; /* attached inodes, protected by list_lock */
> + struct list_head offline_node; /* anchored at offline_cgwbs */
>
> union {
> struct work_struct release_work;
> diff --git a/include/linux/writeback.h b/include/linux/writeback.h
> index 8e5c5bb16e2d..95de51c10248 100644
> --- a/include/linux/writeback.h
> +++ b/include/linux/writeback.h
> @@ -221,6 +221,7 @@ void wbc_account_cgroup_owner(struct writeback_control *wbc, struct page *page,
> int cgroup_writeback_by_id(u64 bdi_id, int memcg_id, unsigned long nr_pages,
> enum wb_reason reason, struct wb_completion *done);
> void cgroup_writeback_umount(void);
> +bool cleanup_offline_cgwb(struct bdi_writeback *wb);
>
> /**
> * inode_attach_wb - associate an inode with its wb
> diff --git a/mm/backing-dev.c b/mm/backing-dev.c
> index 54c5dc4b8c24..53aee015dc49 100644
> --- a/mm/backing-dev.c
> +++ b/mm/backing-dev.c
> @@ -371,12 +371,16 @@ static void wb_exit(struct bdi_writeback *wb)
> #include <linux/memcontrol.h>
>
> /*
> - * cgwb_lock protects bdi->cgwb_tree, blkcg->cgwb_list, and memcg->cgwb_list.
> - * bdi->cgwb_tree is also RCU protected.
> + * cgwb_lock protects bdi->cgwb_tree, blkcg->cgwb_list, offline_cgwbs and
> + * memcg->cgwb_list. bdi->cgwb_tree is also RCU protected.
> */
> static DEFINE_SPINLOCK(cgwb_lock);
> static struct workqueue_struct *cgwb_release_wq;
>
> +static LIST_HEAD(offline_cgwbs);
> +static void cleanup_offline_cgwbs_workfn(struct work_struct *work);
> +static DECLARE_WORK(cleanup_offline_cgwbs_work, cleanup_offline_cgwbs_workfn);
> +
> static void cgwb_release_workfn(struct work_struct *work)
> {
> struct bdi_writeback *wb = container_of(work, struct bdi_writeback,
> @@ -395,6 +399,11 @@ static void cgwb_release_workfn(struct work_struct *work)
>
> fprop_local_destroy_percpu(&wb->memcg_completions);
> percpu_ref_exit(&wb->refcnt);
> +
> + spin_lock_irq(&cgwb_lock);
> + list_del(&wb->offline_node);
> + spin_unlock_irq(&cgwb_lock);
> +
> wb_exit(wb);
> WARN_ON_ONCE(!list_empty(&wb->b_attached));
> kfree_rcu(wb, rcu);
> @@ -414,6 +423,7 @@ static void cgwb_kill(struct bdi_writeback *wb)
> WARN_ON(!radix_tree_delete(&wb->bdi->cgwb_tree, wb->memcg_css->id));
> list_del(&wb->memcg_node);
> list_del(&wb->blkcg_node);
> + list_add(&wb->offline_node, &offline_cgwbs);
> percpu_ref_kill(&wb->refcnt);
> }
>
> @@ -635,6 +645,57 @@ static void cgwb_bdi_unregister(struct backing_dev_info *bdi)
> mutex_unlock(&bdi->cgwb_release_mutex);
> }
>
> +/**
> + * cleanup_offline_cgwbs - try to release dying cgwbs
> + *
> + * Try to release dying cgwbs by switching attached inodes to the wb
> + * belonging to the root memory cgroup. Processed wbs are placed at the
> + * end of the list to guarantee the forward progress.
> + *
> + * Should be called with the acquired cgwb_lock lock, which might
> + * be released and re-acquired in the process.
> + */
> +static void cleanup_offline_cgwbs_workfn(struct work_struct *work)
> +{
> + struct bdi_writeback *wb;
> + LIST_HEAD(processed);
> +
> + spin_lock_irq(&cgwb_lock);
> +
> + while (!list_empty(&offline_cgwbs)) {
> + wb = list_first_entry(&offline_cgwbs, struct bdi_writeback,
> + offline_node);
> + list_move(&wb->offline_node, &processed);
> +
> + /*
> + * If wb is dirty, cleaning up the writeback by switching
> + * attached inodes will result in an effective removal of any
> + * bandwidth restrictions, which isn't the goal. Instead,
> + * it can be postponed until the next time, when all io
> + * will be likely completed. If in the meantime some inodes
> + * will get re-dirtied, they should be eventually switched to
> + * a new cgwb.
> + */
> + if (wb_has_dirty_io(wb))
> + continue;
> +
> + if (!wb_tryget(wb))
> + continue;
> +
> + spin_unlock_irq(&cgwb_lock);
> + while ((cleanup_offline_cgwb(wb)))
> + cond_resched();
> + spin_lock_irq(&cgwb_lock);
> +
> + wb_put(wb);
> + }
> +
> + if (!list_empty(&processed))
> + list_splice_tail(&processed, &offline_cgwbs);
> +
> + spin_unlock_irq(&cgwb_lock);
> +}
> +
> /**
> * wb_memcg_offline - kill all wb's associated with a memcg being offlined
> * @memcg: memcg being offlined
> @@ -651,6 +712,8 @@ void wb_memcg_offline(struct mem_cgroup *memcg)
> cgwb_kill(wb);
> memcg_cgwb_list->next = NULL; /* prevent new wb's */
> spin_unlock_irq(&cgwb_lock);
> +
> + queue_work(system_unbound_wq, &cleanup_offline_cgwbs_work);
> }
>
> /**
> --
> 2.31.1
>
On Thu 03-06-21 18:31:54, Roman Gushchin wrote:
> If an inode's state has the I_WILL_FREE flag set, the inode will be
> freed soon, so there is no point in trying to switch the inode
> to a different cgwb.
>
> I_WILL_FREE has been ignored since the introduction of the inode switching,
> so it apparently doesn't lead to any noticeable issues for users. This is
> why the patch is not intended for a stable backport.
>
> Suggested-by: Jan Kara <[email protected]>
> Signed-off-by: Roman Gushchin <[email protected]>
Looks good. Feel free to add:
Reviewed-by: Jan Kara <[email protected]>
Honza
> ---
> fs/fs-writeback.c | 8 ++++----
> 1 file changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
> index e91980f49388..bd99890599e0 100644
> --- a/fs/fs-writeback.c
> +++ b/fs/fs-writeback.c
> @@ -389,10 +389,10 @@ static void inode_switch_wbs_work_fn(struct work_struct *work)
> xa_lock_irq(&mapping->i_pages);
>
> /*
> - * Once I_FREEING is visible under i_lock, the eviction path owns
> - * the inode and we shouldn't modify ->i_io_list.
> + * Once I_FREEING or I_WILL_FREE are visible under i_lock, the eviction
> + * path owns the inode and we shouldn't modify ->i_io_list.
> */
> - if (unlikely(inode->i_state & I_FREEING))
> + if (unlikely(inode->i_state & (I_FREEING | I_WILL_FREE)))
> goto skip_switch;
>
> trace_inode_switch_wbs(inode, old_wb, new_wb);
> @@ -517,7 +517,7 @@ static void inode_switch_wbs(struct inode *inode, int new_wb_id)
> /* while holding I_WB_SWITCH, no one else can update the association */
> spin_lock(&inode->i_lock);
> if (!(inode->i_sb->s_flags & SB_ACTIVE) ||
> - inode->i_state & (I_WB_SWITCH | I_FREEING) ||
> + inode->i_state & (I_WB_SWITCH | I_FREEING | I_WILL_FREE) ||
> inode_to_wb(inode) == isw->new_wb) {
> spin_unlock(&inode->i_lock);
> goto out_free;
> --
> 2.31.1
>
--
Jan Kara <[email protected]>
SUSE Labs, CR
On Thu 03-06-21 18:31:58, Roman Gushchin wrote:
> Currently only a single inode can be switched to another writeback
> structure at once. That means that to switch an inode, a separate
> inode_switch_wbs_context structure must be allocated, and a separate
> rcu callback and work must be scheduled.
>
> It's fine for the existing ad-hoc switching, which is not happening
> that often, but sub-optimal for the massive switching required to
> release a writeback structure. To prepare for it, let's add support
> for switching multiple inodes at once.
>
> Instead of containing a single inode pointer, inode_switch_wbs_context
> will contain a NULL-terminated array of inode pointers.
> inode_do_switch_wbs() will be called for each inode.
>
> To optimize the locking, bdi->wb_switch_rwsem and the old_wb's and
> new_wb's list_locks will be acquired and released only once for all
> inodes. wb_wakeup() will also be called only once. Instead of
> calling wb_put(old_wb) after each successful switch, wb_put_many()
> is introduced and used.
>
> Signed-off-by: Roman Gushchin <[email protected]>
Looks good except for one small issue:
> + for (inodep = isw->inodes; *inodep; inodep++) {
> + WARN_ON_ONCE((*inodep)->i_wb != old_wb);
> + if (inode_do_switch_wbs(*inodep, old_wb, new_wb))
> + nr_switched++;
> + iput(*inodep);
> + }
You have to be careful here, as iput() can be dropping the last inode
reference, and in that case it can sleep and do a lot of heavy lifting
(which cannot happen under the locks you hold). So you need another loop
after dropping all the locks to do iput() on all inodes. After fixing this
feel free to add:
Reviewed-by: Jan Kara <[email protected]>
Honza
--
Jan Kara <[email protected]>
SUSE Labs, CR
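For illustration, a minimal sketch of the restructuring Jan describes,
assuming the loop lives in inode_switch_wbs_work_fn() with the same local
variables as in the patch (this is not the actual v8 change):

	/* do the switching under the locks, but defer the iput() calls */
	for (inodep = isw->inodes; *inodep; inodep++) {
		WARN_ON_ONCE((*inodep)->i_wb != old_wb);
		if (inode_do_switch_wbs(*inodep, old_wb, new_wb))
			nr_switched++;
	}

	spin_unlock(&new_wb->list_lock);
	spin_unlock(&old_wb->list_lock);
	up_read(&bdi->wb_switch_rwsem);

	/* iput() can drop the last reference and sleep; call it lock-free */
	for (inodep = isw->inodes; *inodep; inodep++)
		iput(*inodep);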
On Thu 03-06-21 18:31:59, Roman Gushchin wrote:
> Asynchronously try to release dying cgwbs by switching attached inodes
> to the bdi's wb. It helps to get rid of the per-cgroup writeback
> structures themselves and of the pinned memory and block cgroups, which
> are significantly larger structures (mostly due to large per-cpu
> statistics data). This prevents memory waste and helps to avoid
> different scalability problems caused by large piles of dying cgroups.
>
> Reuse the existing mechanism of inode switching used for foreign inode
> detection. To speed things up, batch up to 115 inode switches in a
> single operation (the maximum number is selected so that the resulting
> struct inode_switch_wbs_context can fit into 1024 bytes). Because
> every switch consists of two steps divided by an RCU grace period,
> it would be too slow without batching. Please note that the whole
> batch counts as a single operation (when increasing/decreasing
> isw_nr_in_flight). This allows umounting to keep working (by flushing the
> switching queue), while preventing cleanups from consuming the whole
> switching quota and effectively blocking the frn switching.
Hum, your comment about unmount made me think... Isn't all that stuff racy?
generic_shutdown_super() has:
sync_filesystem(sb);
sb->s_flags &= ~SB_ACTIVE;
cgroup_writeback_umount();
and cgroup_writeback_umount() is:
if (atomic_read(&isw_nr_in_flight)) {
/*
* Use rcu_barrier() to wait for all pending callbacks to
* ensure that all in-flight wb switches are in the workqueue.
*/
rcu_barrier();
flush_workqueue(isw_wq);
}
So we are clearly missing an smp_mb() here (likely in
cgroup_writeback_umount()), as clearing of SB_ACTIVE needs to reliably
happen before atomic_read(&isw_nr_in_flight).
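For illustration, roughly where such a barrier could go (a sketch, not an
actual patch):

void cgroup_writeback_umount(void)
{
	/*
	 * SB_ACTIVE should be reliably cleared before checking
	 * isw_nr_in_flight, see generic_shutdown_super().
	 */
	smp_mb();

	if (atomic_read(&isw_nr_in_flight)) {
		/*
		 * Use rcu_barrier() to wait for all pending callbacks to
		 * ensure that all in-flight wb switches are in the workqueue.
		 */
		rcu_barrier();
		flush_workqueue(isw_wq);
	}
}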
Also ...
> +bool cleanup_offline_cgwb(struct bdi_writeback *wb)
> +{
> + struct inode_switch_wbs_context *isw;
> + struct inode *inode;
> + int nr;
> + bool restart = false;
> +
> + isw = kzalloc(sizeof(*isw) + WB_MAX_INODES_PER_ISW *
> + sizeof(struct inode *), GFP_KERNEL);
> + if (!isw)
> + return restart;
> +
> + /* no need to call wb_get() here: bdi's root wb is not refcounted */
> + isw->new_wb = &wb->bdi->wb;
> +
> + nr = 0;
> + spin_lock(&wb->list_lock);
> + list_for_each_entry(inode, &wb->b_attached, i_io_list) {
> + if (!inode_prepare_wbs_switch(inode, isw->new_wb))
> + continue;
> +
> + isw->inodes[nr++] = inode;
> +
> + if (nr >= WB_MAX_INODES_PER_ISW - 1) {
> + restart = true;
> + break;
> + }
> + }
> + spin_unlock(&wb->list_lock);
> +
> + /* no attached inodes? bail out */
> + if (nr == 0) {
> + kfree(isw);
> + return restart;
> + }
> +
> + /*
> + * In addition to synchronizing among switchers, I_WB_SWITCH tells
> + * the RCU protected stat update paths to grab the i_page
> + * lock so that stat transfer can synchronize against them.
> + * Let's continue after I_WB_SWITCH is guaranteed to be visible.
> + */
> + INIT_RCU_WORK(&isw->work, inode_switch_wbs_work_fn);
> + queue_rcu_work(isw_wq, &isw->work);
> +
> + atomic_inc(&isw_nr_in_flight);
... the increment of isw_nr_in_flight needs to happen before we start to
grab any inodes. Otherwise unmount can get past cgroup_writeback_umount()
while we are still holding inode references in cleanup_offline_cgwb(); the
result will be a "Busy inodes after unmount." message and use-after-free
issues (with inode->i_sb, which gets freed).
Frankly, I think a much safer option would be to wait in evict() for
I_WB_SWITCH, similarly to how we wait for I_SYNC (through
inode_wait_for_writeback()). And with that we could do away with
cgroup_writeback_umount() altogether. But I guess that's out of scope of
this series.
Honza
--
Jan Kara <[email protected]>
SUSE Labs, CR
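A rough illustration of the alternative Jan mentions (the helper name is
hypothetical, and since I_WB_SWITCH has no dedicated wait bit,
wait_var_event()/wake_up_var() stand in for the real waiting machinery):

/* evict() side: wait until a pending switch clears I_WB_SWITCH */
static void inode_wait_for_wb_switch(struct inode *inode)
{
	wait_var_event(&inode->i_state,
		       !(READ_ONCE(inode->i_state) & I_WB_SWITCH));
}

/* switching side would pair this with wake_up_var(&inode->i_state)
 * after clearing I_WB_SWITCH */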
On Sat, Jun 05, 2021 at 09:34:41PM +0000, Dennis Zhou wrote:
> Hello,
>
> On Thu, Jun 03, 2021 at 06:31:59PM -0700, Roman Gushchin wrote:
> > Asynchronously try to release dying cgwbs by switching attached inodes
> > to the bdi's wb. It helps to get rid of the per-cgroup writeback
> > structures themselves and of the pinned memory and block cgroups, which
> > are significantly larger structures (mostly due to large per-cpu
> > statistics data). This prevents memory waste and helps to avoid
> > different scalability problems caused by large piles of dying cgroups.
> >
> > Reuse the existing mechanism of inode switching used for foreign inode
> > detection. To speed things up, batch up to 115 inode switches in a
> > single operation (the maximum number is selected so that the resulting
> > struct inode_switch_wbs_context can fit into 1024 bytes). Because
> > every switch consists of two steps divided by an RCU grace period,
> > it would be too slow without batching. Please note that the whole
> > batch counts as a single operation (when increasing/decreasing
> > isw_nr_in_flight). This allows umounting to keep working (by flushing the
> > switching queue), while preventing cleanups from consuming the whole
> > switching quota and effectively blocking the frn switching.
> >
> > A cgwb cleanup operation can fail for various reasons (e.g. not
> > enough memory, the cgwb having in-flight/pending io, an attached inode
> > in the wrong state, etc.). In this case the next scheduled cleanup will
> > make a new attempt. An attempt is made each time a new cgwb is offlined
> > (in other words, a memcg and/or a blkcg is deleted by the user). In the
> > future an additional attempt scheduled by a timer can be implemented.
>
> I've been thinking about this for a little while and the only thing I'm
> not super thrilled by is that the subsequent cleanup work trigger isn't
> due to forward progress.
>
> As future work, we could tag the inodes to switch when writeback
> completes instead of using a timer. This would be nice because then we
> only have to make a single (successful) pass switching the inodes we can
> and then mark the others to switch. Once a cgwb is killed no one else
> can attach to it so we should be good there.
>
> I don't think this is a blocker or even necessary, I just wanted to put
> it out there as possible future direction instead of a timer.
Yeah, I agree that it's a good direction to explore. It will likely be
more intrusive and will require a new inode flag. So I'd leave it for
further improvements.
Thank you for reviewing the series!