When an inode is dirtied for the first time, it is associated with a
wb structure (see __inode_attach_wb()). It can later be switched to
another wb (e.g. if some other cgroup is writing a lot of data to the
same inode), but otherwise it stays attached to the original wb until
it is reclaimed.
The problem is that the wb structure holds a reference to the original
memory and blkcg cgroups. So if an inode was dirtied once and is later
actively used read-only, it has a good chance of pinning the original
memory and blkcg cgroups forever. This is often the case with services
that bring in data for other services, e.g. updating some rpm
packages.
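For orientation, the associations involved look roughly like this (a
heavily simplified sketch; the field names match the real structures
under CONFIG_CGROUP_WRITEBACK, everything else is omitted):

/* simplified sketch, not the real definitions */
struct bdi_writeback {
        struct cgroup_subsys_state *memcg_css;  /* holds a css reference */
        struct cgroup_subsys_state *blkcg_css;  /* holds a css reference */
        /* ... io lists, stats, refcount, ... */
};

struct inode {
        struct bdi_writeback *i_wb;     /* set on first dirtying */
        /* ... */
};

As long as the inode stays attached, inode->i_wb keeps the wb alive,
and the wb in turn keeps both cgroup css references alive.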
In real life this becomes a problem due to the large size of the memcg
structure, which can easily be 1000x larger than an inode. A really
large number of dying cgroups also raises various scalability issues,
e.g. making memory reclaim costly and less effective.
To solve the problem, inodes should eventually be detached from the
corresponding writeback structure. It's inefficient to do it after
every writeback completion; instead it can be done whenever the
original memory cgroup is offlined and the writeback structure is
being killed. Scanning a (potentially long) list of inodes and
detaching them from the writeback structure can take quite some time.
To avoid scanning all inodes, attached inodes are kept on a new list
(b_attached). To make it less noticeable to the user, the scanning and
switching are performed from a work context, roughly as sketched below.
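Very roughly, the cleanup introduced at the end of the series does
something like the following. This is an illustrative sketch only, not
the actual patch: b_attached and the "nearest living ancestor" idea
match the series, while the *_sketch() helpers are hypothetical
stand-ins for the real locking, batching and switching machinery:

/* illustrative sketch only -- invoked from a work item when a cgwb dies */
static void cleanup_offline_wb_sketch(struct bdi_writeback *wb)
{
        struct cgroup_subsys_state *css;
        struct bdi_writeback *new_wb;
        struct inode *inode;

        /* pick the nearest living ancestor of the dying memcg */
        for (css = wb->memcg_css->parent; css; css = css->parent)
                if (css_is_online_sketch(css))          /* hypothetical helper */
                        break;

        new_wb = wb_lookup_sketch(wb->bdi, css);        /* hypothetical helper */

        /* walk the new b_attached list and queue the inodes for switching */
        spin_lock(&wb->list_lock);
        list_for_each_entry(inode, &wb->b_attached, i_io_list)
                queue_wb_switch_sketch(inode, new_wb);  /* hypothetical helper */
        spin_unlock(&wb->list_lock);
}

Doing this from a workqueue keeps the latency out of the cgroup
offlining path itself.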
Big thanks to Jan Kara, Dennis Zhou, Hillf Danton and Tejun Heo for their ideas
and contribution to this patchset.
v8:
- switch inodes to a nearest living ancestor wb instead of root wb
- added two inodes switching fixes suggested by Jan Kara
v7:
- shared locking for multiple inode switching
- introduced inode_prepare_wbs_switch() helper
- extended the pre-switch inode check for I_WILL_FREE
- added comments here and there
v6:
- extended and reused wbs switching functionality to switch inodes
on cgwb cleanup
- fixed offline_list handling
- switched to the unbound_wq
- other minor fixes
v5:
- switch inodes to bdi->wb instead of zeroing inode->i_wb
- split the single patch into two
- only cgwbs maintain lists of attached inodes
- added cond_resched()
- fixed !CONFIG_CGROUP_WRITEBACK handling
- extended the list of prohibited inode flags
- other small fixes
Roman Gushchin (8):
writeback, cgroup: do not switch inodes with I_WILL_FREE flag
writeback, cgroup: add smp_mb() to cgroup_writeback_umount()
writeback, cgroup: increment isw_nr_in_flight before grabbing an inode
writeback, cgroup: switch to rcu_work API in inode_switch_wbs()
writeback, cgroup: keep list of inodes attached to bdi_writeback
writeback, cgroup: split out the functional part of
inode_switch_wbs_work_fn()
writeback, cgroup: support switching multiple inodes at once
writeback, cgroup: release dying cgwbs by switching attached inodes
fs/fs-writeback.c | 323 +++++++++++++++++++++----------
include/linux/backing-dev-defs.h | 20 +-
include/linux/writeback.h | 1 +
mm/backing-dev.c | 69 ++++++-
4 files changed, 312 insertions(+), 101 deletions(-)
--
2.31.1
Split out the functional part of the inode_switch_wbs_work_fn()
function as inode_do_switch_wbs() so it can be reused later for
switching inodes attached to dying cgwbs.
This commit doesn't bring any functional changes.
Signed-off-by: Roman Gushchin <[email protected]>
Reviewed-by: Jan Kara <[email protected]>
Acked-by: Tejun Heo <[email protected]>
Acked-by: Dennis Zhou <[email protected]>
---
fs/fs-writeback.c | 19 +++++++++++--------
1 file changed, 11 insertions(+), 8 deletions(-)
diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
index 87b305ee5348..5520a6b5cc4d 100644
--- a/fs/fs-writeback.c
+++ b/fs/fs-writeback.c
@@ -351,15 +351,12 @@ static void bdi_up_write_wb_switch_rwsem(struct backing_dev_info *bdi)
up_write(&bdi->wb_switch_rwsem);
}
-static void inode_switch_wbs_work_fn(struct work_struct *work)
+static void inode_do_switch_wbs(struct inode *inode,
+ struct bdi_writeback *new_wb)
{
- struct inode_switch_wbs_context *isw =
- container_of(to_rcu_work(work), struct inode_switch_wbs_context, work);
- struct inode *inode = isw->inode;
struct backing_dev_info *bdi = inode_to_bdi(inode);
struct address_space *mapping = inode->i_mapping;
struct bdi_writeback *old_wb = inode->i_wb;
- struct bdi_writeback *new_wb = isw->new_wb;
XA_STATE(xas, &mapping->i_pages, 0);
struct page *page;
bool switched = false;
@@ -470,11 +467,17 @@ static void inode_switch_wbs_work_fn(struct work_struct *work)
wb_wakeup(new_wb);
wb_put(old_wb);
}
- wb_put(new_wb);
+}
- iput(inode);
- kfree(isw);
+static void inode_switch_wbs_work_fn(struct work_struct *work)
+{
+ struct inode_switch_wbs_context *isw =
+ container_of(to_rcu_work(work), struct inode_switch_wbs_context, work);
+ inode_do_switch_wbs(isw->inode, isw->new_wb);
+ wb_put(isw->new_wb);
+ iput(isw->inode);
+ kfree(isw);
atomic_dec(&isw_nr_in_flight);
}
--
2.31.1
isw_nr_in_flight is used to determine whether the inode switch queue
should be flushed from the umount path. Currently it's increased only
after grabbing an inode and even after scheduling the switch work. It
means the umount path can proceed past cleanup_offline_cgwb() with
active inode references, which can result in a "Busy inodes after
unmount." message and use-after-free issues (with inode->i_sb, which
gets freed).
Fix it by incrementing isw_nr_in_flight before doing anything with
the inode and decrementing it in the case when switching wasn't scheduled.
The problem hasn't yet been seen in real life; it was discovered
by Jan Kara by code inspection.
Suggested-by: Jan Kara <[email protected]>
Signed-off-by: Roman Gushchin <[email protected]>
---
fs/fs-writeback.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
index 3564efcc4b78..e2cc860a001b 100644
--- a/fs/fs-writeback.c
+++ b/fs/fs-writeback.c
@@ -505,6 +505,8 @@ static void inode_switch_wbs(struct inode *inode, int new_wb_id)
if (!isw)
return;
+ atomic_inc(&isw_nr_in_flight);
+
/* find and pin the new wb */
rcu_read_lock();
memcg_css = css_from_id(new_wb_id, &memory_cgrp_subsys);
@@ -535,11 +537,10 @@ static void inode_switch_wbs(struct inode *inode, int new_wb_id)
* Let's continue after I_WB_SWITCH is guaranteed to be visible.
*/
call_rcu(&isw->rcu_head, inode_switch_wbs_rcu_fn);
-
- atomic_inc(&isw_nr_in_flight);
return;
out_free:
+ atomic_dec(&isw_nr_in_flight);
if (isw->new_wb)
wb_put(isw->new_wb);
kfree(isw);
--
2.31.1
A full memory barrier is required between clearing the SB_ACTIVE flag
in generic_shutdown_super() and checking isw_nr_in_flight in
cgroup_writeback_umount(); otherwise a new switch operation might
be scheduled after atomic_read(&isw_nr_in_flight) returned 0.
This would result in a non-flushed isw_wq and a potential crash.
The problem hasn't yet been seen in real life; it was discovered
by Jan Kara by code inspection.
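The required ordering is the classic store-buffer pattern: each side
must not have its first access reordered past its second. A minimal,
purely illustrative sketch with hypothetical names (not kernel code;
how the pairing ordering is provided on the switching side is not
shown here):

/* umount side: cf. generic_shutdown_super() + cgroup_writeback_umount() */
WRITE_ONCE(sb_active, 0);               /* SB_ACTIVE is cleared */
smp_mb();                               /* the barrier added by this patch */
if (atomic_read(&in_flight))
        flush_switch_queue();           /* hypothetical: flush isw_wq */

/* switching side: cf. inode_switch_wbs() */
atomic_inc(&in_flight);
/* (pairing ordering on this side is not shown here) */
if (!READ_ONCE(sb_active))
        bail_out();                     /* hypothetical: don't schedule the switch */

With this ordering, either the switching side sees SB_ACTIVE already
cleared and bails out, or the umount side sees a non-zero
isw_nr_in_flight and flushes the queue.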
Suggested-by: Jan Kara <[email protected]>
Signed-off-by: Roman Gushchin <[email protected]>
---
fs/fs-writeback.c | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
index bd99890599e0..3564efcc4b78 100644
--- a/fs/fs-writeback.c
+++ b/fs/fs-writeback.c
@@ -1000,6 +1000,12 @@ int cgroup_writeback_by_id(u64 bdi_id, int memcg_id, unsigned long nr,
*/
void cgroup_writeback_umount(void)
{
+ /*
+ * SB_ACTIVE should be reliably cleared before checking
+ * isw_nr_in_flight, see generic_shutdown_super().
+ */
+ smp_mb();
+
if (atomic_read(&isw_nr_in_flight)) {
/*
* Use rcu_barrier() to wait for all pending callbacks to
--
2.31.1
On Mon 07-06-21 18:31:17, Roman Gushchin wrote:
> A full memory barrier is required between clearing the SB_ACTIVE flag
> in generic_shutdown_super() and checking isw_nr_in_flight in
> cgroup_writeback_umount(); otherwise a new switch operation might
> be scheduled after atomic_read(&isw_nr_in_flight) returned 0.
> This would result in a non-flushed isw_wq and a potential crash.
>
> The problem hasn't yet been seen in real life; it was discovered
> by Jan Kara by code inspection.
>
> Suggested-by: Jan Kara <[email protected]>
> Signed-off-by: Roman Gushchin <[email protected]>
Looks good. Feel free to add:
Reviewed-by: Jan Kara <[email protected]>
Honza
> ---
> fs/fs-writeback.c | 6 ++++++
> 1 file changed, 6 insertions(+)
>
> diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
> index bd99890599e0..3564efcc4b78 100644
> --- a/fs/fs-writeback.c
> +++ b/fs/fs-writeback.c
> @@ -1000,6 +1000,12 @@ int cgroup_writeback_by_id(u64 bdi_id, int memcg_id, unsigned long nr,
> */
> void cgroup_writeback_umount(void)
> {
> + /*
> + * SB_ACTIVE should be reliably cleared before checking
> + * isw_nr_in_flight, see generic_shutdown_super().
> + */
> + smp_mb();
> +
> if (atomic_read(&isw_nr_in_flight)) {
> /*
> * Use rcu_barrier() to wait for all pending callbacks to
> --
> 2.31.1
>
--
Jan Kara <[email protected]>
SUSE Labs, CR
On Mon 07-06-21 18:31:18, Roman Gushchin wrote:
> isw_nr_in_flight is used to determine whether the inode switch queue
> should be flushed from the umount path. Currently it's increased only
> after grabbing an inode and even after scheduling the switch work. It
> means the umount path can proceed past cleanup_offline_cgwb() with
> active inode references, which can result in a "Busy inodes after
> unmount." message and use-after-free issues (with inode->i_sb, which
> gets freed).
>
> Fix it by incrementing isw_nr_in_flight before doing anything with
> the inode and decrementing it in the case when switching wasn't scheduled.
>
> The problem hasn't yet been seen in real life; it was discovered
> by Jan Kara by code inspection.
>
> Suggested-by: Jan Kara <[email protected]>
> Signed-off-by: Roman Gushchin <[email protected]>
Looks good. Feel free to add:
Reviewed-by: Jan Kara <[email protected]>
Honza
> ---
> fs/fs-writeback.c | 5 +++--
> 1 file changed, 3 insertions(+), 2 deletions(-)
>
> diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
> index 3564efcc4b78..e2cc860a001b 100644
> --- a/fs/fs-writeback.c
> +++ b/fs/fs-writeback.c
> @@ -505,6 +505,8 @@ static void inode_switch_wbs(struct inode *inode, int new_wb_id)
> if (!isw)
> return;
>
> + atomic_inc(&isw_nr_in_flight);
> +
> /* find and pin the new wb */
> rcu_read_lock();
> memcg_css = css_from_id(new_wb_id, &memory_cgrp_subsys);
> @@ -535,11 +537,10 @@ static void inode_switch_wbs(struct inode *inode, int new_wb_id)
> * Let's continue after I_WB_SWITCH is guaranteed to be visible.
> */
> call_rcu(&isw->rcu_head, inode_switch_wbs_rcu_fn);
> -
> - atomic_inc(&isw_nr_in_flight);
> return;
>
> out_free:
> + atomic_dec(&isw_nr_in_flight);
> if (isw->new_wb)
> wb_put(isw->new_wb);
> kfree(isw);
> --
> 2.31.1
>
--
Jan Kara <[email protected]>
SUSE Labs, CR