2022-12-13 19:25:56

by Waiman Long

Subject: [PATCH-block v3 0/2] blk-cgroup: Fix potential UAF & flush rstat at blkgs destruction path

v3:
- Drop v2 patch 2 as it may not be needed.
- Replace css_tryget() with percpu_ref_is_zero() in patch 1 as
suggested by Tejun.
- Expand the comment in patch 2 to elaborate on the reason for the patch.

v2:
- Remove unnecessary rcu_read_{lock|unlock} from
cgroup_rstat_css_cpu_flush() in patch 3.

It was found that blkcg_destroy_blkgs() may be called with all the blkcg
references gone. This may potentially cause a use-after-free and so
should be fixed. The first patch guards against that; the second patch
flushes the pending rstat updates at the blkgs destruction path.
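
Patch 1 itself isn't reproduced in this excerpt. Based on the v3 note
above, the guard it adds to blkcg_destroy_blkgs() presumably looks
something like the sketch below; only the percpu_ref_is_zero() test is
from the changelog, while the WARN_ON_ONCE() form and the early return
are assumptions:

static void blkcg_destroy_blkgs(struct blkcg *blkcg)
{
	/*
	 * blkcg_destroy_blkgs() shouldn't be called with all the blkcg
	 * references gone; bail out instead of dereferencing a css whose
	 * refcount has already dropped to zero.
	 */
	if (WARN_ON_ONCE(percpu_ref_is_zero(&blkcg->css.refcnt)))
		return;

	/* ... existing blkg teardown continues below ... */
}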

Waiman Long (2):
bdi, blk-cgroup: Fix potential UAF of blkcg
blk-cgroup: Flush stats at blkgs destruction path

block/blk-cgroup.c | 22 ++++++++++++++++++++++
include/linux/cgroup.h | 1 +
kernel/cgroup/rstat.c | 18 ++++++++++++++++++
mm/backing-dev.c | 8 ++++++--
4 files changed, 47 insertions(+), 2 deletions(-)

--
2.31.1


2022-12-13 19:27:58

by Waiman Long

Subject: [PATCH-block v3 2/2] blk-cgroup: Flush stats at blkgs destruction path

As noted by Michal, the blkg_iostat_set's in the lockless list
hold references to blkgs to protect against their removal. Those
blkgs in turn hold references to the blkcg. When a cgroup is being
destroyed, cgroup_rstat_flush() is only called at css_release_work_fn(),
which is called when the blkcg reference count reaches 0. This circular
dependency will prevent the blkcg from being freed until some other
event causes cgroup_rstat_flush() to be called to flush out the pending
blkcg stats.
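
To make the dependency chain concrete, the update side looks roughly
like the sketch below, modeled on the per-cpu stat path
(blk_cgroup_bio_start()) after the lockless-list conversion; the field
and helper names are taken from that earlier patch and may not match
the tree exactly:

	/* Queue this blkg's pending per-cpu stats for a later flush. */
	if (!READ_ONCE(bis->lqueued)) {
		struct llist_head *lhead = this_cpu_ptr(blkcg->lhead);

		llist_add(&bis->lnode, lhead);
		WRITE_ONCE(bis->lqueued, true);

		/*
		 * The queued entry pins the blkg, and the blkg already
		 * pins the blkcg (css_get() at blkg creation), so neither
		 * can be freed until a flush drops these references.
		 */
		percpu_ref_get(&bis->blkg->refcnt);
	}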

To prevent this delayed blkcg removal, add a new
cgroup_rstat_css_cpu_flush() function to flush stats for a given css
and cpu and call it at the blkgs destruction path,
blkcg_destroy_blkgs(), whenever there are still some pending stats to
be flushed. This ensures that the blkcg reference count can reach 0 as
soon as possible.

Signed-off-by: Waiman Long <[email protected]>
Acked-by: Tejun Heo <[email protected]>
---
block/blk-cgroup.c | 15 +++++++++++++++
include/linux/cgroup.h | 1 +
kernel/cgroup/rstat.c | 18 ++++++++++++++++++
3 files changed, 34 insertions(+)

diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
index ca28306aa1b1..ddd27a714d3e 100644
--- a/block/blk-cgroup.c
+++ b/block/blk-cgroup.c
@@ -1084,6 +1084,8 @@ struct list_head *blkcg_get_cgwb_list(struct cgroup_subsys_state *css)
*/
static void blkcg_destroy_blkgs(struct blkcg *blkcg)
{
+ int cpu;
+
/*
* blkcg_destroy_blkgs() shouldn't be called with all the blkcg
* references gone.
@@ -1093,6 +1095,19 @@ static void blkcg_destroy_blkgs(struct blkcg *blkcg)

might_sleep();

+ /*
+ * Flush all the non-empty percpu lockless lists so as to release
+ * the blkg references held by those lists which, in turn, may
+ * allow the blkgs to be freed and release their references to
+ * blkcg speeding up its freeing.
+ */
+ for_each_possible_cpu(cpu) {
+ struct llist_head *lhead = per_cpu_ptr(blkcg->lhead, cpu);
+
+ if (!llist_empty(lhead))
+ cgroup_rstat_css_cpu_flush(&blkcg->css, cpu);
+ }
+
spin_lock_irq(&blkcg->lock);

while (!hlist_empty(&blkcg->blkg_list)) {
diff --git a/include/linux/cgroup.h b/include/linux/cgroup.h
index 528bd44b59e2..6c4e66b3fa84 100644
--- a/include/linux/cgroup.h
+++ b/include/linux/cgroup.h
@@ -766,6 +766,7 @@ void cgroup_rstat_flush(struct cgroup *cgrp);
void cgroup_rstat_flush_irqsafe(struct cgroup *cgrp);
void cgroup_rstat_flush_hold(struct cgroup *cgrp);
void cgroup_rstat_flush_release(void);
+void cgroup_rstat_css_cpu_flush(struct cgroup_subsys_state *css, int cpu);

/*
* Basic resource stats.
diff --git a/kernel/cgroup/rstat.c b/kernel/cgroup/rstat.c
index 793ecff29038..2e44be44351f 100644
--- a/kernel/cgroup/rstat.c
+++ b/kernel/cgroup/rstat.c
@@ -281,6 +281,24 @@ void cgroup_rstat_flush_release(void)
spin_unlock_irq(&cgroup_rstat_lock);
}

+/**
+ * cgroup_rstat_css_cpu_flush - flush stats for the given css and cpu
+ * @css: target css to be flushed
+ * @cpu: the cpu that holds the stats to be flushed
+ *
+ * A lightweight rstat flush operation for a given css and cpu.
+ * Only the per-cpu cgroup_rstat_cpu_lock is held for mutual exclusion;
+ * the cgroup_rstat_lock isn't used.
+ */
+void cgroup_rstat_css_cpu_flush(struct cgroup_subsys_state *css, int cpu)
+{
+ raw_spinlock_t *cpu_lock = per_cpu_ptr(&cgroup_rstat_cpu_lock, cpu);
+
+ raw_spin_lock_irq(cpu_lock);
+ css->ss->css_rstat_flush(css, cpu);
+ raw_spin_unlock_irq(cpu_lock);
+}
+
int cgroup_rstat_init(struct cgroup *cgrp)
{
int cpu;
--
2.31.1

2022-12-13 19:42:41

by Tejun Heo

Subject: Re: [PATCH-block v3 2/2] blk-cgroup: Flush stats at blkgs destruction path

On Tue, Dec 13, 2022 at 01:44:46PM -0500, Waiman Long wrote:
> + /*
> + * Flush all the non-empty percpu lockless lists so as to release
> + * the blkg references held by those lists which, in turn, may
> + * allow the blkgs to be freed and release their references to
> + * blkcg speeding up its freeing.
> + */

Can you mention the possible deadlock explicitly? This sounds more like an
optimization.

Thanks.

--
tejun

2022-12-14 02:08:27

by Waiman Long

Subject: Re: [PATCH-block v3 2/2] blk-cgroup: Flush stats at blkgs destruction path


On 12/13/22 14:30, Tejun Heo wrote:
> On Tue, Dec 13, 2022 at 01:44:46PM -0500, Waiman Long wrote:
>> + /*
>> + * Flush all the non-empty percpu lockless lists so as to release
>> + * the blkg references held by those lists which, in turn, may
>> + * allow the blkgs to be freed and release their references to
>> + * blkcg speeding up its freeing.
>> + */
> Can you mention the possible deadlock explicitly? This sounds more like an
> optimization.

I am mostly thinking about the optimization aspect. Yes, a deadlock, in
the sense that both the blkgs and the blkcg remain offline but are never
freed, is possible because of the references held in those lockless
lists. It is a problem if blkcg is the only controller in a cgroup. For
a cgroup that has both the blkcg and memory controllers, it shouldn't be
a problem, as the cgroup_rstat_flush() call in the release path of the
memory cgroup will flush the blkcg stats too. Right, I will update the
comment to mention that.
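
For reference, this is roughly the release path I mean (a simplified
excerpt in the spirit of css_release_work_fn() in
kernel/cgroup/cgroup.c; trimmed, so treat it as a sketch):

	/*
	 * Releasing any rstat-enabled css (e.g. the memory css) flushes
	 * the whole cgroup, which invokes every subsystem's
	 * css_rstat_flush() and so also drains blkcg's pending per-cpu
	 * updates.
	 */
	if (!list_empty(&css->rstat_css_node)) {
		cgroup_rstat_flush(cgrp);
		list_del_rcu(&css->rstat_css_node);
	}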

Cheers,
Longman