2023-01-18 13:09:25

by Yu Kuai

Subject: [PATCH -next v2 0/3] blk-cgroup: make sure pd_free_fn() is called in order

From: Yu Kuai <[email protected]>

The problem was originally found in iocost ([1]): the ioc can be freed
inside ioc_pd_free(). Later we found that there are more problems in
iocost ([2]).

After some discussion, as suggested by Tejun ([3]), we decided to first
fix the problem in the blk-cgroup layer that a parent pd can be freed
before its child pd. The problem in [1] will be fixed later, once this
patchset is applied.

[1] https://lore.kernel.org/all/[email protected]/
[2] https://lore.kernel.org/all/[email protected]/
[3] https://lore.kernel.org/all/[email protected]/
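
To make the ordering issue concrete, here is a minimal sketch of a
policy pd_free_fn() that still dereferences the parent's pd. This is
hypothetical illustration code only ('example_pd' and 'propagate_stats'
are made-up names, not taken from iocost):

	/* child's pd_free_fn(); may run long after pd_offline_fn() */
	static void example_pd_free_fn(struct blkg_policy_data *pd)
	{
		struct blkcg_gq *parent = pd->blkg->parent;

		/* UAF if the parent's pd was already freed */
		if (parent)
			propagate_stats(parent->pd[pd->plid]);

		kfree(container_of(pd, struct example_pd, pd));
	}

If the parent's pd_free_fn() can run first, the parent->pd[] access
above is a use-after-free; this series guarantees that children are
freed first.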

Yu Kuai (3):
blk-cgroup: dropping parent refcount after pd_free_fn() is done
blk-cgroup: support to track if policy is online
blk-cgroup: synchronize pd_free_fn() from blkg_free_workfn() and
blkcg_deactivate_policy()

block/blk-cgroup.c | 61 +++++++++++++++++++++++++++++++-----------
block/blk-cgroup.h | 1 +
include/linux/blkdev.h | 1 +
3 files changed, 48 insertions(+), 15 deletions(-)

--
2.31.1


2023-01-18 13:10:24

by Yu Kuai

Subject: [PATCH -next v2 1/3] blk-cgroup: dropping parent refcount after pd_free_fn() is done

From: Yu Kuai <[email protected]>

Some cgroup policies will access the parent pd through the child pd even
after pd_offline_fn() is done. If pd_free_fn() for the parent is called
before the child's, a UAF can be triggered. Hence it's better to
guarantee the order of pd_free_fn().

Currently the refcount of the parent blkg is dropped in __blkg_release(),
which is before pd_free_fn() is called from blkg_free_workfn(), and
blkg_free_workfn() runs asynchronously.

This patch makes sure pd_free_fn() called from removing a cgroup is
ordered by delaying dropping the parent refcount until after pd_free_fn()
is called for the child.

BTW, pd_free_fn() will also be called from blkcg_deactivate_policy() when
deleting a device; the following patches will guarantee the order there
as well.
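
In other words, the teardown order for a child blkg changes as follows
(a sketch of the call chains, not literal code):

Before:
 __blkg_release(child)
  css_put(&child->blkcg->css)
  blkg_put(parent)              // parent may be freed at this point
  blkg_free(child)
   blkg_free_workfn()           // asynchronous
    pd_free_fn(child pd)        // may still touch parent pd: UAF

After:
 __blkg_release(child)
  css_put(&child->blkcg->css)
  blkg_free(child)
   blkg_free_workfn()           // asynchronous
    pd_free_fn(child pd)        // parent pd is still alive here
    blkg_put(parent)            // dropped only after child's pd_free_fn()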

Signed-off-by: Yu Kuai <[email protected]>
---
block/blk-cgroup.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
index 4c94a6560f62..c6d7d1fce65a 100644
--- a/block/blk-cgroup.c
+++ b/block/blk-cgroup.c
@@ -124,6 +124,8 @@ static void blkg_free_workfn(struct work_struct *work)
 		if (blkg->pd[i])
 			blkcg_policy[i]->pd_free_fn(blkg->pd[i]);
 
+	if (blkg->parent)
+		blkg_put(blkg->parent);
 	if (blkg->q)
 		blk_put_queue(blkg->q);
 	free_percpu(blkg->iostat_cpu);
@@ -158,8 +160,6 @@ static void __blkg_release(struct rcu_head *rcu)
 
 	/* release the blkcg and parent blkg refs this blkg has been holding */
 	css_put(&blkg->blkcg->css);
-	if (blkg->parent)
-		blkg_put(blkg->parent);
 	blkg_free(blkg);
 }

--
2.31.1

2023-01-18 13:34:12

by Yu Kuai

Subject: [PATCH -next v2 2/3] blk-cgroup: support to track if policy is online

From: Yu Kuai <[email protected]>

A new field 'online' is added to blkg_policy_date to fix the following
2 problems:

1) In blkcg_activate_policy(), if pd_alloc_fn() with 'GFP_NOWAIT'
fails, 'queue_lock' will be dropped and pd_alloc_fn() will try again
without 'GFP_NOWAIT'. In the meantime, removing the cgroup can race
with it, and pd_offline_fn() will be called without pd_init_fn() and
pd_online_fn(), as sketched below. This way a null-ptr-dereference can
be triggered.

2) In order to synchronize pd_free_fn() from blkg_free_workfn() and
blkcg_deactivate_policy(), 'list_del_init(&blkg->q_node)' will be
delayed to blkg_free_workfn(), hence pd_offline_fn() can be called
first in blkg_destroy(), and then blkcg_deactivate_policy() would
call it again; we must prevent that.

The new field 'online' will be set after pd_online_fn() and cleared
after pd_offline_fn(); pd_offline_fn() will only be called if 'online'
is set.
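
For example, problem 1) can race as follows (a sketch; with 'online' set
only after pd_online_fn(), the pd_offline_fn() call below is skipped):

t1: blkcg_activate_policy		t2: remove cgroup
 pd_alloc_fn(GFP_NOWAIT) fails
 spin_unlock_irq(&q->queue_lock)
 // retry pd_alloc_fn() without
 // GFP_NOWAIT
					blkcg_destroy_blkgs
					 blkg_destroy
					  pd_offline_fn
					  // pd_init_fn() and pd_online_fn()
					  // were never called for this pd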

Signed-off-by: Yu Kuai <[email protected]>
---
block/blk-cgroup.c | 24 +++++++++++++++++-------
block/blk-cgroup.h | 1 +
2 files changed, 18 insertions(+), 7 deletions(-)

diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
index c6d7d1fce65a..75f3c4460715 100644
--- a/block/blk-cgroup.c
+++ b/block/blk-cgroup.c
@@ -288,6 +288,7 @@ static struct blkcg_gq *blkg_alloc(struct blkcg *blkcg, struct gendisk *disk,
 		blkg->pd[i] = pd;
 		pd->blkg = blkg;
 		pd->plid = i;
+		pd->online = false;
 	}
 
 	return blkg;
@@ -359,8 +360,11 @@ static struct blkcg_gq *blkg_create(struct blkcg *blkcg, struct gendisk *disk,
 		for (i = 0; i < BLKCG_MAX_POLS; i++) {
 			struct blkcg_policy *pol = blkcg_policy[i];
 
-			if (blkg->pd[i] && pol->pd_online_fn)
-				pol->pd_online_fn(blkg->pd[i]);
+			if (blkg->pd[i]) {
+				if (pol->pd_online_fn)
+					pol->pd_online_fn(blkg->pd[i]);
+				blkg->pd[i]->online = true;
+			}
 		}
 	}
 	blkg->online = true;
@@ -465,8 +469,11 @@ static void blkg_destroy(struct blkcg_gq *blkg)
 	for (i = 0; i < BLKCG_MAX_POLS; i++) {
 		struct blkcg_policy *pol = blkcg_policy[i];
 
-		if (blkg->pd[i] && pol->pd_offline_fn)
-			pol->pd_offline_fn(blkg->pd[i]);
+		if (blkg->pd[i] && blkg->pd[i]->online) {
+			if (pol->pd_offline_fn)
+				pol->pd_offline_fn(blkg->pd[i]);
+			blkg->pd[i]->online = false;
+		}
 	}
 
 	blkg->online = false;
@@ -1448,6 +1455,7 @@ int blkcg_activate_policy(struct request_queue *q,
 		blkg->pd[pol->plid] = pd;
 		pd->blkg = blkg;
 		pd->plid = pol->plid;
+		pd->online = false;
 	}
 
 	/* all allocated, init in the same order */
@@ -1455,9 +1463,11 @@ int blkcg_activate_policy(struct request_queue *q,
 	list_for_each_entry_reverse(blkg, &q->blkg_list, q_node)
 		pol->pd_init_fn(blkg->pd[pol->plid]);
 
-	if (pol->pd_online_fn)
-		list_for_each_entry_reverse(blkg, &q->blkg_list, q_node)
+	list_for_each_entry_reverse(blkg, &q->blkg_list, q_node) {
+		if (pol->pd_online_fn)
 			pol->pd_online_fn(blkg->pd[pol->plid]);
+		blkg->pd[pol->plid]->online = true;
+	}
 
 	__set_bit(pol->plid, q->blkcg_pols);
 	ret = 0;
@@ -1519,7 +1529,7 @@ void blkcg_deactivate_policy(struct request_queue *q,
 
 		spin_lock(&blkcg->lock);
 		if (blkg->pd[pol->plid]) {
-			if (pol->pd_offline_fn)
+			if (blkg->pd[pol->plid]->online && pol->pd_offline_fn)
 				pol->pd_offline_fn(blkg->pd[pol->plid]);
 			pol->pd_free_fn(blkg->pd[pol->plid]);
 			blkg->pd[pol->plid] = NULL;
diff --git a/block/blk-cgroup.h b/block/blk-cgroup.h
index 1e94e404eaa8..b13ee84f358e 100644
--- a/block/blk-cgroup.h
+++ b/block/blk-cgroup.h
@@ -135,6 +135,7 @@ struct blkg_policy_data {
 	/* the blkg and policy id this per-policy data belongs to */
 	struct blkcg_gq *blkg;
 	int plid;
+	bool online;
 };
 
 /*
--
2.31.1

2023-01-18 13:34:36

by Yu Kuai

Subject: [PATCH -next v2 3/3] blk-cgroup: synchronize pd_free_fn() from blkg_free_workfn() and blkcg_deactivate_policy()

From: Yu Kuai <[email protected]>

Currently parent pd can be freed before child pd:

t1: remove cgroup C1
    blkcg_destroy_blkgs
     blkg_destroy
      list_del_init(&blkg->q_node)
      // remove blkg from queue list
      percpu_ref_kill(&blkg->refcnt)
       blkg_release
        call_rcu

t2: from t1
    __blkg_release
     blkg_free
      schedule_work
                                t4: deactivate policy
                                    blkcg_deactivate_policy
                                     pd_free_fn
                                     // parent of C1 is freed first
t3: from t2
    blkg_free_workfn
     pd_free_fn

If a policy (for example, ioc_timer_fn() from iocost) accesses the parent
pd from the child pd after pd_offline_fn(), a UAF can be triggered.

Fix the problem by delaying 'list_del_init(&blkg->q_node)' from
blkg_destroy() to blkg_free_workfn(), and use a new disk level mutex to
protect blkg_free_workfn() and blkcg_deactivate_policy).
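
With 'blkcg_mutex', the two pd_free_fn() paths serialize on the queue's
blkg list; a sketch of the intended behaviour:

blkcg_deactivate_policy		blkg_free_workfn
 mutex_lock(&q->blkcg_mutex)
 // blkg still on q->blkg_list:
 // pd_free_fn() is called and
 // blkg->pd[plid] set to NULL
 mutex_unlock(&q->blkcg_mutex)
				 mutex_lock(&q->blkcg_mutex)
				 // blkg->pd[i] is NULL, skipped
				 list_del_init(&blkg->q_node)
				 mutex_unlock(&q->blkcg_mutex)

Either path frees a given pd exactly once, and a child's pd is freed
before its parent's in both paths.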

Signed-off-by: Yu Kuai <[email protected]>
---
block/blk-cgroup.c | 33 +++++++++++++++++++++++++++------
include/linux/blkdev.h | 1 +
2 files changed, 28 insertions(+), 6 deletions(-)

diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
index 75f3c4460715..4098e8030e01 100644
--- a/block/blk-cgroup.c
+++ b/block/blk-cgroup.c
@@ -118,16 +118,26 @@ static void blkg_free_workfn(struct work_struct *work)
 {
 	struct blkcg_gq *blkg = container_of(work, struct blkcg_gq,
 					     free_work);
+	struct request_queue *q = blkg->q;
 	int i;
 
+	if (q)
+		mutex_lock(&q->blkcg_mutex);
+
 	for (i = 0; i < BLKCG_MAX_POLS; i++)
 		if (blkg->pd[i])
 			blkcg_policy[i]->pd_free_fn(blkg->pd[i]);
 
 	if (blkg->parent)
 		blkg_put(blkg->parent);
-	if (blkg->q)
-		blk_put_queue(blkg->q);
+
+	if (q) {
+		if (!list_empty(&blkg->q_node))
+			list_del_init(&blkg->q_node);
+		mutex_unlock(&q->blkcg_mutex);
+		blk_put_queue(q);
+	}
+
 	free_percpu(blkg->iostat_cpu);
 	percpu_ref_exit(&blkg->refcnt);
 	kfree(blkg);
@@ -462,9 +472,14 @@ static void blkg_destroy(struct blkcg_gq *blkg)
 	lockdep_assert_held(&blkg->q->queue_lock);
 	lockdep_assert_held(&blkcg->lock);
 
-	/* Something wrong if we are trying to remove same group twice */
-	WARN_ON_ONCE(list_empty(&blkg->q_node));
-	WARN_ON_ONCE(hlist_unhashed(&blkg->blkcg_node));
+	/*
+	 * blkg is removed from queue list in blkg_free_workfn(), hence this
+	 * function can be called from blkcg_destroy_blkgs() first, and then
+	 * before blkg_free_workfn(), this function can be called again in
+	 * blkg_destroy_all().
+	 */
+	if (hlist_unhashed(&blkg->blkcg_node))
+		return;
 
 	for (i = 0; i < BLKCG_MAX_POLS; i++) {
 		struct blkcg_policy *pol = blkcg_policy[i];
@@ -478,8 +493,11 @@ static void blkg_destroy(struct blkcg_gq *blkg)
 
 	blkg->online = false;
 
+	/*
+	 * Delay deleting list blkg->q_node to blkg_free_workfn() to synchronize
+	 * pd_free_fn() from blkg_free_workfn() and blkcg_deactivate_policy().
+	 */
 	radix_tree_delete(&blkcg->blkg_tree, blkg->q->id);
-	list_del_init(&blkg->q_node);
 	hlist_del_init_rcu(&blkg->blkcg_node);
 
 	/*
@@ -1280,6 +1298,7 @@ int blkcg_init_disk(struct gendisk *disk)
 	int ret;
 
 	INIT_LIST_HEAD(&q->blkg_list);
+	mutex_init(&q->blkcg_mutex);
 
 	new_blkg = blkg_alloc(&blkcg_root, disk, GFP_KERNEL);
 	if (!new_blkg)
@@ -1520,6 +1539,7 @@ void blkcg_deactivate_policy(struct request_queue *q,
 	if (queue_is_mq(q))
 		blk_mq_freeze_queue(q);
 
+	mutex_lock(&q->blkcg_mutex);
 	spin_lock_irq(&q->queue_lock);
 
 	__clear_bit(pol->plid, q->blkcg_pols);
@@ -1538,6 +1558,7 @@ void blkcg_deactivate_policy(struct request_queue *q,
 	}
 
 	spin_unlock_irq(&q->queue_lock);
+	mutex_unlock(&q->blkcg_mutex);
 
 	if (queue_is_mq(q))
 		blk_mq_unfreeze_queue(q);
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index b87ed829ab94..53ae0a7fe377 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -485,6 +485,7 @@ struct request_queue {
 	DECLARE_BITMAP (blkcg_pols, BLKCG_MAX_POLS);
 	struct blkcg_gq *root_blkg;
 	struct list_head blkg_list;
+	struct mutex blkcg_mutex;
 #endif
 
 	struct queue_limits limits;
--
2.31.1

2023-01-18 17:16:00

by Tejun Heo

Subject: Re: [PATCH -next v2 1/3] blk-cgroup: dropping parent refcount after pd_free_fn() is done

On Wed, Jan 18, 2023 at 08:31:50PM +0800, Yu Kuai wrote:
> From: Yu Kuai <[email protected]>
>
> Some cgroup policies will access the parent pd through the child pd even
> after pd_offline_fn() is done. If pd_free_fn() for the parent is called
> before the child's, a UAF can be triggered. Hence it's better to
> guarantee the order of pd_free_fn().
>
> Currently the refcount of the parent blkg is dropped in __blkg_release(),
> which is before pd_free_fn() is called from blkg_free_workfn(), and
> blkg_free_workfn() runs asynchronously.
>
> This patch makes sure pd_free_fn() called from removing a cgroup is
> ordered by delaying dropping the parent refcount until after pd_free_fn()
> is called for the child.
>
> BTW, pd_free_fn() will also be called from blkcg_deactivate_policy() when
> deleting a device; the following patches will guarantee the order there
> as well.
>
> Signed-off-by: Yu Kuai <[email protected]>

Acked-by: Tejun Heo <[email protected]>

Thanks.

--
tejun

2023-01-18 17:21:59

by Tejun Heo

Subject: Re: [PATCH -next v2 3/3] blk-cgroup: synchronize pd_free_fn() from blkg_free_workfn() and blkcg_deactivate_policy()

Hello,

On Wed, Jan 18, 2023 at 08:31:52PM +0800, Yu Kuai wrote:
> From: Yu Kuai <[email protected]>
>
> Currently parent pd can be freed before child pd:
>
> t1: remove cgroup C1
>     blkcg_destroy_blkgs
>      blkg_destroy
>       list_del_init(&blkg->q_node)
>       // remove blkg from queue list
>       percpu_ref_kill(&blkg->refcnt)
>        blkg_release
>         call_rcu
>
> t2: from t1
>     __blkg_release
>      blkg_free
>       schedule_work
>                                 t4: deactivate policy
>                                     blkcg_deactivate_policy
>                                      pd_free_fn
>                                      // parent of C1 is freed first
> t3: from t2
>     blkg_free_workfn
>      pd_free_fn
>
> If a policy (for example, ioc_timer_fn() from iocost) accesses the parent
> pd from the child pd after pd_offline_fn(), a UAF can be triggered.
>
> Fix the problem by delaying 'list_del_init(&blkg->q_node)' from
> blkg_destroy() to blkg_free_workfn(), and use a new disk level mutex to
                                            ^
                                            using
>
> protect blkg_free_workfn() and blkcg_deactivate_policy).
  ^                                                     ^
  synchronize?                                          ()

> @@ -118,16 +118,26 @@ static void blkg_free_workfn(struct work_struct *work)
>  {
>  	struct blkcg_gq *blkg = container_of(work, struct blkcg_gq,
>  					     free_work);
> +	struct request_queue *q = blkg->q;
>  	int i;
>  
> +	if (q)
> +		mutex_lock(&q->blkcg_mutex);

A comment explaining what the above is synchronizing would be useful.

> +
>  	for (i = 0; i < BLKCG_MAX_POLS; i++)
>  		if (blkg->pd[i])
>  			blkcg_policy[i]->pd_free_fn(blkg->pd[i]);
>  
>  	if (blkg->parent)
>  		blkg_put(blkg->parent);
> -	if (blkg->q)
> -		blk_put_queue(blkg->q);
> +
> +	if (q) {
> +		if (!list_empty(&blkg->q_node))

We can drop the above if.

> +			list_del_init(&blkg->q_node);
> +		mutex_unlock(&q->blkcg_mutex);
> +		blk_put_queue(q);
> +	}
> +
>  	free_percpu(blkg->iostat_cpu);
>  	percpu_ref_exit(&blkg->refcnt);
>  	kfree(blkg);
> @@ -462,9 +472,14 @@ static void blkg_destroy(struct blkcg_gq *blkg)
>  	lockdep_assert_held(&blkg->q->queue_lock);
>  	lockdep_assert_held(&blkcg->lock);
>  
> -	/* Something wrong if we are trying to remove same group twice */
> -	WARN_ON_ONCE(list_empty(&blkg->q_node));
> -	WARN_ON_ONCE(hlist_unhashed(&blkg->blkcg_node));
> +	/*
> +	 * blkg is removed from queue list in blkg_free_workfn(), hence this
> +	 * function can be called from blkcg_destroy_blkgs() first, and then
> +	 * before blkg_free_workfn(), this function can be called again in
> +	 * blkg_destroy_all().

How about?

* blkg stays on the queue list until blkg_free_workfn(), hence this
* function can be called from blkcg_destroy_blkgs() first and again
* from blkg_destroy_all() before blkg_free_workfn().

> +	 */
> +	if (hlist_unhashed(&blkg->blkcg_node))
> +		return;
>  
>  	for (i = 0; i < BLKCG_MAX_POLS; i++) {
>  		struct blkcg_policy *pol = blkcg_policy[i];
> @@ -478,8 +493,11 @@ static void blkg_destroy(struct blkcg_gq *blkg)
>  
>  	blkg->online = false;
>  
> +	/*
> +	 * Delay deleting list blkg->q_node to blkg_free_workfn() to synchronize
> +	 * pd_free_fn() from blkg_free_workfn() and blkcg_deactivate_policy().
> +	 */

So, it'd be better to add a more comprehensive comment in blkg_free_workfn()
explaining why we need this synchronization and how it works and then point
to it from here.

Other than comments, it looks great to me. Thanks a lot for your patience
and seeing it through.

--
tejun
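
A note on "We can drop the above if": list_del_init() is safe to call on
a node that is already detached, because it always re-initializes the
node to point at itself. From include/linux/list.h:

	static inline void list_del_init(struct list_head *entry)
	{
		__list_del_entry(entry);
		INIT_LIST_HEAD(entry);
	}

After INIT_LIST_HEAD(), entry->prev == entry->next == entry, so a second
__list_del_entry() just rewrites those pointers with themselves, and the
list_empty() check is redundant.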

2023-01-18 17:26:24

by Tejun Heo

Subject: Re: [PATCH -next v2 2/3] blk-cgroup: support to track if policy is online

On Wed, Jan 18, 2023 at 08:31:51PM +0800, Yu Kuai wrote:
> From: Yu Kuai <[email protected]>
>
> A new field 'online' is added to blkg_policy_date to fix the following
                                                  ^
                                                  a
> 2 problems:
>
> 1) In blkcg_activate_policy(), if pd_alloc_fn() with 'GFP_NOWAIT'
> fails, 'queue_lock' will be dropped and pd_alloc_fn() will try again
> without 'GFP_NOWAIT'. In the meantime, removing the cgroup can race
> with it, and pd_offline_fn() will be called without pd_init_fn() and
> pd_online_fn(), as sketched below. This way a null-ptr-dereference can
> be triggered.
>
> 2) In order to synchronize pd_free_fn() from blkg_free_workfn() and
> blkcg_deactivate_policy(), 'list_del_init(&blkg->q_node)' will be
> delayed to blkg_free_workfn(), hence pd_offline_fn() can be called
> first in blkg_destroy(), and then blkcg_deactivate_policy() would
> call it again; we must prevent that.
>
> The new field 'online' will be set after pd_online_fn() and cleared
> after pd_offline_fn(); pd_offline_fn() will only be called if 'online'
> is set.
>
> Signed-off-by: Yu Kuai <[email protected]>

Acked-by: Tejun Heo <[email protected]>

Thanks.

--
tejun

2023-01-19 05:46:29

by Yu Kuai

[permalink] [raw]
Subject: Re: [PATCH -next v2 3/3] blk-cgroup: synchronize pd_free_fn() from blkg_free_workfn() and blkcg_deactivate_policy()

Hi,

On 2023/01/19 1:05, Tejun Heo wrote:
> Hello,
>
> On Wed, Jan 18, 2023 at 08:31:52PM +0800, Yu Kuai wrote:
>> From: Yu Kuai <[email protected]>
>>
>> Currently parent pd can be freed before child pd:
>>
>> t1: remove cgroup C1
>>     blkcg_destroy_blkgs
>>      blkg_destroy
>>       list_del_init(&blkg->q_node)
>>       // remove blkg from queue list
>>       percpu_ref_kill(&blkg->refcnt)
>>        blkg_release
>>         call_rcu
>>
>> t2: from t1
>>     __blkg_release
>>      blkg_free
>>       schedule_work
>>                                 t4: deactivate policy
>>                                     blkcg_deactivate_policy
>>                                      pd_free_fn
>>                                      // parent of C1 is freed first
>> t3: from t2
>>     blkg_free_workfn
>>      pd_free_fn
>>
>> If a policy (for example, ioc_timer_fn() from iocost) accesses the parent
>> pd from the child pd after pd_offline_fn(), a UAF can be triggered.
>>
>> Fix the problem by delaying 'list_del_init(&blkg->q_node)' from
>> blkg_destroy() to blkg_free_workfn(), and use a new disk level mutex to
>                                            ^
>                                            using
>
>> protect blkg_free_workfn() and blkcg_deactivate_policy).
>  ^                                                     ^
>  synchronize?                                          ()
>
>> @@ -118,16 +118,26 @@ static void blkg_free_workfn(struct work_struct *work)
>>  {
>>  	struct blkcg_gq *blkg = container_of(work, struct blkcg_gq,
>>  					     free_work);
>> +	struct request_queue *q = blkg->q;
>>  	int i;
>>  
>> +	if (q)
>> +		mutex_lock(&q->blkcg_mutex);
>
> A comment explaining what the above is synchronizing would be useful.
>
>> +
>>  	for (i = 0; i < BLKCG_MAX_POLS; i++)
>>  		if (blkg->pd[i])
>>  			blkcg_policy[i]->pd_free_fn(blkg->pd[i]);
>>  
>>  	if (blkg->parent)
>>  		blkg_put(blkg->parent);
>> -	if (blkg->q)
>> -		blk_put_queue(blkg->q);
>> +
>> +	if (q) {
>> +		if (!list_empty(&blkg->q_node))
>
> We can drop the above if.
>
>> +			list_del_init(&blkg->q_node);
>> +		mutex_unlock(&q->blkcg_mutex);
>> +		blk_put_queue(q);
>> +	}
>> +
>>  	free_percpu(blkg->iostat_cpu);
>>  	percpu_ref_exit(&blkg->refcnt);
>>  	kfree(blkg);
>> @@ -462,9 +472,14 @@ static void blkg_destroy(struct blkcg_gq *blkg)
>>  	lockdep_assert_held(&blkg->q->queue_lock);
>>  	lockdep_assert_held(&blkcg->lock);
>>  
>> -	/* Something wrong if we are trying to remove same group twice */
>> -	WARN_ON_ONCE(list_empty(&blkg->q_node));
>> -	WARN_ON_ONCE(hlist_unhashed(&blkg->blkcg_node));
>> +	/*
>> +	 * blkg is removed from queue list in blkg_free_workfn(), hence this
>> +	 * function can be called from blkcg_destroy_blkgs() first, and then
>> +	 * before blkg_free_workfn(), this function can be called again in
>> +	 * blkg_destroy_all().
>
> How about?
>
> * blkg stays on the queue list until blkg_free_workfn(), hence this
> * function can be called from blkcg_destroy_blkgs() first and again
> * from blkg_destroy_all() before blkg_free_workfn().
>
>> +	 */
>> +	if (hlist_unhashed(&blkg->blkcg_node))
>> +		return;
>>  
>>  	for (i = 0; i < BLKCG_MAX_POLS; i++) {
>>  		struct blkcg_policy *pol = blkcg_policy[i];
>> @@ -478,8 +493,11 @@ static void blkg_destroy(struct blkcg_gq *blkg)
>>  
>>  	blkg->online = false;
>>  
>> +	/*
>> +	 * Delay deleting list blkg->q_node to blkg_free_workfn() to synchronize
>> +	 * pd_free_fn() from blkg_free_workfn() and blkcg_deactivate_policy().
>> +	 */
>
> So, it'd be better to add a more comprehensive comment in blkg_free_workfn()
> explaining why we need this synchronization and how it works and then point
> to it from here.
>
> Other than comments, it looks great to me. Thanks a lot for your patience
> and seeing it through.
Thanks for the suggestions, I'll send a new patch based on your
suggestions.

Kuai
>