2018-09-11 18:45:12

by Dennis Zhou

Subject: [PATCH v3 00/12] block: always associate blkg and refcount cleanup

Hi everyone,

v3: a few minor fixes.
0003: Updated the comment to bio_associate_blkg to reflect closest
association.
Removed a return branch in __bio_lookup_create.
0009: Removed an unnecessary rcu_read_(un)lock pair.
0010: Fixed blkg null pointer... blkg->blkcg => blkcg.

This is rebased onto axboe#for-4.20/block 902d53914f64.

From v2 below (updated):
------
This is a followup to the patch series I sent out earlier [1] containing
the middle two points:
1. always associate a bio with a blkg
2. remove the extra css ref held by bios and utilize the blkg ref

The major difference in v2 is that failures on blkg creation and
association are handled more gracefully. Rather than relying on complex
logic to fall back to the root, failures walk up the blkg tree. This
seems more natural and less error prone given the many possible
failure scenarios.
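
To make the fallback concrete: the association helper added in patch
0003 simply walks up the tree until a tryget succeeds. Since the root
blkg holds a reference for the lifetime of the queue, the walk always
terminates. A condensed sketch (the full version is in patch 0003):

	static inline struct blkcg_gq *blkg_try_get_closest(struct blkcg_gq *blkg)
	{
		/* walk towards the root until a ref can be taken */
		while (!atomic_inc_not_zero(&blkg->refcnt))
			blkg = blkg->parent;

		return blkg;
	}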

Additionally, there are fixes for kbuild errors and for some key
details I overlooked in the first series that were pointed out in
review.

Modified from the first patchset:
First, both blk-throttle and blk-iolatency rely on blkg association
to enable their policies. Rather than have each policy (and future
policies) implement this logic independently, this series consolidates
it so that all bios are tagged with a blkg.

Second, with the addition of always having a blkg reference, the blkcg
can now be referenced through it rather than maintaining an additional
pointer and reference. So let's clean this up.

[1] https://lore.kernel.org/lkml/[email protected]/T

This patchset contains the following 12 patches:
0001-blkcg-fix-ref-count-issue-with-bio_blkcg-using-task_.patch
0002-blkcg-update-blkg_lookup_create-to-do-locking.patch
0003-blkcg-convert-blkg_lookup_create-to-find-closest-blk.patch
0004-blkcg-always-associate-a-bio-with-a-blkg.patch
0005-blkcg-consolidate-bio_issue_init-to-be-a-part-of-cor.patch
0006-blkcg-associate-a-blkg-for-pages-being-evicted-by-sw.patch
0007-blkcg-associate-writeback-bios-with-a-blkg.patch
0008-blkcg-remove-bio-bi_css-and-instead-use-bio-bi_blkg.patch
0009-blkcg-remove-additional-reference-to-the-css.patch
0010-blkcg-cleanup-and-make-blk_get_rl-use-blkg_lookup_cr.patch
0011-blkcg-change-blkg-reference-counting-to-use-percpu_r.patch
0012-blkcg-rename-blkg_try_get-to-blkg_tryget.patch

This patchset is on top of axboe#for-4.20/block 902d53914f64.

diffstats below:

Dennis Zhou (Facebook) (12):
blkcg: fix ref count issue with bio_blkcg using task_css
blkcg: update blkg_lookup_create to do locking
blkcg: convert blkg_lookup_create to find closest blkg
blkcg: always associate a bio with a blkg
blkcg: consolidate bio_issue_init to be a part of core
blkcg: associate a blkg for pages being evicted by swap
blkcg: associate writeback bios with a blkg
blkcg: remove bio->bi_css and instead use bio->bi_blkg
blkcg: remove additional reference to the css
blkcg: cleanup and make blk_get_rl use blkg_lookup_create
blkcg: change blkg reference counting to use percpu_ref
blkcg: rename blkg_try_get to blkg_tryget

Documentation/admin-guide/cgroup-v2.rst | 8 +-
block/bfq-cgroup.c | 4 +-
block/bfq-iosched.c | 2 +-
block/bio.c | 158 ++++++++++++++++--------
block/blk-cgroup.c | 123 ++++++++++++------
block/blk-iolatency.c | 26 +---
block/blk-throttle.c | 13 +-
block/bounce.c | 4 +-
block/cfq-iosched.c | 4 +-
drivers/block/loop.c | 5 +-
drivers/md/raid0.c | 2 +-
fs/buffer.c | 10 +-
fs/ext4/page-io.c | 2 +-
include/linux/bio.h | 23 ++--
include/linux/blk-cgroup.h | 145 +++++++++++++++-------
include/linux/blk_types.h | 1 -
include/linux/cgroup.h | 2 +
include/linux/writeback.h | 5 +-
kernel/cgroup/cgroup.c | 48 +++++--
kernel/trace/blktrace.c | 4 +-
mm/page_io.c | 2 +-
21 files changed, 381 insertions(+), 210 deletions(-)

Thanks,
Dennis


2018-09-11 18:42:27

by Dennis Zhou

Subject: [PATCH 11/12] blkcg: change blkg reference counting to use percpu_ref

From: "Dennis Zhou (Facebook)" <[email protected]>

Now that every bio is associated with a blkg, this puts the use of
blkg_get, blkg_try_get, and blkg_put on the hot path. This switches over
the refcnt in blkg to use percpu_ref.
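
For context, the percpu_ref lifecycle this patch adopts looks roughly
like the following sketch (condensed from the diff below):

	/* creation: the initial ref is taken by percpu_ref_init() */
	ret = percpu_ref_init(&blkg->refcnt, blkg_release, 0,
			      GFP_NOWAIT | __GFP_NOWARN);

	/* hot path: percpu operations, no shared cacheline bouncing */
	percpu_ref_get(&blkg->refcnt);		/* blkg_get() */
	percpu_ref_put(&blkg->refcnt);		/* blkg_put() */

	/* teardown: blkg_destroy() drops the initial ref; once the
	 * remaining refs drain, blkg_release() is invoked and frees
	 * the blkg after an RCU grace period.
	 */
	percpu_ref_kill(&blkg->refcnt);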

Signed-off-by: Dennis Zhou <[email protected]>
Acked-by: Tejun Heo <[email protected]>
---
block/blk-cgroup.c | 64 +++++++++++++++++++++++---------------
include/linux/blk-cgroup.h | 15 +++------
2 files changed, 44 insertions(+), 35 deletions(-)

diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
index e9e3a955f61a..ab3676e1e15e 100644
--- a/block/blk-cgroup.c
+++ b/block/blk-cgroup.c
@@ -84,6 +84,37 @@ static void blkg_free(struct blkcg_gq *blkg)
kfree(blkg);
}

+static void __blkg_release(struct rcu_head *rcu)
+{
+ struct blkcg_gq *blkg = container_of(rcu, struct blkcg_gq, rcu_head);
+
+ percpu_ref_exit(&blkg->refcnt);
+
+ /* release the blkcg and parent blkg refs this blkg has been holding */
+ css_put(&blkg->blkcg->css);
+ if (blkg->parent)
+ blkg_put(blkg->parent);
+
+ wb_congested_put(blkg->wb_congested);
+
+ blkg_free(blkg);
+}
+
+/*
+ * A group is RCU protected, but having an rcu lock does not mean that one
+ * can access all the fields of blkg and assume these are valid. For
+ * example, don't try to follow throtl_data and request queue links.
+ *
+ * Having a reference to blkg under an rcu allows accesses to only values
+ * local to groups like group stats and group rate limits.
+ */
+static void blkg_release(struct percpu_ref *ref)
+{
+ struct blkcg_gq *blkg = container_of(ref, struct blkcg_gq, refcnt);
+
+ call_rcu(&blkg->rcu_head, __blkg_release);
+}
+
/**
* blkg_alloc - allocate a blkg
* @blkcg: block cgroup the new blkg is associated with
@@ -110,7 +141,6 @@ static struct blkcg_gq *blkg_alloc(struct blkcg *blkcg, struct request_queue *q,
blkg->q = q;
INIT_LIST_HEAD(&blkg->q_node);
blkg->blkcg = blkcg;
- atomic_set(&blkg->refcnt, 1);

/* root blkg uses @q->root_rl, init rl only for !root blkgs */
if (blkcg != &blkcg_root) {
@@ -217,6 +247,11 @@ static struct blkcg_gq *blkg_create(struct blkcg *blkcg,
blkg_get(blkg->parent);
}

+ ret = percpu_ref_init(&blkg->refcnt, blkg_release, 0,
+ GFP_NOWAIT | __GFP_NOWARN);
+ if (ret)
+ goto err_cancel_ref;
+
/* invoke per-policy init */
for (i = 0; i < BLKCG_MAX_POLS; i++) {
struct blkcg_policy *pol = blkcg_policy[i];
@@ -249,6 +284,8 @@ static struct blkcg_gq *blkg_create(struct blkcg *blkcg,
blkg_put(blkg);
return ERR_PTR(ret);

+err_cancel_ref:
+ percpu_ref_exit(&blkg->refcnt);
err_put_congested:
wb_congested_put(wb_congested);
err_put_css:
@@ -387,7 +424,7 @@ static void blkg_destroy(struct blkcg_gq *blkg)
* Put the reference taken at the time of creation so that when all
* queues are gone, group can be destroyed.
*/
- blkg_put(blkg);
+ percpu_ref_kill(&blkg->refcnt);
}

/**
@@ -414,29 +451,6 @@ static void blkg_destroy_all(struct request_queue *q)
q->root_rl.blkg = NULL;
}

-/*
- * A group is RCU protected, but having an rcu lock does not mean that one
- * can access all the fields of blkg and assume these are valid. For
- * example, don't try to follow throtl_data and request queue links.
- *
- * Having a reference to blkg under an rcu allows accesses to only values
- * local to groups like group stats and group rate limits.
- */
-void __blkg_release_rcu(struct rcu_head *rcu_head)
-{
- struct blkcg_gq *blkg = container_of(rcu_head, struct blkcg_gq, rcu_head);
-
- /* release the blkcg and parent blkg refs this blkg has been holding */
- css_put(&blkg->blkcg->css);
- if (blkg->parent)
- blkg_put(blkg->parent);
-
- wb_congested_put(blkg->wb_congested);
-
- blkg_free(blkg);
-}
-EXPORT_SYMBOL_GPL(__blkg_release_rcu);
-
/*
* The next function used by blk_queue_for_each_rl(). It's a bit tricky
* because the root blkg uses @q->root_rl instead of its own rl.
diff --git a/include/linux/blk-cgroup.h b/include/linux/blk-cgroup.h
index d2f7f1b00fcf..7ff5d8ba8c7a 100644
--- a/include/linux/blk-cgroup.h
+++ b/include/linux/blk-cgroup.h
@@ -126,7 +126,7 @@ struct blkcg_gq {
struct request_list rl;

/* reference count */
- atomic_t refcnt;
+ struct percpu_ref refcnt;

/* is this blkg online? protected by both blkcg and q locks */
bool online;
@@ -490,8 +490,7 @@ static inline int blkg_path(struct blkcg_gq *blkg, char *buf, int buflen)
*/
static inline void blkg_get(struct blkcg_gq *blkg)
{
- WARN_ON_ONCE(atomic_read(&blkg->refcnt) <= 0);
- atomic_inc(&blkg->refcnt);
+ percpu_ref_get(&blkg->refcnt);
}

/**
@@ -503,7 +502,7 @@ static inline void blkg_get(struct blkcg_gq *blkg)
*/
static inline struct blkcg_gq *blkg_try_get(struct blkcg_gq *blkg)
{
- if (atomic_inc_not_zero(&blkg->refcnt))
+ if (percpu_ref_tryget(&blkg->refcnt))
return blkg;
return NULL;
}
@@ -517,23 +516,19 @@ static inline struct blkcg_gq *blkg_try_get(struct blkcg_gq *blkg)
*/
static inline struct blkcg_gq *blkg_try_get_closest(struct blkcg_gq *blkg)
{
- while (!atomic_inc_not_zero(&blkg->refcnt))
+ while (!percpu_ref_tryget(&blkg->refcnt))
blkg = blkg->parent;

return blkg;
}

-void __blkg_release_rcu(struct rcu_head *rcu);
-
/**
* blkg_put - put a blkg reference
* @blkg: blkg to put
*/
static inline void blkg_put(struct blkcg_gq *blkg)
{
- WARN_ON_ONCE(atomic_read(&blkg->refcnt) <= 0);
- if (atomic_dec_and_test(&blkg->refcnt))
- call_rcu(&blkg->rcu_head, __blkg_release_rcu);
+ percpu_ref_put(&blkg->refcnt);
}

/**
--
2.17.1


2018-09-11 18:42:40

by Dennis Zhou

Subject: [PATCH 12/12] blkcg: rename blkg_try_get to blkg_tryget

From: "Dennis Zhou (Facebook)" <[email protected]>

blkg reference counting now uses percpu_ref rather than atomic_t. Let's
make this consistent with css_tryget. This renames blkg_try_get to
blkg_tryget, which now returns a bool rather than the blkg or NULL.
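
For callers the conversion is mechanical; a sketch of the before and
after (see the blk-cgroup.c hunk below):

	/* before: returns the blkg or NULL */
	blkg = blkg_try_get(blkg);
	if (!blkg)
		goto out;

	/* after: returns a bool, mirroring css_tryget() */
	if (!blkg_tryget(blkg))
		goto out;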

Signed-off-by: Dennis Zhou <[email protected]>
Reviewed-by: Josef Bacik <[email protected]>
Acked-by: Tejun Heo <[email protected]>
---
block/bio.c | 2 +-
block/blk-cgroup.c | 3 +--
block/blk-iolatency.c | 2 +-
include/linux/blk-cgroup.h | 14 ++++++--------
4 files changed, 9 insertions(+), 12 deletions(-)

diff --git a/block/bio.c b/block/bio.c
index a65ab3bacb4f..14351fac07cd 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -1975,7 +1975,7 @@ int bio_associate_blkg(struct bio *bio, struct blkcg_gq *blkg)
{
if (unlikely(bio->bi_blkg))
return -EBUSY;
- bio->bi_blkg = blkg_try_get_closest(blkg);
+ bio->bi_blkg = blkg_tryget_closest(blkg);
return 0;
}

diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
index ab3676e1e15e..76136bea7a7f 100644
--- a/block/blk-cgroup.c
+++ b/block/blk-cgroup.c
@@ -1794,8 +1794,7 @@ void blkcg_maybe_throttle_current(void)
blkg = blkg_lookup(blkcg, q);
if (!blkg)
goto out;
- blkg = blkg_try_get(blkg);
- if (!blkg)
+ if (!blkg_tryget(blkg))
goto out;
rcu_read_unlock();

diff --git a/block/blk-iolatency.c b/block/blk-iolatency.c
index 9d7052bad6f7..5a4cec54c998 100644
--- a/block/blk-iolatency.c
+++ b/block/blk-iolatency.c
@@ -628,7 +628,7 @@ static void blkiolatency_timer_fn(struct timer_list *t)
* We could be exiting, don't access the pd unless we have a
* ref on the blkg.
*/
- if (!blkg_try_get(blkg))
+ if (!blkg_tryget(blkg))
continue;

iolat = blkg_to_lat(blkg);
diff --git a/include/linux/blk-cgroup.h b/include/linux/blk-cgroup.h
index 7ff5d8ba8c7a..b7fd08013de2 100644
--- a/include/linux/blk-cgroup.h
+++ b/include/linux/blk-cgroup.h
@@ -494,27 +494,25 @@ static inline void blkg_get(struct blkcg_gq *blkg)
}

/**
- * blkg_try_get - try and get a blkg reference
+ * blkg_tryget - try and get a blkg reference
* @blkg: blkg to get
*
* This is for use when doing an RCU lookup of the blkg. We may be in the midst
* of freeing this blkg, so we can only use it if the refcnt is not zero.
*/
-static inline struct blkcg_gq *blkg_try_get(struct blkcg_gq *blkg)
+static inline bool blkg_tryget(struct blkcg_gq *blkg)
{
- if (percpu_ref_tryget(&blkg->refcnt))
- return blkg;
- return NULL;
+ return percpu_ref_tryget(&blkg->refcnt);
}

/**
- * blkg_try_get_closest - try and get a blkg ref on the closest blkg
+ * blkg_tryget_closest - try and get a blkg ref on the closest blkg
* @blkg: blkg to get
*
* This walks up the blkg tree to find the closest non-dying blkg and returns
* the blkg it associated with, which may not be the blkg passed in.
*/
-static inline struct blkcg_gq *blkg_try_get_closest(struct blkcg_gq *blkg)
+static inline struct blkcg_gq *blkg_tryget_closest(struct blkcg_gq *blkg)
{
while (!percpu_ref_tryget(&blkg->refcnt))
blkg = blkg->parent;
@@ -599,7 +597,7 @@ static inline struct request_list *blk_get_rl(struct request_queue *q,
if (unlikely(!blkg))
blkg = __blkg_lookup_create(blkcg, q);

- if (!blkg_try_get(blkg))
+ if (!blkg_tryget(blkg))
goto rl_use_root;

rcu_read_unlock();
--
2.17.1


2018-09-11 18:42:48

by Dennis Zhou

Subject: [PATCH 09/12] blkcg: remove additional reference to the css

From: "Dennis Zhou (Facebook)" <[email protected]>

The previous patch in this series removed carrying around a pointer to
the css in the bio. However, the blkg association logic still relied on
taking a reference on the css to ensure we wouldn't fail in getting a
reference for the blkg.

Here the implicit dependency on the css is removed. The association
continues to rely on the tryget logic walking up the blkg tree. This
streamlines the three ways that association can happen: normal, swap,
and writeback.
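
The resulting call structure, condensed from the block/bio.c hunks
below (all three paths end up in the same internal helper):

	/* normal: current task's css, resolved under RCU */
	bio_associate_create_blkg(q, bio)
		__bio_associate_blkg_from_css(bio, blkcg_css());

	/* writeback: css supplied by the writeback context */
	bio_associate_blkg_from_css(bio, css)
		__bio_associate_blkg_from_css(bio, css);

	/* swap: effective css of the page's owning memcg, no css ref held */
	bio_associate_blkg_from_page(bio, page)
		__bio_associate_blkg_from_css(bio,
			cgroup_e_css(page->mem_cgroup->css.cgroup,
				     &io_cgrp_subsys));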

Signed-off-by: Dennis Zhou <[email protected]>
---
block/bio.c | 62 ++++++++++++++++++++++----------------
include/linux/blk-cgroup.h | 52 +++-----------------------------
include/linux/cgroup.h | 2 ++
kernel/cgroup/cgroup.c | 48 +++++++++++++++++++++++------
4 files changed, 81 insertions(+), 83 deletions(-)

diff --git a/block/bio.c b/block/bio.c
index f32b4757a232..a65ab3bacb4f 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -1979,18 +1979,30 @@ int bio_associate_blkg(struct bio *bio, struct blkcg_gq *blkg)
return 0;
}

+/**
+ * __bio_associate_blkg_from_css - internal blkg association function
+ *
+ * This in the core association function that all association paths rely on.
+ * A blkg reference is taken which is released upon freeing of the bio.
+ */
static int __bio_associate_blkg_from_css(struct bio *bio,
struct cgroup_subsys_state *css)
{
+ struct request_queue *q = bio->bi_disk->queue;
struct blkcg_gq *blkg;
+ int ret;

rcu_read_lock();

- blkg = blkg_lookup_create(css_to_blkcg(css), bio->bi_disk->queue);
+ if (!css || !css->parent)
+ blkg = q->root_blkg;
+ else
+ blkg = blkg_lookup_create(css_to_blkcg(css), q);

- rcu_read_unlock();
+ ret = bio_associate_blkg(bio, blkg);

- return bio_associate_blkg(bio, blkg);
+ rcu_read_unlock();
+ return ret;
}

/**
@@ -1999,13 +2011,14 @@ static int __bio_associate_blkg_from_css(struct bio *bio,
* @css: target css
*
* Associate @bio with the blkg found by combining the css's blkg and the
- * request_queue of the @bio. This takes a reference on the css that will
- * be put upon freeing of @bio.
+ * request_queue of the @bio. This falls back to the queue's root_blkg if
+ * the association fails with the css.
*/
int bio_associate_blkg_from_css(struct bio *bio,
struct cgroup_subsys_state *css)
{
- css_get(css);
+ if (unlikely(bio->bi_blkg))
+ return -EBUSY;
return __bio_associate_blkg_from_css(bio, css);
}
EXPORT_SYMBOL_GPL(bio_associate_blkg_from_css);
@@ -2017,22 +2030,29 @@ EXPORT_SYMBOL_GPL(bio_associate_blkg_from_css);
* @page: the page to lookup the blkcg from
*
* Associate @bio with the blkg from @page's owning memcg and the respective
- * request_queue. This works like every other associate function wrt
- * references.
+ * request_queue. If cgroup_e_css returns NULL, fall back to the queue's
+ * root_blkg.
*
* Note: this must be called after bio has an associated device.
*/
int bio_associate_blkg_from_page(struct bio *bio, struct page *page)
{
struct cgroup_subsys_state *css;
+ int ret;

if (unlikely(bio->bi_blkg))
return -EBUSY;
if (!page->mem_cgroup)
return 0;
- css = cgroup_get_e_css(page->mem_cgroup->css.cgroup, &io_cgrp_subsys);

- return __bio_associate_blkg_from_css(bio, css);
+ rcu_read_lock();
+
+ css = cgroup_e_css(page->mem_cgroup->css.cgroup, &io_cgrp_subsys);
+
+ ret = __bio_associate_blkg_from_css(bio, css);
+
+ rcu_read_unlock();
+ return ret;
}
#endif /* CONFIG_MEMCG */

@@ -2042,12 +2062,12 @@ int bio_associate_blkg_from_page(struct bio *bio, struct page *page)
* @bio: target bio
*
* Associate @bio with the blkg found from the bio's css and the request_queue.
- * If one is not found, blkg_lookup_create creates the blkg.
+ * If one is not found, blkg_lookup_create creates the blkg. This falls back to
+ * the queue's root_blkg if association fails.
*/
int bio_associate_create_blkg(struct request_queue *q, struct bio *bio)
{
- struct blkcg *blkcg;
- struct blkcg_gq *blkg;
+ struct cgroup_subsys_state *css;
int ret = 0;

/* someone has already associated this bio with a blkg */
@@ -2056,15 +2076,9 @@ int bio_associate_create_blkg(struct request_queue *q, struct bio *bio)

rcu_read_lock();

- blkcg = css_to_blkcg(blkcg_get_css());
+ css = blkcg_css();

- if (!blkcg->css.parent) {
- ret = bio_associate_blkg(bio, q->root_blkg);
- } else {
- blkg = blkg_lookup_create(blkcg, q);
-
- ret = bio_associate_blkg(bio, blkg);
- }
+ ret = __bio_associate_blkg_from_css(bio, css);

rcu_read_unlock();
return ret;
@@ -2081,8 +2095,6 @@ void bio_disassociate_task(struct bio *bio)
bio->bi_ioc = NULL;
}
if (bio->bi_blkg) {
- /* a ref is always taken on css */
- css_put(&bio_blkcg(bio)->css);
blkg_put(bio->bi_blkg);
bio->bi_blkg = NULL;
}
@@ -2095,10 +2107,8 @@ void bio_disassociate_task(struct bio *bio)
*/
void bio_clone_blkg_association(struct bio *dst, struct bio *src)
{
- if (src->bi_blkg) {
- css_get(&bio_blkcg(src)->css);
+ if (src->bi_blkg)
bio_associate_blkg(dst, src->bi_blkg);
- }
}
EXPORT_SYMBOL_GPL(bio_clone_blkg_association);
#endif /* CONFIG_BLK_CGROUP */
diff --git a/include/linux/blk-cgroup.h b/include/linux/blk-cgroup.h
index c41cfcc2b4d8..2951ea3541b1 100644
--- a/include/linux/blk-cgroup.h
+++ b/include/linux/blk-cgroup.h
@@ -249,47 +249,6 @@ static inline struct cgroup_subsys_state *blkcg_css(void)
return task_css(current, io_cgrp_id);
}

-/**
- * blkcg_get_css - find and get a reference to the css
- *
- * Find the css associated with either the kthread or the current task.
- * This takes a reference on the blkcg which will need to be managed by the
- * caller.
- */
-static inline struct cgroup_subsys_state *blkcg_get_css(void)
-{
- struct cgroup_subsys_state *css;
-
- rcu_read_lock();
-
- css = kthread_blkcg();
- if (css) {
- css_get(css);
- } else {
- /*
- * This is a bit complicated. It is possible task_css is seeing
- * an old css pointer here. This is caused by the current
- * thread migrating away from this cgroup and this cgroup dying.
- * css_tryget() will fail when trying to take a ref on a cgroup
- * that's ref count has hit 0.
- *
- * Therefore, if it does fail, this means current must have
- * been swapped away already and this is waiting for it to
- * propagate on the polling cpu. Hence the use of cpu_relax().
- */
- while (true) {
- css = task_css(current, io_cgrp_id);
- if (likely(css_tryget(css)))
- break;
- cpu_relax();
- }
- }
-
- rcu_read_unlock();
-
- return css;
-}
-
static inline struct blkcg *css_to_blkcg(struct cgroup_subsys_state *css)
{
return css ? container_of(css, struct blkcg, css) : NULL;
@@ -628,10 +587,8 @@ static inline struct request_list *blk_get_rl(struct request_queue *q,
rcu_read_lock();

blkcg = bio_blkcg(bio);
- if (blkcg)
- css_get(&blkcg->css);
- else
- blkcg = css_to_blkcg(blkcg_get_css());
+ if (!blkcg)
+ blkcg = css_to_blkcg(blkcg_css());

/* bypass blkg lookup and use @q->root_rl directly for root */
if (blkcg == &blkcg_root)
@@ -646,7 +603,8 @@ static inline struct request_list *blk_get_rl(struct request_queue *q,
if (unlikely(!blkg))
goto root_rl;

- blkg_get(blkg);
+ if (!blkg_try_get(blkg))
+ goto root_rl;
rcu_read_unlock();
return &blkg->rl;
root_rl:
@@ -663,8 +621,6 @@ static inline struct request_list *blk_get_rl(struct request_queue *q,
*/
static inline void blk_put_rl(struct request_list *rl)
{
- /* an additional ref is always taken for rl */
- css_put(&rl->blkg->blkcg->css);
if (rl->blkg->blkcg != &blkcg_root)
blkg_put(rl->blkg);
}
diff --git a/include/linux/cgroup.h b/include/linux/cgroup.h
index 32c553556bbd..b8bcbdeb2eac 100644
--- a/include/linux/cgroup.h
+++ b/include/linux/cgroup.h
@@ -93,6 +93,8 @@ extern struct css_set init_css_set;

bool css_has_online_children(struct cgroup_subsys_state *css);
struct cgroup_subsys_state *css_from_id(int id, struct cgroup_subsys *ss);
+struct cgroup_subsys_state *cgroup_e_css(struct cgroup *cgroup,
+ struct cgroup_subsys *ss);
struct cgroup_subsys_state *cgroup_get_e_css(struct cgroup *cgroup,
struct cgroup_subsys *ss);
struct cgroup_subsys_state *css_tryget_online_from_dir(struct dentry *dentry,
diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
index aae10baf1902..48fb22e49467 100644
--- a/kernel/cgroup/cgroup.c
+++ b/kernel/cgroup/cgroup.c
@@ -492,7 +492,7 @@ static struct cgroup_subsys_state *cgroup_tryget_css(struct cgroup *cgrp,
}

/**
- * cgroup_e_css - obtain a cgroup's effective css for the specified subsystem
+ * cgroup_e_css_by_mask - obtain a cgroup's effective css for the specified ss
* @cgrp: the cgroup of interest
* @ss: the subsystem of interest (%NULL returns @cgrp->self)
*
@@ -501,8 +501,8 @@ static struct cgroup_subsys_state *cgroup_tryget_css(struct cgroup *cgrp,
* enabled. If @ss is associated with the hierarchy @cgrp is on, this
* function is guaranteed to return non-NULL css.
*/
-static struct cgroup_subsys_state *cgroup_e_css(struct cgroup *cgrp,
- struct cgroup_subsys *ss)
+static struct cgroup_subsys_state *cgroup_e_css_by_mask(struct cgroup *cgrp,
+ struct cgroup_subsys *ss)
{
lockdep_assert_held(&cgroup_mutex);

@@ -522,6 +522,35 @@ static struct cgroup_subsys_state *cgroup_e_css(struct cgroup *cgrp,
return cgroup_css(cgrp, ss);
}

+/**
+ * cgroup_e_css - obtain a cgroup's effective css for the specified subsystem
+ * @cgrp: the cgroup of interest
+ * @ss: the subsystem of interest
+ *
+ * Find and get the effective css of @cgrp for @ss. The effective css is
+ * defined as the matching css of the nearest ancestor including self which
+ * has @ss enabled. If @ss is not mounted on the hierarchy @cgrp is on,
+ * the root css is returned, so this function always returns a valid css.
+ *
+ * The returned css is not guaranteed to be online, and therefore it is the
+ * caller's responsibility to tryget a reference for it.
+ */
+struct cgroup_subsys_state *cgroup_e_css(struct cgroup *cgrp,
+ struct cgroup_subsys *ss)
+{
+ struct cgroup_subsys_state *css;
+
+ do {
+ css = cgroup_css(cgrp, ss);
+
+ if (css)
+ return css;
+ cgrp = cgroup_parent(cgrp);
+ } while (cgrp);
+
+ return init_css_set.subsys[ss->id];
+}
+
/**
* cgroup_get_e_css - get a cgroup's effective css for the specified subsystem
* @cgrp: the cgroup of interest
@@ -604,10 +633,11 @@ EXPORT_SYMBOL_GPL(of_css);
*
* Should be called under cgroup_[tree_]mutex.
*/
-#define for_each_e_css(css, ssid, cgrp) \
- for ((ssid) = 0; (ssid) < CGROUP_SUBSYS_COUNT; (ssid)++) \
- if (!((css) = cgroup_e_css(cgrp, cgroup_subsys[(ssid)]))) \
- ; \
+#define for_each_e_css(css, ssid, cgrp) \
+ for ((ssid) = 0; (ssid) < CGROUP_SUBSYS_COUNT; (ssid)++) \
+ if (!((css) = cgroup_e_css_by_mask(cgrp, \
+ cgroup_subsys[(ssid)]))) \
+ ; \
else

/**
@@ -1006,7 +1036,7 @@ static struct css_set *find_existing_css_set(struct css_set *old_cset,
* @ss is in this hierarchy, so we want the
* effective css from @cgrp.
*/
- template[i] = cgroup_e_css(cgrp, ss);
+ template[i] = cgroup_e_css_by_mask(cgrp, ss);
} else {
/*
* @ss is not in this hierarchy, so we don't want
@@ -3019,7 +3049,7 @@ static int cgroup_apply_control(struct cgroup *cgrp)
return ret;

/*
- * At this point, cgroup_e_css() results reflect the new csses
+ * At this point, cgroup_e_css_by_mask() results reflect the new csses
* making the following cgroup_update_dfl_csses() properly update
* css associations of all tasks in the subtree.
*/
--
2.17.1


2018-09-11 18:42:56

by Dennis Zhou

Subject: [PATCH 10/12] blkcg: cleanup and make blk_get_rl use blkg_lookup_create

From: "Dennis Zhou (Facebook)" <[email protected]>

blk_get_rl is responsible for identifying which request_list a request
should be allocated to. Tryget logic was added earlier, but the
semantics were left unchanged.

This patch makes better use of the bio already holding a reference to
the blkg in the hot path. The cold path now uses the better fallback of
blkg_lookup_create rather than just blkg_lookup before giving up and
using q->root_rl. Because __blkg_lookup_create now returns the closest
blkg rather than an error, the fallback to q->root_rl only happens when
the blkg reference cannot be taken.

A clarifying comment is added to explain why q->root_rl is used rather
than the root blkg's rl.
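
The resulting decision flow, condensed from the hunk below (RCU
locking elided):

	if (bio && bio->bi_blkg) {			/* hot path */
		if (bio->bi_blkg->blkcg == &blkcg_root)
			return &q->root_rl;
		blkg_get(bio->bi_blkg);			/* bio already holds a ref */
		return &bio->bi_blkg->rl;
	}
	/* cold path: resolve the blkcg, then look up or create the blkg */
	blkg = blkg_lookup(blkcg, q);
	if (unlikely(!blkg))
		blkg = __blkg_lookup_create(blkcg, q);
	if (!blkg_try_get(blkg))			/* blkg dying */
		return &q->root_rl;
	return &blkg->rl;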

Signed-off-by: Dennis Zhou <[email protected]>
Acked-by: Tejun Heo <[email protected]>
---
include/linux/blk-cgroup.h | 34 +++++++++++++++++++++-------------
1 file changed, 21 insertions(+), 13 deletions(-)

diff --git a/include/linux/blk-cgroup.h b/include/linux/blk-cgroup.h
index 2951ea3541b1..d2f7f1b00fcf 100644
--- a/include/linux/blk-cgroup.h
+++ b/include/linux/blk-cgroup.h
@@ -586,28 +586,36 @@ static inline struct request_list *blk_get_rl(struct request_queue *q,

rcu_read_lock();

- blkcg = bio_blkcg(bio);
- if (!blkcg)
- blkcg = css_to_blkcg(blkcg_css());
+ if (bio && bio->bi_blkg) {
+ blkcg = bio->bi_blkg->blkcg;
+ if (blkcg == &blkcg_root)
+ goto rl_use_root;
+
+ blkg_get(bio->bi_blkg);
+ rcu_read_unlock();
+ return &bio->bi_blkg->rl;
+ }

- /* bypass blkg lookup and use @q->root_rl directly for root */
+ blkcg = css_to_blkcg(blkcg_css());
if (blkcg == &blkcg_root)
- goto root_rl;
+ goto rl_use_root;

- /*
- * Try to use blkg->rl. blkg lookup may fail under memory pressure
- * or if either the blkcg or queue is going away. Fall back to
- * root_rl in such cases.
- */
blkg = blkg_lookup(blkcg, q);
if (unlikely(!blkg))
- goto root_rl;
+ blkg = __blkg_lookup_create(blkcg, q);

if (!blkg_try_get(blkg))
- goto root_rl;
+ goto rl_use_root;
+
rcu_read_unlock();
return &blkg->rl;
-root_rl:
+
+ /*
+ * Each blkg has its own request_list, however, the root blkcg
+ * uses the request_queue's root_rl. This is to avoid most
+ * overhead for the root blkcg.
+ */
+rl_use_root:
rcu_read_unlock();
return &q->root_rl;
}
--
2.17.1


2018-09-11 18:43:16

by Dennis Zhou

Subject: [PATCH 04/12] blkcg: always associate a bio with a blkg

From: "Dennis Zhou (Facebook)" <[email protected]>

Previously, blkgs were only assigned as needed by blk-iolatency and
blk-throttle. bio->bi_css was also always being associated while the
blkg was being looked up and then thrown away in blkcg_bio_issue_check.

This patch begins the cleanup of bio->bi_css and bio->bi_blkg by always
associating a blkg in blkcg_bio_issue_check. It tries to create the
blkg, but if that is not possible, falls back to using the root_blkg of
the request_queue. Therefore, a bio will always be associated with a
blkg. The duplicate association logic is removed from blk-throttle and
blk-iolatency.
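
With this in place, blkcg_bio_issue_check reduces to the sketch below
(condensed from the blk-cgroup.h hunk in this patch):

	rcu_read_lock();
	bio_associate_create_blkg(q, bio);	/* never leaves bi_blkg NULL */
	blkg = bio->bi_blkg;			/* at worst q->root_blkg */

	throtl = blk_throtl_bio(q, blkg, bio);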

Signed-off-by: Dennis Zhou <[email protected]>
Acked-by: Tejun Heo <[email protected]>
---
block/bio.c | 38 ++++++++++++++++++++++++++++++++++++++
block/blk-iolatency.c | 24 ++----------------------
block/blk-throttle.c | 5 +----
include/linux/bio.h | 3 +++
include/linux/blk-cgroup.h | 16 ++--------------
5 files changed, 46 insertions(+), 40 deletions(-)

diff --git a/block/bio.c b/block/bio.c
index 49a28abd9772..97c1c4bf8df6 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -2029,6 +2029,41 @@ int bio_associate_blkg(struct bio *bio, struct blkcg_gq *blkg)
return 0;
}

+/**
+ * bio_associate_create_blkg - associate a bio with a blkg from q
+ * @q: request_queue where bio is going
+ * @bio: target bio
+ *
+ * Associate @bio with the blkg found from the bio's css and the request_queue.
+ * If one is not found, blkg_lookup_create creates the blkg.
+ */
+int bio_associate_create_blkg(struct request_queue *q, struct bio *bio)
+{
+ struct blkcg *blkcg;
+ struct blkcg_gq *blkg;
+ int ret = 0;
+
+ /* someone has already associated this bio with a blkg */
+ if (bio->bi_blkg)
+ return ret;
+
+ rcu_read_lock();
+
+ bio_associate_blkcg(bio, NULL);
+ blkcg = bio_blkcg(bio);
+
+ if (!blkcg->css.parent) {
+ ret = bio_associate_blkg(bio, q->root_blkg);
+ } else {
+ blkg = blkg_lookup_create(blkcg, q);
+
+ ret = bio_associate_blkg(bio, blkg);
+ }
+
+ rcu_read_unlock();
+ return ret;
+}
+
/**
* bio_disassociate_task - undo bio_associate_current()
* @bio: target bio
@@ -2058,6 +2093,9 @@ void bio_clone_blkcg_association(struct bio *dst, struct bio *src)
{
if (src->bi_css)
WARN_ON(bio_associate_blkcg(dst, src->bi_css));
+
+ if (src->bi_blkg)
+ bio_associate_blkg(dst, src->bi_blkg);
}
EXPORT_SYMBOL_GPL(bio_clone_blkcg_association);
#endif /* CONFIG_BLK_CGROUP */
diff --git a/block/blk-iolatency.c b/block/blk-iolatency.c
index 22b2ff0440cc..79a7549e2062 100644
--- a/block/blk-iolatency.c
+++ b/block/blk-iolatency.c
@@ -395,34 +395,14 @@ static void blkcg_iolatency_throttle(struct rq_qos *rqos, struct bio *bio,
spinlock_t *lock)
{
struct blk_iolatency *blkiolat = BLKIOLATENCY(rqos);
- struct blkcg *blkcg;
- struct blkcg_gq *blkg;
- struct request_queue *q = rqos->q;
+ struct blkcg_gq *blkg = bio->bi_blkg;
bool issue_as_root = bio_issue_as_root_blkg(bio);

if (!blk_iolatency_enabled(blkiolat))
return;

- rcu_read_lock();
- bio_associate_blkcg(bio, NULL);
- blkcg = bio_blkcg(bio);
- blkg = blkg_lookup(blkcg, q);
- if (unlikely(!blkg)) {
- if (!lock)
- spin_lock_irq(q->queue_lock);
- blkg = __blkg_lookup_create(blkcg, q);
- if (IS_ERR(blkg))
- blkg = NULL;
- if (!lock)
- spin_unlock_irq(q->queue_lock);
- }
- if (!blkg)
- goto out;
-
bio_issue_init(&bio->bi_issue, bio_sectors(bio));
- bio_associate_blkg(bio, blkg);
-out:
- rcu_read_unlock();
+
while (blkg && blkg->parent) {
struct iolatency_grp *iolat = blkg_to_lat(blkg);
if (!iolat) {
diff --git a/block/blk-throttle.c b/block/blk-throttle.c
index 01d0620a4e4a..b7b5cc4defc2 100644
--- a/block/blk-throttle.c
+++ b/block/blk-throttle.c
@@ -2129,9 +2129,6 @@ static inline void throtl_update_latency_buckets(struct throtl_data *td)
static void blk_throtl_assoc_bio(struct throtl_grp *tg, struct bio *bio)
{
#ifdef CONFIG_BLK_DEV_THROTTLING_LOW
- /* fallback to root_blkg if we fail to get a blkg ref */
- if (bio->bi_css && (bio_associate_blkg(bio, tg_to_blkg(tg)) == -ENODEV))
- bio_associate_blkg(bio, bio->bi_disk->queue->root_blkg);
bio_issue_init(&bio->bi_issue, bio_sectors(bio));
#endif
}
@@ -2140,7 +2137,7 @@ bool blk_throtl_bio(struct request_queue *q, struct blkcg_gq *blkg,
struct bio *bio)
{
struct throtl_qnode *qn = NULL;
- struct throtl_grp *tg = blkg_to_tg(blkg ?: q->root_blkg);
+ struct throtl_grp *tg = blkg_to_tg(blkg);
struct throtl_service_queue *sq;
bool rw = bio_data_dir(bio);
bool throttled = false;
diff --git a/include/linux/bio.h b/include/linux/bio.h
index 14b4fa266357..829cd0bb407d 100644
--- a/include/linux/bio.h
+++ b/include/linux/bio.h
@@ -542,11 +542,14 @@ static inline int bio_associate_blkcg_from_page(struct bio *bio,
#ifdef CONFIG_BLK_CGROUP
int bio_associate_blkcg(struct bio *bio, struct cgroup_subsys_state *blkcg_css);
int bio_associate_blkg(struct bio *bio, struct blkcg_gq *blkg);
+int bio_associate_create_blkg(struct request_queue *q, struct bio *bio);
void bio_disassociate_task(struct bio *bio);
void bio_clone_blkcg_association(struct bio *dst, struct bio *src);
#else /* CONFIG_BLK_CGROUP */
static inline int bio_associate_blkcg(struct bio *bio,
struct cgroup_subsys_state *blkcg_css) { return 0; }
+static inline int bio_associate_create_blkg(struct request_queue *q,
+ struct bio *bio) { return 0; }
static inline void bio_disassociate_task(struct bio *bio) { }
static inline void bio_clone_blkcg_association(struct bio *dst,
struct bio *src) { }
diff --git a/include/linux/blk-cgroup.h b/include/linux/blk-cgroup.h
index 1fbff1bbb651..6e33ad1d92b4 100644
--- a/include/linux/blk-cgroup.h
+++ b/include/linux/blk-cgroup.h
@@ -900,29 +900,17 @@ static inline bool blk_throtl_bio(struct request_queue *q, struct blkcg_gq *blkg
static inline bool blkcg_bio_issue_check(struct request_queue *q,
struct bio *bio)
{
- struct blkcg *blkcg;
struct blkcg_gq *blkg;
bool throtl = false;

rcu_read_lock();

- /* associate blkcg if bio hasn't attached one */
- bio_associate_blkcg(bio, NULL);
- blkcg = bio_blkcg(bio);
-
- blkg = blkg_lookup(blkcg, q);
- if (unlikely(!blkg)) {
- spin_lock_irq(q->queue_lock);
- blkg = __blkg_lookup_create(blkcg, q);
- if (IS_ERR(blkg))
- blkg = NULL;
- spin_unlock_irq(q->queue_lock);
- }
+ bio_associate_create_blkg(q, bio);
+ blkg = bio->bi_blkg;

throtl = blk_throtl_bio(q, blkg, bio);

if (!throtl) {
- blkg = blkg ?: q->root_blkg;
/*
* If the bio is flagged with BIO_QUEUE_ENTERED it means this
* is a split bio and we would have already accounted for the
--
2.17.1


2018-09-11 18:43:17

by Dennis Zhou

Subject: [PATCH 08/12] blkcg: remove bio->bi_css and instead use bio->bi_blkg

From: "Dennis Zhou (Facebook)" <[email protected]>

Prior patches ensured that all bios are now associated with some blkg.
This now makes bio->bi_css unnecessary as blkg maintains a reference to
the blkcg already.

This patch removes the bi_css field and converts the corresponding
uses to go through bi_blkg.
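
The crux is the accessor: with bi_css gone, the blkcg is reached
through the blkg (from the blk-cgroup.h hunk below):

	static inline struct blkcg *bio_blkcg(struct bio *bio)
	{
		if (bio && bio->bi_blkg)
			return bio->bi_blkg->blkcg;
		return NULL;
	}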

Signed-off-by: Dennis Zhou <[email protected]>
Reviewed-by: Josef Bacik <[email protected]>
Acked-by: Tejun Heo <[email protected]>
---
block/bio.c | 56 ++++++++------------------------------
block/bounce.c | 2 +-
drivers/block/loop.c | 5 ++--
drivers/md/raid0.c | 2 +-
include/linux/bio.h | 9 ++----
include/linux/blk-cgroup.h | 8 +++---
include/linux/blk_types.h | 1 -
kernel/trace/blktrace.c | 4 +--
8 files changed, 25 insertions(+), 62 deletions(-)

diff --git a/block/bio.c b/block/bio.c
index 897c8ed9e572..f32b4757a232 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -609,7 +609,7 @@ void __bio_clone_fast(struct bio *bio, struct bio *bio_src)
bio->bi_iter = bio_src->bi_iter;
bio->bi_io_vec = bio_src->bi_io_vec;

- bio_clone_blkcg_association(bio, bio_src);
+ bio_clone_blkg_association(bio, bio_src);

blkcg_bio_issue_init(bio);
}
@@ -1957,34 +1957,6 @@ EXPORT_SYMBOL(bioset_init_from_src);

#ifdef CONFIG_BLK_CGROUP

-/**
- * bio_associate_blkcg - associate a bio with the specified blkcg
- * @bio: target bio
- * @blkcg_css: css of the blkcg to associate
- *
- * Associate @bio with the blkcg specified by @blkcg_css. Block layer will
- * treat @bio as if it were issued by a task which belongs to the blkcg.
- *
- * This function takes an extra reference of @blkcg_css which will be put
- * when @bio is released. The caller must own @bio and is responsible for
- * synchronizing calls to this function. If @blkcg_css is NULL, a call to
- * blkcg_get_css finds the current css from the kthread or task.
- */
-int bio_associate_blkcg(struct bio *bio, struct cgroup_subsys_state *blkcg_css)
-{
- if (unlikely(bio->bi_css))
- return -EBUSY;
-
- if (blkcg_css)
- css_get(blkcg_css);
- else
- blkcg_css = blkcg_get_css();
-
- bio->bi_css = blkcg_css;
- return 0;
-}
-EXPORT_SYMBOL_GPL(bio_associate_blkcg);
-
/**
* bio_associate_blkg - associate a bio with the a blkg
* @bio: target bio
@@ -2034,7 +2006,6 @@ int bio_associate_blkg_from_css(struct bio *bio,
struct cgroup_subsys_state *css)
{
css_get(css);
- bio->bi_css = css;
return __bio_associate_blkg_from_css(bio, css);
}
EXPORT_SYMBOL_GPL(bio_associate_blkg_from_css);
@@ -2055,12 +2026,11 @@ int bio_associate_blkg_from_page(struct bio *bio, struct page *page)
{
struct cgroup_subsys_state *css;

- if (unlikely(bio->bi_css))
+ if (unlikely(bio->bi_blkg))
return -EBUSY;
if (!page->mem_cgroup)
return 0;
css = cgroup_get_e_css(page->mem_cgroup->css.cgroup, &io_cgrp_subsys);
- bio->bi_css = css;

return __bio_associate_blkg_from_css(bio, css);
}
@@ -2086,8 +2056,7 @@ int bio_associate_create_blkg(struct request_queue *q, struct bio *bio)

rcu_read_lock();

- bio_associate_blkcg(bio, NULL);
- blkcg = bio_blkcg(bio);
+ blkcg = css_to_blkcg(blkcg_get_css());

if (!blkcg->css.parent) {
ret = bio_associate_blkg(bio, q->root_blkg);
@@ -2111,30 +2080,27 @@ void bio_disassociate_task(struct bio *bio)
put_io_context(bio->bi_ioc);
bio->bi_ioc = NULL;
}
- if (bio->bi_css) {
- css_put(bio->bi_css);
- bio->bi_css = NULL;
- }
if (bio->bi_blkg) {
+ /* a ref is always taken on css */
+ css_put(&bio_blkcg(bio)->css);
blkg_put(bio->bi_blkg);
bio->bi_blkg = NULL;
}
}

/**
- * bio_clone_blkcg_association - clone blkcg association from src to dst bio
+ * bio_clone_blkg_association - clone blkg association from src to dst bio
* @dst: destination bio
* @src: source bio
*/
-void bio_clone_blkcg_association(struct bio *dst, struct bio *src)
+void bio_clone_blkg_association(struct bio *dst, struct bio *src)
{
- if (src->bi_css)
- WARN_ON(bio_associate_blkcg(dst, src->bi_css));
-
- if (src->bi_blkg)
+ if (src->bi_blkg) {
+ css_get(&bio_blkcg(src)->css);
bio_associate_blkg(dst, src->bi_blkg);
+ }
}
-EXPORT_SYMBOL_GPL(bio_clone_blkcg_association);
+EXPORT_SYMBOL_GPL(bio_clone_blkg_association);
#endif /* CONFIG_BLK_CGROUP */

static void __init biovec_init_slabs(void)
diff --git a/block/bounce.c b/block/bounce.c
index 7a08703b1204..b30071ac4ec6 100644
--- a/block/bounce.c
+++ b/block/bounce.c
@@ -257,7 +257,7 @@ static struct bio *bounce_clone_bio(struct bio *bio_src, gfp_t gfp_mask,
}
}

- bio_clone_blkcg_association(bio, bio_src);
+ bio_clone_blkg_association(bio, bio_src);

blkcg_bio_issue_init(bio);

diff --git a/drivers/block/loop.c b/drivers/block/loop.c
index ea9debf59b22..abad6d15f956 100644
--- a/drivers/block/loop.c
+++ b/drivers/block/loop.c
@@ -77,6 +77,7 @@
#include <linux/falloc.h>
#include <linux/uio.h>
#include <linux/ioprio.h>
+#include <linux/blk-cgroup.h>

#include "loop.h"

@@ -1760,8 +1761,8 @@ static blk_status_t loop_queue_rq(struct blk_mq_hw_ctx *hctx,

/* always use the first bio's css */
#ifdef CONFIG_BLK_CGROUP
- if (cmd->use_aio && rq->bio && rq->bio->bi_css) {
- cmd->css = rq->bio->bi_css;
+ if (cmd->use_aio && rq->bio && rq->bio->bi_blkg) {
+ cmd->css = &bio_blkcg(rq->bio)->css;
css_get(cmd->css);
} else
#endif
diff --git a/drivers/md/raid0.c b/drivers/md/raid0.c
index ac1cffd2a09b..f3fb5bb8c82a 100644
--- a/drivers/md/raid0.c
+++ b/drivers/md/raid0.c
@@ -542,7 +542,7 @@ static void raid0_handle_discard(struct mddev *mddev, struct bio *bio)
!discard_bio)
continue;
bio_chain(discard_bio, bio);
- bio_clone_blkcg_association(discard_bio, bio);
+ bio_clone_blkg_association(discard_bio, bio);
if (mddev->gendisk)
trace_block_bio_remap(bdev_get_queue(rdev->bdev),
discard_bio, disk_devt(mddev->gendisk),
diff --git a/include/linux/bio.h b/include/linux/bio.h
index c73a870ebc0e..e973876625a8 100644
--- a/include/linux/bio.h
+++ b/include/linux/bio.h
@@ -540,24 +540,21 @@ static inline int bio_associate_blkg_from_page(struct bio *bio,
#endif

#ifdef CONFIG_BLK_CGROUP
-int bio_associate_blkcg(struct bio *bio, struct cgroup_subsys_state *blkcg_css);
int bio_associate_blkg(struct bio *bio, struct blkcg_gq *blkg);
int bio_associate_blkg_from_css(struct bio *bio,
struct cgroup_subsys_state *css);
int bio_associate_create_blkg(struct request_queue *q, struct bio *bio);
void bio_disassociate_task(struct bio *bio);
-void bio_clone_blkcg_association(struct bio *dst, struct bio *src);
+void bio_clone_blkg_association(struct bio *dst, struct bio *src);
#else /* CONFIG_BLK_CGROUP */
-static inline int bio_associate_blkcg(struct bio *bio,
- struct cgroup_subsys_state *blkcg_css) { return 0; }
static inline int bio_associate_blkg_from_css(struct bio *bio,
struct cgroup_subsys_state *css)
{ return 0; }
static inline int bio_associate_create_blkg(struct request_queue *q,
struct bio *bio) { return 0; }
static inline void bio_disassociate_task(struct bio *bio) { }
-static inline void bio_clone_blkcg_association(struct bio *dst,
- struct bio *src) { }
+static inline void bio_clone_blkg_association(struct bio *dst,
+ struct bio *src) { }
#endif /* CONFIG_BLK_CGROUP */

#ifdef CONFIG_HIGHMEM
diff --git a/include/linux/blk-cgroup.h b/include/linux/blk-cgroup.h
index a6b6e741a75e..c41cfcc2b4d8 100644
--- a/include/linux/blk-cgroup.h
+++ b/include/linux/blk-cgroup.h
@@ -308,8 +308,8 @@ static inline struct blkcg *css_to_blkcg(struct cgroup_subsys_state *css)
*/
static inline struct blkcg *__bio_blkcg(struct bio *bio)
{
- if (bio && bio->bi_css)
- return css_to_blkcg(bio->bi_css);
+ if (bio && bio->bi_blkg)
+ return bio->bi_blkg->blkcg;
return css_to_blkcg(blkcg_css());
}

@@ -323,8 +323,8 @@ static inline struct blkcg *__bio_blkcg(struct bio *bio)
*/
static inline struct blkcg *bio_blkcg(struct bio *bio)
{
- if (bio && bio->bi_css)
- return css_to_blkcg(bio->bi_css);
+ if (bio && bio->bi_blkg)
+ return bio->bi_blkg->blkcg;
return NULL;
}

diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
index f6dfb30737d8..9578c7ab1eb6 100644
--- a/include/linux/blk_types.h
+++ b/include/linux/blk_types.h
@@ -178,7 +178,6 @@ struct bio {
* release. Read comment on top of bio_associate_current().
*/
struct io_context *bi_ioc;
- struct cgroup_subsys_state *bi_css;
struct blkcg_gq *bi_blkg;
struct bio_issue bi_issue;
#endif
diff --git a/kernel/trace/blktrace.c b/kernel/trace/blktrace.c
index 2868d85f1fb1..fac0ddf8a8e2 100644
--- a/kernel/trace/blktrace.c
+++ b/kernel/trace/blktrace.c
@@ -764,9 +764,9 @@ blk_trace_bio_get_cgid(struct request_queue *q, struct bio *bio)
if (!bt || !(blk_tracer_flags.val & TRACE_BLK_OPT_CGROUP))
return NULL;

- if (!bio->bi_css)
+ if (!bio->bi_blkg)
return NULL;
- return cgroup_get_kernfs_id(bio->bi_css->cgroup);
+ return cgroup_get_kernfs_id(bio_blkcg(bio)->css.cgroup);
}
#else
static union kernfs_node_id *
--
2.17.1


2018-09-11 18:43:43

by Dennis Zhou

Subject: [PATCH 03/12] blkcg: convert blkg_lookup_create to find closest blkg

From: "Dennis Zhou (Facebook)" <[email protected]>

There are several scenarios where blkg_lookup_create can fail: the
blkcg dying, the request_queue dying, or simply being OOM. At the end
of the day, most callers handle this by simply falling back to
q->root_blkg and calling it a day.

This patch implements the notion of the closest blkg. If
blkg_lookup_create fails to create a blkg, it returns the closest blkg
found, falling back to q->root_blkg. blkg_try_get_closest is introduced
and used during association so a bio is always attached to a blkg.
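
From the caller's side, association can no longer fail outright; a
sketch of the new contract:

	rcu_read_lock();
	/* returns the closest blkg, never an ERR_PTR */
	blkg = blkg_lookup_create(blkcg, q);
	/* takes a ref via blkg_try_get_closest(); only fails with
	 * -EBUSY if @bio is already associated
	 */
	ret = bio_associate_blkg(bio, blkg);
	rcu_read_unlock();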

Signed-off-by: Dennis Zhou <[email protected]>
---
block/bio.c | 17 ++++++++++-------
block/blk-cgroup.c | 25 +++++++++++++++++--------
include/linux/blk-cgroup.h | 14 ++++++++++++++
3 files changed, 41 insertions(+), 15 deletions(-)

diff --git a/block/bio.c b/block/bio.c
index 6ca4dda481ca..49a28abd9772 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -2008,21 +2008,24 @@ int bio_associate_blkcg(struct bio *bio, struct cgroup_subsys_state *blkcg_css)
EXPORT_SYMBOL_GPL(bio_associate_blkcg);

/**
- * bio_associate_blkg - associate a bio with the specified blkg
+ * bio_associate_blkg - associate a bio with the a blkg
* @bio: target bio
* @blkg: the blkg to associate
*
- * Associate @bio with the blkg specified by @blkg. This is the queue specific
- * blkcg information associated with the @bio, a reference will be taken on the
- * @blkg and will be freed when the bio is freed.
+ * This tries to associate @bio with the specified blkg. Association failure
+ * is handled by walking up the blkg tree. Therefore, the blkg associated can
+ * be anything between @blkg and the root_blkg. This situation only happens
+ * when a cgroup is dying and then the remaining bios will spill to the closest
+ * alive blkg.
+ *
+ * A reference will be taken on the @blkg and will be released when @bio is
+ * freed.
*/
int bio_associate_blkg(struct bio *bio, struct blkcg_gq *blkg)
{
if (unlikely(bio->bi_blkg))
return -EBUSY;
- if (!blkg_try_get(blkg))
- return -ENODEV;
- bio->bi_blkg = blkg;
+ bio->bi_blkg = blkg_try_get_closest(blkg);
return 0;
}

diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
index cd0d97bed83d..e9e3a955f61a 100644
--- a/block/blk-cgroup.c
+++ b/block/blk-cgroup.c
@@ -268,9 +268,8 @@ static struct blkcg_gq *blkg_create(struct blkcg *blkcg,
* that all non-root blkg's have access to the parent blkg. This function
* should be called under RCU read lock and @q->queue_lock.
*
- * Returns pointer to the looked up or created blkg on success, ERR_PTR()
- * value on error. If @q is dead, returns ERR_PTR(-EINVAL). If @q is not
- * dead and bypassing, returns ERR_PTR(-EBUSY).
+ * Returns the blkg or the closest blkg if blkg_create fails as it walks
+ * down from root.
*/
struct blkcg_gq *__blkg_lookup_create(struct blkcg *blkcg,
struct request_queue *q)
@@ -285,7 +284,7 @@ struct blkcg_gq *__blkg_lookup_create(struct blkcg *blkcg,
* we shouldn't allow anything to go through for a bypassing queue.
*/
if (unlikely(blk_queue_bypass(q)))
- return ERR_PTR(blk_queue_dying(q) ? -ENODEV : -EBUSY);
+ return q->root_blkg;

blkg = __blkg_lookup(blkcg, q, true);
if (blkg)
@@ -293,19 +292,29 @@ struct blkcg_gq *__blkg_lookup_create(struct blkcg *blkcg,

/*
* Create blkgs walking down from blkcg_root to @blkcg, so that all
- * non-root blkgs have access to their parents.
+ * non-root blkgs have access to their parents. Returns the closest
+ * blkg to the intended blkg should blkg_create() fail.
*/
while (true) {
struct blkcg *pos = blkcg;
struct blkcg *parent = blkcg_parent(blkcg);
-
- while (parent && !__blkg_lookup(parent, q, false)) {
+ struct blkcg_gq *ret_blkg = q->root_blkg;
+
+ while (parent) {
+ blkg = __blkg_lookup(parent, q, false);
+ if (blkg) {
+ /* remember closest blkg */
+ ret_blkg = blkg;
+ break;
+ }
pos = parent;
parent = blkcg_parent(parent);
}

blkg = blkg_create(pos, q, NULL);
- if (pos == blkcg || IS_ERR(blkg))
+ if (IS_ERR(blkg))
+ return ret_blkg;
+ if (pos == blkcg)
return blkg;
}
}
diff --git a/include/linux/blk-cgroup.h b/include/linux/blk-cgroup.h
index cc0f238530f6..1fbff1bbb651 100644
--- a/include/linux/blk-cgroup.h
+++ b/include/linux/blk-cgroup.h
@@ -549,6 +549,20 @@ static inline struct blkcg_gq *blkg_try_get(struct blkcg_gq *blkg)
return NULL;
}

+/**
+ * blkg_try_get_closest - try and get a blkg ref on the closest blkg
+ * @blkg: blkg to get
+ *
+ * This walks up the blkg tree to find the closest non-dying blkg and returns
+ * the blkg it associated with, which may not be the blkg passed in.
+ */
+static inline struct blkcg_gq *blkg_try_get_closest(struct blkcg_gq *blkg)
+{
+ while (!atomic_inc_not_zero(&blkg->refcnt))
+ blkg = blkg->parent;
+
+ return blkg;
+}

void __blkg_release_rcu(struct rcu_head *rcu);

--
2.17.1


2018-09-11 18:44:25

by Dennis Zhou

Subject: [PATCH 07/12] blkcg: associate writeback bios with a blkg

From: "Dennis Zhou (Facebook)" <[email protected]>

One of the goals of this series is to remove the bio's separate
reference to the css; the css can and should be accessed via bio_blkcg.
In this patch, the wbc_init_bio call is changed such that it must be
called after a queue has been associated with the bio.
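
The required ordering, as it ends up in submit_bh_wbc() in the
fs/buffer.c hunk below:

	bio = bio_alloc(GFP_NOIO, 1);
	bio_set_dev(bio, bh->b_bdev);	/* associate the device first... */

	if (wbc) {
		wbc_init_bio(wbc, bio);	/* ...then the writeback blkg */
		wbc_account_io(wbc, bh->b_page, bh->b_size);
	}
	submit_bio(bio);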

Signed-off-by: Dennis Zhou <[email protected]>
Reviewed-by: Josef Bacik <[email protected]>
Acked-by: Tejun Heo <[email protected]>
---
Documentation/admin-guide/cgroup-v2.rst | 8 +++++---
fs/buffer.c | 10 +++++-----
fs/ext4/page-io.c | 2 +-
include/linux/writeback.h | 5 +++--
4 files changed, 14 insertions(+), 11 deletions(-)

diff --git a/Documentation/admin-guide/cgroup-v2.rst b/Documentation/admin-guide/cgroup-v2.rst
index 184193bcb262..caf36105a1c7 100644
--- a/Documentation/admin-guide/cgroup-v2.rst
+++ b/Documentation/admin-guide/cgroup-v2.rst
@@ -1857,8 +1857,10 @@ following two functions.

wbc_init_bio(@wbc, @bio)
Should be called for each bio carrying writeback data and
- associates the bio with the inode's owner cgroup. Can be
- called anytime between bio allocation and submission.
+ associates the bio with the inode's owner cgroup and the
+ corresponding request queue. This must be called after
+ a queue (device) has been associated with the bio and
+ before submission.

wbc_account_io(@wbc, @page, @bytes)
Should be called for each data segment being written out.
@@ -1877,7 +1879,7 @@ the configuration, the bio may be executed at a lower priority and if
the writeback session is holding shared resources, e.g. a journal
entry, may lead to priority inversion. There is no one easy solution
for the problem. Filesystems can try to work around specific problem
-cases by skipping wbc_init_bio() or using bio_associate_blkcg()
+cases by skipping wbc_init_bio() or using bio_associate_create_blkg()
directly.


diff --git a/fs/buffer.c b/fs/buffer.c
index 6f1ae3ac9789..109f55196866 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -3060,11 +3060,6 @@ static int submit_bh_wbc(int op, int op_flags, struct buffer_head *bh,
*/
bio = bio_alloc(GFP_NOIO, 1);

- if (wbc) {
- wbc_init_bio(wbc, bio);
- wbc_account_io(wbc, bh->b_page, bh->b_size);
- }
-
bio->bi_iter.bi_sector = bh->b_blocknr * (bh->b_size >> 9);
bio_set_dev(bio, bh->b_bdev);
bio->bi_write_hint = write_hint;
@@ -3084,6 +3079,11 @@ static int submit_bh_wbc(int op, int op_flags, struct buffer_head *bh,
op_flags |= REQ_PRIO;
bio_set_op_attrs(bio, op, op_flags);

+ if (wbc) {
+ wbc_init_bio(wbc, bio);
+ wbc_account_io(wbc, bh->b_page, bh->b_size);
+ }
+
submit_bio(bio);
return 0;
}
diff --git a/fs/ext4/page-io.c b/fs/ext4/page-io.c
index db7590178dfc..2aa62d58d8dd 100644
--- a/fs/ext4/page-io.c
+++ b/fs/ext4/page-io.c
@@ -374,13 +374,13 @@ static int io_submit_init_bio(struct ext4_io_submit *io,
bio = bio_alloc(GFP_NOIO, BIO_MAX_PAGES);
if (!bio)
return -ENOMEM;
- wbc_init_bio(io->io_wbc, bio);
bio->bi_iter.bi_sector = bh->b_blocknr * (bh->b_size >> 9);
bio_set_dev(bio, bh->b_bdev);
bio->bi_end_io = ext4_end_bio;
bio->bi_private = ext4_get_io_end(io->io_end);
io->io_bio = bio;
io->io_next_block = bh->b_blocknr;
+ wbc_init_bio(io->io_wbc, bio);
return 0;
}

diff --git a/include/linux/writeback.h b/include/linux/writeback.h
index fdfd04e348f6..738a0c24874f 100644
--- a/include/linux/writeback.h
+++ b/include/linux/writeback.h
@@ -246,7 +246,8 @@ static inline void wbc_attach_fdatawrite_inode(struct writeback_control *wbc,
*
* @bio is a part of the writeback in progress controlled by @wbc. Perform
* writeback specific initialization. This is used to apply the cgroup
- * writeback context.
+ * writeback context. Must be called after the bio has been associated with
+ * a device.
*/
static inline void wbc_init_bio(struct writeback_control *wbc, struct bio *bio)
{
@@ -257,7 +258,7 @@ static inline void wbc_init_bio(struct writeback_control *wbc, struct bio *bio)
* regular writeback instead of writing things out itself.
*/
if (wbc->wb)
- bio_associate_blkcg(bio, wbc->wb->blkcg_css);
+ bio_associate_blkg_from_css(bio, wbc->wb->blkcg_css);
}

#else /* CONFIG_CGROUP_WRITEBACK */
--
2.17.1


2018-09-11 18:44:42

by Dennis Zhou

Subject: [PATCH 05/12] blkcg: consolidate bio_issue_init to be a part of core

From: "Dennis Zhou (Facebook)" <[email protected]>

bio_issue_init, among other things, initializes the timestamp for an
IO. Rather than have this logic handled by individual policies, this
consolidates it onto the bio init paths (normal, clone, bounce clone).
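
After this patch the initialization lives in one helper, invoked on
each of those paths (condensed from the hunks below):

	static inline void blkcg_bio_issue_init(struct bio *bio)
	{
		bio_issue_init(&bio->bi_issue, bio_sectors(bio));
	}

	/* call sites: __bio_clone_fast(), bounce_clone_bio(),
	 * blkcg_bio_issue_check()
	 */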

Signed-off-by: Dennis Zhou <[email protected]>
Acked-by: Tejun Heo <[email protected]>
Reviewed-by: Liu Bo <[email protected]>
---
block/bio.c | 2 ++
block/blk-iolatency.c | 2 --
block/blk-throttle.c | 8 --------
block/bounce.c | 2 ++
include/linux/blk-cgroup.h | 9 +++++++++
5 files changed, 13 insertions(+), 10 deletions(-)

diff --git a/block/bio.c b/block/bio.c
index 97c1c4bf8df6..1d28d7ad2342 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -610,6 +610,8 @@ void __bio_clone_fast(struct bio *bio, struct bio *bio_src)
bio->bi_io_vec = bio_src->bi_io_vec;

bio_clone_blkcg_association(bio, bio_src);
+
+ blkcg_bio_issue_init(bio);
}
EXPORT_SYMBOL(__bio_clone_fast);

diff --git a/block/blk-iolatency.c b/block/blk-iolatency.c
index 79a7549e2062..9d7052bad6f7 100644
--- a/block/blk-iolatency.c
+++ b/block/blk-iolatency.c
@@ -401,8 +401,6 @@ static void blkcg_iolatency_throttle(struct rq_qos *rqos, struct bio *bio,
if (!blk_iolatency_enabled(blkiolat))
return;

- bio_issue_init(&bio->bi_issue, bio_sectors(bio));
-
while (blkg && blkg->parent) {
struct iolatency_grp *iolat = blkg_to_lat(blkg);
if (!iolat) {
diff --git a/block/blk-throttle.c b/block/blk-throttle.c
index b7b5cc4defc2..f2b355338894 100644
--- a/block/blk-throttle.c
+++ b/block/blk-throttle.c
@@ -2126,13 +2126,6 @@ static inline void throtl_update_latency_buckets(struct throtl_data *td)
}
#endif

-static void blk_throtl_assoc_bio(struct throtl_grp *tg, struct bio *bio)
-{
-#ifdef CONFIG_BLK_DEV_THROTTLING_LOW
- bio_issue_init(&bio->bi_issue, bio_sectors(bio));
-#endif
-}
-
bool blk_throtl_bio(struct request_queue *q, struct blkcg_gq *blkg,
struct bio *bio)
{
@@ -2156,7 +2149,6 @@ bool blk_throtl_bio(struct request_queue *q, struct blkcg_gq *blkg,
if (unlikely(blk_queue_bypass(q)))
goto out_unlock;

- blk_throtl_assoc_bio(tg, bio);
blk_throtl_update_idletime(tg);

sq = &tg->service_queue;
diff --git a/block/bounce.c b/block/bounce.c
index bc63b3a2d18c..7a08703b1204 100644
--- a/block/bounce.c
+++ b/block/bounce.c
@@ -259,6 +259,8 @@ static struct bio *bounce_clone_bio(struct bio *bio_src, gfp_t gfp_mask,

bio_clone_blkcg_association(bio, bio_src);

+ blkcg_bio_issue_init(bio);
+
return bio;
}

diff --git a/include/linux/blk-cgroup.h b/include/linux/blk-cgroup.h
index 6e33ad1d92b4..a6b6e741a75e 100644
--- a/include/linux/blk-cgroup.h
+++ b/include/linux/blk-cgroup.h
@@ -897,6 +897,12 @@ static inline bool blk_throtl_bio(struct request_queue *q, struct blkcg_gq *blkg
struct bio *bio) { return false; }
#endif

+
+static inline void blkcg_bio_issue_init(struct bio *bio)
+{
+ bio_issue_init(&bio->bi_issue, bio_sectors(bio));
+}
+
static inline bool blkcg_bio_issue_check(struct request_queue *q,
struct bio *bio)
{
@@ -922,6 +928,8 @@ static inline bool blkcg_bio_issue_check(struct request_queue *q,
blkg_rwstat_add(&blkg->stat_ios, bio->bi_opf, 1);
}

+ blkcg_bio_issue_init(bio);
+
rcu_read_unlock();
return !throtl;
}
@@ -1034,6 +1042,7 @@ static inline void blk_put_rl(struct request_list *rl) { }
static inline void blk_rq_set_rl(struct request *rq, struct request_list *rl) { }
static inline struct request_list *blk_rq_rl(struct request *rq) { return &rq->q->root_rl; }

+static inline void blkcg_bio_issue_init(struct bio *bio) { }
static inline bool blkcg_bio_issue_check(struct request_queue *q,
struct bio *bio) { return true; }

--
2.17.1


2018-09-11 18:44:46

by Dennis Zhou

Subject: [PATCH 06/12] blkcg: associate a blkg for pages being evicted by swap

From: "Dennis Zhou (Facebook)" <[email protected]>

A prior patch in this series added blkg association to bios issued by
cgroups. There are two other paths whose work we want to attribute back
to the appropriate cgroup: swap and writeback. Here we modify the way
swap tags bios to include the blkg. Writeback will be tackled in the
next patch.
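
The swap write path then tags the bio right after the flags are set;
from the mm/page_io.c hunk below (the device was already set when the
bio was allocated, in get_swap_bio(), which is not shown in this hunk):

	bio->bi_opf = REQ_OP_WRITE | REQ_SWAP | wbc_to_write_flags(wbc);
	bio_associate_blkg_from_page(bio, page);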

Signed-off-by: Dennis Zhou <[email protected]>
Reviewed-by: Josef Bacik <[email protected]>
Acked-by: Tejun Heo <[email protected]>
---
block/bio.c | 83 ++++++++++++++++++++++++++++++++-------------
include/linux/bio.h | 11 ++++--
mm/page_io.c | 2 +-
3 files changed, 68 insertions(+), 28 deletions(-)

diff --git a/block/bio.c b/block/bio.c
index 1d28d7ad2342..897c8ed9e572 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -1957,30 +1957,6 @@ EXPORT_SYMBOL(bioset_init_from_src);

#ifdef CONFIG_BLK_CGROUP

-#ifdef CONFIG_MEMCG
-/**
- * bio_associate_blkcg_from_page - associate a bio with the page's blkcg
- * @bio: target bio
- * @page: the page to lookup the blkcg from
- *
- * Associate @bio with the blkcg from @page's owning memcg. This works like
- * every other associate function wrt references.
- */
-int bio_associate_blkcg_from_page(struct bio *bio, struct page *page)
-{
- struct cgroup_subsys_state *blkcg_css;
-
- if (unlikely(bio->bi_css))
- return -EBUSY;
- if (!page->mem_cgroup)
- return 0;
- blkcg_css = cgroup_get_e_css(page->mem_cgroup->css.cgroup,
- &io_cgrp_subsys);
- bio->bi_css = blkcg_css;
- return 0;
-}
-#endif /* CONFIG_MEMCG */
-
/**
* bio_associate_blkcg - associate a bio with the specified blkcg
* @bio: target bio
@@ -2031,6 +2007,65 @@ int bio_associate_blkg(struct bio *bio, struct blkcg_gq *blkg)
return 0;
}

+static int __bio_associate_blkg_from_css(struct bio *bio,
+ struct cgroup_subsys_state *css)
+{
+ struct blkcg_gq *blkg;
+
+ rcu_read_lock();
+
+ blkg = blkg_lookup_create(css_to_blkcg(css), bio->bi_disk->queue);
+
+ rcu_read_unlock();
+
+ return bio_associate_blkg(bio, blkg);
+}
+
+/**
+ * bio_associate_blkg_from_css - associate a bio with a specified css
+ * @bio: target bio
+ * @css: target css
+ *
+ * Associate @bio with the blkg found by combining the css's blkg and the
+ * request_queue of the @bio. This takes a reference on the css that will
+ * be put upon freeing of @bio.
+ */
+int bio_associate_blkg_from_css(struct bio *bio,
+ struct cgroup_subsys_state *css)
+{
+ css_get(css);
+ bio->bi_css = css;
+ return __bio_associate_blkg_from_css(bio, css);
+}
+EXPORT_SYMBOL_GPL(bio_associate_blkg_from_css);
+
+#ifdef CONFIG_MEMCG
+/**
+ * bio_associate_blkg_from_page - associate a bio with the page's blkg
+ * @bio: target bio
+ * @page: the page to lookup the blkcg from
+ *
+ * Associate @bio with the blkg from @page's owning memcg and the respective
+ * request_queue. This works like every other associate function wrt
+ * references.
+ *
+ * Note: this must be called after bio has an associated device.
+ */
+int bio_associate_blkg_from_page(struct bio *bio, struct page *page)
+{
+ struct cgroup_subsys_state *css;
+
+ if (unlikely(bio->bi_css))
+ return -EBUSY;
+ if (!page->mem_cgroup)
+ return 0;
+ css = cgroup_get_e_css(page->mem_cgroup->css.cgroup, &io_cgrp_subsys);
+ bio->bi_css = css;
+
+ return __bio_associate_blkg_from_css(bio, css);
+}
+#endif /* CONFIG_MEMCG */
+
/**
* bio_associate_create_blkg - associate a bio with a blkg from q
* @q: request_queue where bio is going
diff --git a/include/linux/bio.h b/include/linux/bio.h
index 829cd0bb407d..c73a870ebc0e 100644
--- a/include/linux/bio.h
+++ b/include/linux/bio.h
@@ -533,21 +533,26 @@ do { \
disk_devt((bio)->bi_disk)

#if defined(CONFIG_MEMCG) && defined(CONFIG_BLK_CGROUP)
-int bio_associate_blkcg_from_page(struct bio *bio, struct page *page);
+int bio_associate_blkg_from_page(struct bio *bio, struct page *page);
#else
-static inline int bio_associate_blkcg_from_page(struct bio *bio,
- struct page *page) { return 0; }
+static inline int bio_associate_blkg_from_page(struct bio *bio,
+ struct page *page) { return 0; }
#endif

#ifdef CONFIG_BLK_CGROUP
int bio_associate_blkcg(struct bio *bio, struct cgroup_subsys_state *blkcg_css);
int bio_associate_blkg(struct bio *bio, struct blkcg_gq *blkg);
+int bio_associate_blkg_from_css(struct bio *bio,
+ struct cgroup_subsys_state *css);
int bio_associate_create_blkg(struct request_queue *q, struct bio *bio);
void bio_disassociate_task(struct bio *bio);
void bio_clone_blkcg_association(struct bio *dst, struct bio *src);
#else /* CONFIG_BLK_CGROUP */
static inline int bio_associate_blkcg(struct bio *bio,
struct cgroup_subsys_state *blkcg_css) { return 0; }
+static inline int bio_associate_blkg_from_css(struct bio *bio,
+ struct cgroup_subsys_state *css)
+{ return 0; }
static inline int bio_associate_create_blkg(struct request_queue *q,
struct bio *bio) { return 0; }
static inline void bio_disassociate_task(struct bio *bio) { }
diff --git a/mm/page_io.c b/mm/page_io.c
index aafd19ec1db4..573d3663d846 100644
--- a/mm/page_io.c
+++ b/mm/page_io.c
@@ -339,7 +339,7 @@ int __swap_writepage(struct page *page, struct writeback_control *wbc,
goto out;
}
bio->bi_opf = REQ_OP_WRITE | REQ_SWAP | wbc_to_write_flags(wbc);
- bio_associate_blkcg_from_page(bio, page);
+ bio_associate_blkg_from_page(bio, page);
count_swpout_vm_event(page);
set_page_writeback(page);
unlock_page(page);
--
2.17.1


2018-09-11 18:44:58

by Dennis Zhou

[permalink] [raw]
Subject: [PATCH 02/12] blkcg: update blkg_lookup_create to do locking

From: "Dennis Zhou (Facebook)" <[email protected]>

To know when to create a blkg, the general pattern is to do a
blkg_lookup; if that fails, take the queue lock and do the lookup again,
and if that fails too, finally create. It doesn't make much sense for
everyone who wants to do creation to open-code this pattern (sketched
below).

This changes blkg_lookup_create to do the locking and implement this
pattern. The old blkg_lookup_create is renamed to __blkg_lookup_create.
If a call site wants to do its own error handling or already owns the
queue lock, it can use __blkg_lookup_create. This will be used in
upcoming patches.
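
The open-coded pattern being consolidated looks roughly like this (as
seen, for example, in blkcg_bio_issue_check; a sketch, not verbatim from
any one call site):

    blkg = blkg_lookup(blkcg, q);
    if (unlikely(!blkg)) {
        spin_lock_irq(q->queue_lock);
        blkg = __blkg_lookup_create(blkcg, q);
        if (IS_ERR(blkg))
            blkg = NULL;
        spin_unlock_irq(q->queue_lock);
    }

The new blkg_lookup_create() centralizes this lookup-lock-lookup-create
dance, using spin_lock_irqsave() so callers need not worry about irq
context.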

Signed-off-by: Dennis Zhou <[email protected]>
Reviewed-by: Josef Bacik <[email protected]>
Acked-by: Tejun Heo <[email protected]>
Reviewed-by: Liu Bo <[email protected]>
---
block/blk-cgroup.c | 31 ++++++++++++++++++++++++++++---
block/blk-iolatency.c | 2 +-
include/linux/blk-cgroup.h | 4 +++-
3 files changed, 32 insertions(+), 5 deletions(-)

diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
index c19f9078da1e..cd0d97bed83d 100644
--- a/block/blk-cgroup.c
+++ b/block/blk-cgroup.c
@@ -259,7 +259,7 @@ static struct blkcg_gq *blkg_create(struct blkcg *blkcg,
}

/**
- * blkg_lookup_create - lookup blkg, try to create one if not there
+ * __blkg_lookup_create - lookup blkg, try to create one if not there
* @blkcg: blkcg of interest
* @q: request_queue of interest
*
@@ -272,8 +272,8 @@ static struct blkcg_gq *blkg_create(struct blkcg *blkcg,
* value on error. If @q is dead, returns ERR_PTR(-EINVAL). If @q is not
* dead and bypassing, returns ERR_PTR(-EBUSY).
*/
-struct blkcg_gq *blkg_lookup_create(struct blkcg *blkcg,
- struct request_queue *q)
+struct blkcg_gq *__blkg_lookup_create(struct blkcg *blkcg,
+ struct request_queue *q)
{
struct blkcg_gq *blkg;

@@ -310,6 +310,31 @@ struct blkcg_gq *blkg_lookup_create(struct blkcg *blkcg,
}
}

+/**
+ * blkg_lookup_create - find or create a blkg
+ * @blkcg: target block cgroup
+ * @q: target request_queue
+ *
+ * This looks up or creates the blkg representing the unique pair
+ * of the blkcg and the request_queue.
+ */
+struct blkcg_gq *blkg_lookup_create(struct blkcg *blkcg,
+ struct request_queue *q)
+{
+ struct blkcg_gq *blkg = blkg_lookup(blkcg, q);
+ unsigned long flags;
+
+ if (unlikely(!blkg)) {
+ spin_lock_irqsave(q->queue_lock, flags);
+
+ blkg = __blkg_lookup_create(blkcg, q);
+
+ spin_unlock_irqrestore(q->queue_lock, flags);
+ }
+
+ return blkg;
+}
+
static void blkg_destroy(struct blkcg_gq *blkg)
{
struct blkcg *blkcg = blkg->blkcg;
diff --git a/block/blk-iolatency.c b/block/blk-iolatency.c
index 62fdd9002c29..22b2ff0440cc 100644
--- a/block/blk-iolatency.c
+++ b/block/blk-iolatency.c
@@ -410,7 +410,7 @@ static void blkcg_iolatency_throttle(struct rq_qos *rqos, struct bio *bio,
if (unlikely(!blkg)) {
if (!lock)
spin_lock_irq(q->queue_lock);
- blkg = blkg_lookup_create(blkcg, q);
+ blkg = __blkg_lookup_create(blkcg, q);
if (IS_ERR(blkg))
blkg = NULL;
if (!lock)
diff --git a/include/linux/blk-cgroup.h b/include/linux/blk-cgroup.h
index 24067a1f8b36..cc0f238530f6 100644
--- a/include/linux/blk-cgroup.h
+++ b/include/linux/blk-cgroup.h
@@ -184,6 +184,8 @@ extern struct cgroup_subsys_state * const blkcg_root_css;

struct blkcg_gq *blkg_lookup_slowpath(struct blkcg *blkcg,
struct request_queue *q, bool update_hint);
+struct blkcg_gq *__blkg_lookup_create(struct blkcg *blkcg,
+ struct request_queue *q);
struct blkcg_gq *blkg_lookup_create(struct blkcg *blkcg,
struct request_queue *q);
int blkcg_init_queue(struct request_queue *q);
@@ -897,7 +899,7 @@ static inline bool blkcg_bio_issue_check(struct request_queue *q,
blkg = blkg_lookup(blkcg, q);
if (unlikely(!blkg)) {
spin_lock_irq(q->queue_lock);
- blkg = blkg_lookup_create(blkcg, q);
+ blkg = __blkg_lookup_create(blkcg, q);
if (IS_ERR(blkg))
blkg = NULL;
spin_unlock_irq(q->queue_lock);
--
2.17.1


2018-09-11 18:45:08

by Dennis Zhou

[permalink] [raw]
Subject: [PATCH 01/12] blkcg: fix ref count issue with bio_blkcg using task_css

From: "Dennis Zhou (Facebook)" <[email protected]>

The accessor function bio_blkcg either returns the blkcg associated with
the bio or finds one in the current context. This can cause an issue
when trying to associate a bio with a blkcg. In particular, it is the
third case that is problematic:

return css_to_blkcg(task_css(current, io_cgrp_id));

As the above may race against task migration and the cgroup exiting, it
is not always safe to take a reference on the blkcg returned from
bio_blkcg.

This patch adds association ahead of calling bio_blkcg rather than
after. This makes association a required and explicit step along the
code paths for calling bio_blkcg. blk_get_rl is modified as well to take
a reference on the blkcg it may use, and blk_put_rl now always puts that
reference back. Association is also moved above the bio_blkcg call in
blk-iolatency to ensure it will not return NULL.

BFQ and CFQ rely on this flawed behavior, but due to the complexity, I
do not want to address it in this series. I've created a private version
of the function, __bio_blkcg, with notes describing the flaw and warning
against new users. Hopefully that code can be cleaned up soon.
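
Roughly, the change in calling convention looks like this (a sketch
mirroring the blk-iolatency hunk below; not verbatim from any one call
site):

    /* before: look up first, then take a css ref -- racy */
    blkcg = bio_blkcg(bio);                  /* may resolve via task_css() */
    bio_associate_blkcg(bio, &blkcg->css);   /* css_get() on a maybe-dying css */

    /* after: associate first; a NULL css makes blkcg_get_css() find the
     * current context using css_tryget() internally */
    bio_associate_blkcg(bio, NULL);
    blkcg = bio_blkcg(bio);                  /* returns the pinned blkcg */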

Signed-off-by: Dennis Zhou <[email protected]>
Acked-by: Tejun Heo <[email protected]>
---
block/bfq-cgroup.c | 4 +-
block/bfq-iosched.c | 2 +-
block/bio.c | 10 +++-
block/blk-iolatency.c | 2 +-
block/cfq-iosched.c | 4 +-
include/linux/blk-cgroup.h | 101 ++++++++++++++++++++++++++++++++++---
6 files changed, 107 insertions(+), 16 deletions(-)

diff --git a/block/bfq-cgroup.c b/block/bfq-cgroup.c
index 9fe5952d117d..d9a7916ff0ab 100644
--- a/block/bfq-cgroup.c
+++ b/block/bfq-cgroup.c
@@ -642,7 +642,7 @@ void bfq_bic_update_cgroup(struct bfq_io_cq *bic, struct bio *bio)
uint64_t serial_nr;

rcu_read_lock();
- serial_nr = bio_blkcg(bio)->css.serial_nr;
+ serial_nr = __bio_blkcg(bio)->css.serial_nr;

/*
* Check whether blkcg has changed. The condition may trigger
@@ -651,7 +651,7 @@ void bfq_bic_update_cgroup(struct bfq_io_cq *bic, struct bio *bio)
if (unlikely(!bfqd) || likely(bic->blkcg_serial_nr == serial_nr))
goto out;

- bfqg = __bfq_bic_change_cgroup(bfqd, bic, bio_blkcg(bio));
+ bfqg = __bfq_bic_change_cgroup(bfqd, bic, __bio_blkcg(bio));
/*
* Update blkg_path for bfq_log_* functions. We cache this
* path, and update it here, for the following
diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index 653100fb719e..6647355c901c 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -4297,7 +4297,7 @@ static struct bfq_queue *bfq_get_queue(struct bfq_data *bfqd,

rcu_read_lock();

- bfqg = bfq_find_set_group(bfqd, bio_blkcg(bio));
+ bfqg = bfq_find_set_group(bfqd, __bio_blkcg(bio));
if (!bfqg) {
bfqq = &bfqd->oom_bfqq;
goto out;
diff --git a/block/bio.c b/block/bio.c
index f685e762809d..6ca4dda481ca 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -1989,13 +1989,19 @@ int bio_associate_blkcg_from_page(struct bio *bio, struct page *page)
*
* This function takes an extra reference of @blkcg_css which will be put
* when @bio is released. The caller must own @bio and is responsible for
- * synchronizing calls to this function.
+ * synchronizing calls to this function. If @blkcg_css is NULL, a call to
+ * blkcg_get_css finds the current css from the kthread or task.
*/
int bio_associate_blkcg(struct bio *bio, struct cgroup_subsys_state *blkcg_css)
{
if (unlikely(bio->bi_css))
return -EBUSY;
- css_get(blkcg_css);
+
+ if (blkcg_css)
+ css_get(blkcg_css);
+ else
+ blkcg_css = blkcg_get_css();
+
bio->bi_css = blkcg_css;
return 0;
}
diff --git a/block/blk-iolatency.c b/block/blk-iolatency.c
index 19923f8a029d..62fdd9002c29 100644
--- a/block/blk-iolatency.c
+++ b/block/blk-iolatency.c
@@ -404,8 +404,8 @@ static void blkcg_iolatency_throttle(struct rq_qos *rqos, struct bio *bio,
return;

rcu_read_lock();
+ bio_associate_blkcg(bio, NULL);
blkcg = bio_blkcg(bio);
- bio_associate_blkcg(bio, &blkcg->css);
blkg = blkg_lookup(blkcg, q);
if (unlikely(!blkg)) {
if (!lock)
diff --git a/block/cfq-iosched.c b/block/cfq-iosched.c
index 2eb87444b157..d219e9a1af65 100644
--- a/block/cfq-iosched.c
+++ b/block/cfq-iosched.c
@@ -3753,7 +3753,7 @@ static void check_blkcg_changed(struct cfq_io_cq *cic, struct bio *bio)
uint64_t serial_nr;

rcu_read_lock();
- serial_nr = bio_blkcg(bio)->css.serial_nr;
+ serial_nr = __bio_blkcg(bio)->css.serial_nr;
rcu_read_unlock();

/*
@@ -3818,7 +3818,7 @@ cfq_get_queue(struct cfq_data *cfqd, bool is_sync, struct cfq_io_cq *cic,
struct cfq_group *cfqg;

rcu_read_lock();
- cfqg = cfq_lookup_cfqg(cfqd, bio_blkcg(bio));
+ cfqg = cfq_lookup_cfqg(cfqd, __bio_blkcg(bio));
if (!cfqg) {
cfqq = &cfqd->oom_cfqq;
goto out;
diff --git a/include/linux/blk-cgroup.h b/include/linux/blk-cgroup.h
index 6d766a19f2bb..24067a1f8b36 100644
--- a/include/linux/blk-cgroup.h
+++ b/include/linux/blk-cgroup.h
@@ -230,22 +230,100 @@ int blkg_conf_prep(struct blkcg *blkcg, const struct blkcg_policy *pol,
char *input, struct blkg_conf_ctx *ctx);
void blkg_conf_finish(struct blkg_conf_ctx *ctx);

+/**
+ * blkcg_css - find the current css
+ *
+ * Find the css associated with either the kthread or the current task.
+ * This may return a dying css, so it is up to the caller to use tryget logic
+ * to confirm it is alive and well.
+ */
+static inline struct cgroup_subsys_state *blkcg_css(void)
+{
+ struct cgroup_subsys_state *css;
+
+ css = kthread_blkcg();
+ if (css)
+ return css;
+ return task_css(current, io_cgrp_id);
+}
+
+/**
+ * blkcg_get_css - find and get a reference to the css
+ *
+ * Find the css associated with either the kthread or the current task.
+ * This takes a reference on the blkcg which will need to be managed by the
+ * caller.
+ */
+static inline struct cgroup_subsys_state *blkcg_get_css(void)
+{
+ struct cgroup_subsys_state *css;
+
+ rcu_read_lock();
+
+ css = kthread_blkcg();
+ if (css) {
+ css_get(css);
+ } else {
+ /*
+ * This is a bit complicated. It is possible task_css is seeing
+ * an old css pointer here. This is caused by the current
+ * thread migrating away from this cgroup and this cgroup dying.
+ * css_tryget() will fail when trying to take a ref on a cgroup
+ * whose ref count has hit 0.
+ *
+ * Therefore, if it does fail, this means current must have
+ * been swapped away already and this is waiting for it to
+ * propagate on the polling cpu. Hence the use of cpu_relax().
+ */
+ while (true) {
+ css = task_css(current, io_cgrp_id);
+ if (likely(css_tryget(css)))
+ break;
+ cpu_relax();
+ }
+ }
+
+ rcu_read_unlock();
+
+ return css;
+}

static inline struct blkcg *css_to_blkcg(struct cgroup_subsys_state *css)
{
return css ? container_of(css, struct blkcg, css) : NULL;
}

-static inline struct blkcg *bio_blkcg(struct bio *bio)
+/**
+ * __bio_blkcg - internal version of bio_blkcg for bfq and cfq
+ *
+ * DO NOT USE.
+ * There is a flaw in using this version of the function. In particular, it
+ * was used in a broken paradigm where association was called on the given
+ * css. It is possible, though, that the css returned from task_css() is in
+ * the process of dying due to migration of the current task, so it is not
+ * safe to assume *_get() will succeed. Both BFQ and CFQ rely on this logic
+ * and will take additional work to handle more gracefully.
+ */
+static inline struct blkcg *__bio_blkcg(struct bio *bio)
{
- struct cgroup_subsys_state *css;
+ if (bio && bio->bi_css)
+ return css_to_blkcg(bio->bi_css);
+ return css_to_blkcg(blkcg_css());
+}

+/**
+ * bio_blkcg - grab the blkcg associated with a bio
+ * @bio: target bio
+ *
+ * This returns the blkcg associated with a bio, NULL if not associated.
+ * Callers are expected to either handle NULL or know association has been
+ * done prior to calling this.
+ */
+static inline struct blkcg *bio_blkcg(struct bio *bio)
+{
if (bio && bio->bi_css)
return css_to_blkcg(bio->bi_css);
- css = kthread_blkcg();
- if (css)
- return css_to_blkcg(css);
- return css_to_blkcg(task_css(current, io_cgrp_id));
+ return NULL;
}

static inline bool blk_cgroup_congested(void)
@@ -534,6 +612,10 @@ static inline struct request_list *blk_get_rl(struct request_queue *q,
rcu_read_lock();

blkcg = bio_blkcg(bio);
+ if (blkcg)
+ css_get(&blkcg->css);
+ else
+ blkcg = css_to_blkcg(blkcg_get_css());

/* bypass blkg lookup and use @q->root_rl directly for root */
if (blkcg == &blkcg_root)
@@ -565,6 +647,8 @@ static inline struct request_list *blk_get_rl(struct request_queue *q,
*/
static inline void blk_put_rl(struct request_list *rl)
{
+ /* an additional ref is always taken for rl */
+ css_put(&rl->blkg->blkcg->css);
if (rl->blkg->blkcg != &blkcg_root)
blkg_put(rl->blkg);
}
@@ -805,10 +889,10 @@ static inline bool blkcg_bio_issue_check(struct request_queue *q,
bool throtl = false;

rcu_read_lock();
- blkcg = bio_blkcg(bio);

/* associate blkcg if bio hasn't attached one */
- bio_associate_blkcg(bio, &blkcg->css);
+ bio_associate_blkcg(bio, NULL);
+ blkcg = bio_blkcg(bio);

blkg = blkg_lookup(blkcg, q);
if (unlikely(!blkg)) {
@@ -930,6 +1014,7 @@ static inline int blkcg_activate_policy(struct request_queue *q,
static inline void blkcg_deactivate_policy(struct request_queue *q,
const struct blkcg_policy *pol) { }

+static inline struct blkcg *__bio_blkcg(struct bio *bio) { return NULL; }
static inline struct blkcg *bio_blkcg(struct bio *bio) { return NULL; }

static inline struct blkg_policy_data *blkg_to_pd(struct blkcg_gq *blkg,
--
2.17.1


2018-09-11 23:52:23

by Tejun Heo

[permalink] [raw]
Subject: Re: [PATCH 03/12] blkcg: convert blkg_lookup_create to find closest blkg

On Tue, Sep 11, 2018 at 02:41:28PM -0400, Dennis Zhou wrote:
> From: "Dennis Zhou (Facebook)" <[email protected]>
>
> There are several scenarios where blkg_lookup_create can fail: the
> blkcg dying, the request_queue dying, or simply being OOM. Most
> callers handle this by simply falling back to the q->root_blkg and
> calling it a day.
>
> This patch implements the notion of closest blkg. During
> blkg_lookup_create, if it fails to create, return the closest blkg
> found or the q->root_blkg. blkg_try_get_closest is introduced and used
> during association so a bio is always attached to a blkg.
>
> Signed-off-by: Dennis Zhou <[email protected]>

Acked-by: Tejun Heo <[email protected]>

Thanks.

--
tejun

2018-09-11 23:52:24

by Tejun Heo

[permalink] [raw]
Subject: Re: [PATCH 09/12] blkcg: remove additional reference to the css

On Tue, Sep 11, 2018 at 02:41:34PM -0400, Dennis Zhou wrote:
> From: "Dennis Zhou (Facebook)" <[email protected]>
>
> The previous patch in this series removed carrying around a pointer to
> the css in blkg. However, the blkg association logic still relied on
> taking a reference on the css to ensure we wouldn't fail in getting a
> reference for the blkg.
>
> Here the implicit dependency on the css is removed. The association
> continues to rely on the tryget logic walking up the blkg tree. This
> streamlines the three ways that association can happen: normal, swap,
> and writeback.
>
> Signed-off-by: Dennis Zhou <[email protected]>

Acked-by: Tejun Heo <[email protected]>

Thanks.

--
tejun

2018-09-21 20:57:33

by Dennis Zhou

[permalink] [raw]
Subject: Re: [PATCH v3 00/12] block: always associate blkg and refcount cleanup

Hi Jens,

On Tue, Sep 11, 2018 at 02:41:25PM -0400, Dennis Zhou wrote:
> Hi everyone,
>
> [ v3 cover letter and diffstat snipped ]

I reran some basic tests for sanity and everything seems to be fine on
my end. There are acks, and some reviewed-bys, on the series. Is there
anything else you think needs to be done before we let this bake in
for-4.20/for-next?

Thanks,
Dennis

2018-09-22 02:30:51

by Jens Axboe

[permalink] [raw]
Subject: Re: [PATCH v3 00/12] block: always associate blkg and refcount cleanup

On 9/21/18 2:56 PM, Dennis Zhou wrote:
> Hi Jens,
>
> On Tue, Sep 11, 2018 at 02:41:25PM -0400, Dennis Zhou wrote:
>> Hi everyone,
>>
>> [ v3 cover letter and diffstat snipped ]
>
> I reran some basic tests for sanity and everything seems to be fine on
> my end. There are acks, and some reviewed-bys, on the series. Is there
> anything else you think needs to be done before we let this bake in
> for-4.20/for-next?

Looks like it's good to go; I have applied it for 4.20. Thanks, Dennis.

--
Jens Axboe