2018-12-04 18:37:05

by Dennis Zhou

Subject: [PATCH v5 00/14] block: always associate blkg and refcount cleanup

Hi everyone,

A special case with dm is the flush bio, which is statically initialized
before the block device is opened and associated with a disk. This
caused blkg association to hit a NULL pointer dereference. 0005
addresses this case by moving association onto the flush path.

With v4 moving association to piggyback off of bio_set_dev(), the
special case above triggered a NULL pointer dereference. I was overly
cautious with v4 and added the bio_has_queue() check, which is now
removed in v5.

Also, the addition of bio_set_dev_only() wasn't quite right because
writeback and swap share the same bio init paths in many places. The
safer thing to do is double association for those paths and then split
out the bio init paths in a follow-up series.

Changes in v5:
All: Fixed minor grammar and syntactic issues.
0004: Removed bio_has_queue() for being overly cautious.
0005: New, properly addressed the static flush_bio in dm.
0006: Removed the rcu lock in blkcg_bio_issue_check() as the bio will
own a ref on the blkg, making the lock unnecessary.
0011: Consolidated bio_associate_blkg_from_css() (removed __ version).

From v4:
This is a respin of v3 [1] with fixes for the errors reported in [2]
and [3]. v3 was reverted in [4].

The issue in [3] was that bio->bi_disk->queue and blkg->q were out of
sync, so when I changed blk_get_rl() to use blkg->q, the wrong queue
was returned and the elevator from q->elevator->type hit a NULL pointer
dereference. Note, with v4.21, the old block stack was removed and so
this patch was dropped. I did backport this to v4.20 and verified this
series does not encounter the error.

The biggest changes in v4 concern when association occurs and clearly
defining the cases where association should happen.
1. Association is now done when the device is set to keep blkg->q and
bio->bi_disk->queue in sync.
2. When a bio is submitted directly to the device, it will not be
associated with a blkg. This is because a blkg represents the
relationship between a blkcg and a request_queue. Going directly to
the device means the request_queue may not exist, meaning no blkg
will exist.

The patch updating blk_get_rl() was dropped (v3 10/12). The patch from
v3 that always associates a blkg (v3 04/12) was fixed and split into
patches 0004 and 0005. 0011 is new, removing bio_disassociate_task().

Summarizing the ideas of this series:
1. Gracefully handle blkg creation failure by walking up the blkg
tree rather than falling through to root (see the sketch after this
list).
2. Associate a bio with a blkg in core logic rather than per
controller logic.
3. Rather than have a css and blkg reference, hold just a blkg ref
as it also holds a css ref.
4. Switch to percpu ref counting for blkg.
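
To make idea 1 concrete, here is the walk-up logic the series converges
on (the final percpu_ref form from patches 0013/0014; for an in-flight
bio the root blkg is assumed to stay alive, so the loop terminates):

	/* take a ref on the closest live blkg, walking up if needed */
	static inline struct blkcg_gq *blkg_tryget_closest(struct blkcg_gq *blkg)
	{
		while (!percpu_ref_tryget(&blkg->refcnt))
			blkg = blkg->parent;

		return blkg;
	}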

[1] https://lore.kernel.org/lkml/[email protected]/
[2] https://lore.kernel.org/lkml/[email protected]/
[3] https://marc.info/?l=linux-cgroups&m=154110436103723
[4] https://lore.kernel.org/lkml/[email protected]/

This patchset contains the following 14 patches:
0001-blkcg-fix-ref-count-issue-with-bio_blkcg-using-task_.patch
0002-blkcg-update-blkg_lookup_create-to-do-locking.patch
0003-blkcg-convert-blkg_lookup_create-to-find-closest-blk.patch
0004-blkcg-introduce-common-blkg-association-logic.patch
0005-dm-set-flush-bio-device-on-demand.patch
0006-blkcg-associate-blkg-when-associating-a-device.patch
0007-blkcg-consolidate-bio_issue_init-to-be-a-part-of-cor.patch
0008-blkcg-associate-a-blkg-for-pages-being-evicted-by-sw.patch
0009-blkcg-associate-writeback-bios-with-a-blkg.patch
0010-blkcg-remove-bio-bi_css-and-instead-use-bio-bi_blkg.patch
0011-blkcg-remove-additional-reference-to-the-css.patch
0012-blkcg-remove-bio_disassociate_task.patch
0013-blkcg-change-blkg-reference-counting-to-use-percpu_r.patch
0014-blkcg-rename-blkg_try_get-to-blkg_tryget.patch

This patchset is on top of linux-block#for-4.21/block 154989e45fd8.

diffstats below:

Dennis Zhou (14):
blkcg: fix ref count issue with bio_blkcg() using task_css
blkcg: update blkg_lookup_create() to do locking
blkcg: convert blkg_lookup_create() to find closest blkg
blkcg: introduce common blkg association logic
dm: set the static flush bio device on demand
blkcg: associate blkg when associating a device
blkcg: consolidate bio_issue_init() to be a part of core
blkcg: associate a blkg for pages being evicted by swap
blkcg: associate writeback bios with a blkg
blkcg: remove bio->bi_css and instead use bio->bi_blkg
blkcg: remove additional reference to the css
blkcg: remove bio_disassociate_task()
blkcg: change blkg reference counting to use percpu_ref
blkcg: rename blkg_try_get() to blkg_tryget()

Documentation/admin-guide/cgroup-v2.rst | 8 +-
block/bfq-cgroup.c | 4 +-
block/bfq-iosched.c | 2 +-
block/bio.c | 156 +++++++++++++++---------
block/blk-cgroup.c | 95 ++++++++++++---
block/blk-iolatency.c | 24 +---
block/blk-throttle.c | 11 --
block/bounce.c | 3 +-
drivers/block/loop.c | 5 +-
drivers/md/dm.c | 12 +-
drivers/md/raid0.c | 2 +-
fs/buffer.c | 10 +-
fs/ext4/page-io.c | 2 +-
include/linux/bio.h | 29 +++--
include/linux/blk-cgroup.h | 120 ++++++++++++------
include/linux/blk_types.h | 7 +-
include/linux/cgroup.h | 2 +
include/linux/writeback.h | 5 +-
kernel/cgroup/cgroup.c | 48 ++++++--
kernel/trace/blktrace.c | 4 +-
mm/page_io.c | 2 +-
21 files changed, 361 insertions(+), 190 deletions(-)

Thanks,
Dennis


2018-12-04 18:37:12

by Dennis Zhou

Subject: [PATCH 03/14] blkcg: convert blkg_lookup_create() to find closest blkg

There are several scenarios where blkg_lookup_create() can fail, such
as the blkcg dying, the request_queue dying, or simply being OOM. Most
callers handle this by simply falling back to the q->root_blkg and
calling it a day.

This patch implements the notion of the closest blkg. If
blkg_lookup_create() fails to create the intended blkg, it returns the
closest blkg found while walking down from the root, falling back to
the q->root_blkg. blkg_try_get_closest() is introduced and used during
association so a bio is always attached to a blkg.
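
As an illustration, a caller can now rely on always getting a usable
blkg (a minimal sketch based on the blk-iolatency hunk below; kernel
context assumed):

	rcu_read_lock();
	blkcg = bio_blkcg(bio);
	/* never NULL or ERR_PTR: falls back to the closest live blkg */
	blkg = blkg_lookup_create(blkcg, q);
	/* takes a ref via blkg_try_get_closest() */
	bio_associate_blkg(bio, blkg);
	rcu_read_unlock();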

Signed-off-by: Dennis Zhou <[email protected]>
Acked-by: Tejun Heo <[email protected]>
Reviewed-by: Josef Bacik <[email protected]>
---
block/bio.c | 17 ++++++++++-------
block/blk-cgroup.c | 23 ++++++++++++++++-------
block/blk-iolatency.c | 14 ++------------
block/blk-throttle.c | 4 +---
include/linux/blk-cgroup.h | 24 +++++++++++++++---------
5 files changed, 44 insertions(+), 38 deletions(-)

diff --git a/block/bio.c b/block/bio.c
index 346a7f5cb2dd..5c9828524adc 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -2009,21 +2009,24 @@ int bio_associate_blkcg(struct bio *bio, struct cgroup_subsys_state *blkcg_css)
EXPORT_SYMBOL_GPL(bio_associate_blkcg);

/**
- * bio_associate_blkg - associate a bio with the specified blkg
+ * bio_associate_blkg - associate a bio with a blkg
* @bio: target bio
* @blkg: the blkg to associate
*
- * Associate @bio with the blkg specified by @blkg. This is the queue specific
- * blkcg information associated with the @bio, a reference will be taken on the
- * @blkg and will be freed when the bio is freed.
+ * This tries to associate @bio with the specified @blkg. Association failure
+ * is handled by walking up the blkg tree. Therefore, the blkg associated can
+ * be anything between @blkg and the root_blkg. This situation only happens
+ * when a cgroup is dying, in which case the remaining bios will spill to
+ * the closest alive blkg.
+ *
+ * A reference will be taken on the @blkg and will be released when @bio is
+ * freed.
*/
int bio_associate_blkg(struct bio *bio, struct blkcg_gq *blkg)
{
if (unlikely(bio->bi_blkg))
return -EBUSY;
- if (!blkg_try_get(blkg))
- return -ENODEV;
- bio->bi_blkg = blkg;
+ bio->bi_blkg = blkg_try_get_closest(blkg);
return 0;
}

diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
index b421a9457e05..120f2e2835fb 100644
--- a/block/blk-cgroup.c
+++ b/block/blk-cgroup.c
@@ -258,9 +258,8 @@ static struct blkcg_gq *blkg_create(struct blkcg *blkcg,
* that all non-root blkg's have access to the parent blkg. This function
* should be called under RCU read lock and @q->queue_lock.
*
- * Returns pointer to the looked up or created blkg on success, ERR_PTR()
- * value on error. If @q is dead, returns ERR_PTR(-EINVAL). If @q is not
- * dead and bypassing, returns ERR_PTR(-EBUSY).
+ * Returns the blkg or the closest blkg if blkg_create() fails as it walks
+ * down from root.
*/
struct blkcg_gq *__blkg_lookup_create(struct blkcg *blkcg,
struct request_queue *q)
@@ -276,19 +275,29 @@ struct blkcg_gq *__blkg_lookup_create(struct blkcg *blkcg,

/*
* Create blkgs walking down from blkcg_root to @blkcg, so that all
- * non-root blkgs have access to their parents.
+ * non-root blkgs have access to their parents. Returns the closest
+ * blkg to the intended blkg should blkg_create() fail.
*/
while (true) {
struct blkcg *pos = blkcg;
struct blkcg *parent = blkcg_parent(blkcg);
-
- while (parent && !__blkg_lookup(parent, q, false)) {
+ struct blkcg_gq *ret_blkg = q->root_blkg;
+
+ while (parent) {
+ blkg = __blkg_lookup(parent, q, false);
+ if (blkg) {
+ /* remember closest blkg */
+ ret_blkg = blkg;
+ break;
+ }
pos = parent;
parent = blkcg_parent(parent);
}

blkg = blkg_create(pos, q, NULL);
- if (pos == blkcg || IS_ERR(blkg))
+ if (IS_ERR(blkg))
+ return ret_blkg;
+ if (pos == blkcg)
return blkg;
}
}
diff --git a/block/blk-iolatency.c b/block/blk-iolatency.c
index e6f68f15dee9..46e86c34cf79 100644
--- a/block/blk-iolatency.c
+++ b/block/blk-iolatency.c
@@ -483,21 +483,11 @@ static void blkcg_iolatency_throttle(struct rq_qos *rqos, struct bio *bio)
rcu_read_lock();
bio_associate_blkcg(bio, NULL);
blkcg = bio_blkcg(bio);
- blkg = blkg_lookup(blkcg, q);
- if (unlikely(!blkg)) {
- spin_lock_irq(&q->queue_lock);
- blkg = __blkg_lookup_create(blkcg, q);
- if (IS_ERR(blkg))
- blkg = NULL;
- spin_unlock_irq(&q->queue_lock);
- }
- if (!blkg)
- goto out;
-
+ blkg = blkg_lookup_create(blkcg, q);
bio_issue_init(&bio->bi_issue, bio_sectors(bio));
bio_associate_blkg(bio, blkg);
-out:
rcu_read_unlock();
+
while (blkg && blkg->parent) {
struct iolatency_grp *iolat = blkg_to_lat(blkg);
if (!iolat) {
diff --git a/block/blk-throttle.c b/block/blk-throttle.c
index 8f0a104770ee..d648d6720f46 100644
--- a/block/blk-throttle.c
+++ b/block/blk-throttle.c
@@ -2118,9 +2118,7 @@ static inline void throtl_update_latency_buckets(struct throtl_data *td)
static void blk_throtl_assoc_bio(struct throtl_grp *tg, struct bio *bio)
{
#ifdef CONFIG_BLK_DEV_THROTTLING_LOW
- /* fallback to root_blkg if we fail to get a blkg ref */
- if (bio->bi_css && (bio_associate_blkg(bio, tg_to_blkg(tg)) == -ENODEV))
- bio_associate_blkg(bio, bio->bi_disk->queue->root_blkg);
+ bio_associate_blkg(bio, tg_to_blkg(tg));
bio_issue_init(&bio->bi_issue, bio_sectors(bio));
#endif
}
diff --git a/include/linux/blk-cgroup.h b/include/linux/blk-cgroup.h
index b3b1a8187d23..c08e96e521ed 100644
--- a/include/linux/blk-cgroup.h
+++ b/include/linux/blk-cgroup.h
@@ -545,6 +545,20 @@ static inline struct blkcg_gq *blkg_try_get(struct blkcg_gq *blkg)
return NULL;
}

+/**
+ * blkg_try_get_closest - try and get a blkg ref on the closest blkg
+ * @blkg: blkg to get
+ *
+ * This walks up the blkg tree to find the closest non-dying blkg and returns
+ * the blkg it took a ref on, which may not be the blkg passed in.
+ */
+static inline struct blkcg_gq *blkg_try_get_closest(struct blkcg_gq *blkg)
+{
+ while (!atomic_inc_not_zero(&blkg->refcnt))
+ blkg = blkg->parent;
+
+ return blkg;
+}

void __blkg_release_rcu(struct rcu_head *rcu);

@@ -797,15 +811,7 @@ static inline bool blkcg_bio_issue_check(struct request_queue *q,
/* associate blkcg if bio hasn't attached one */
bio_associate_blkcg(bio, NULL);
blkcg = bio_blkcg(bio);
-
- blkg = blkg_lookup(blkcg, q);
- if (unlikely(!blkg)) {
- spin_lock_irq(&q->queue_lock);
- blkg = __blkg_lookup_create(blkcg, q);
- if (IS_ERR(blkg))
- blkg = NULL;
- spin_unlock_irq(&q->queue_lock);
- }
+ blkg = blkg_lookup_create(blkcg, q);

throtl = blk_throtl_bio(q, blkg, bio);

--
2.17.1


2018-12-04 18:37:15

by Dennis Zhou

Subject: [PATCH 05/14] dm: set the static flush bio device on demand

The next patch changes the bio_set_dev() macro to associate a bio with
a blkg based on the device set. However, dm creates a static bio to be
used as the basis for cloning empty flush bios on creation. The
bio_set_dev() call in alloc_dev() will cause problems once association
is added to bio_set_dev() because the call happens before the bdev is
associated with a gendisk (bd_disk is %NULL). To get around this, set
the device on the static bio every time a flush is issued and use that
to clone to the other bios.
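
The association therefore brackets each flush issue instead of living
for the lifetime of the mapped_device (a condensed view of the
__send_empty_flush() hunk below; the clones pick up the association via
the existing clone path):

	bio_set_dev(ci->bio, ci->io->md->bdev);	/* associate at issue time */
	while ((ti = dm_table_get_target(ci->map, target_nr++)))
		__send_duplicate_bios(ci, ti, ti->num_flush_bios, NULL);
	bio_disassociate_blkg(ci->bio);		/* static bio keeps no ref */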

Signed-off-by: Dennis Zhou <[email protected]>
Cc: Alasdair Kergon <[email protected]>
Cc: Mike Snitzer <[email protected]>
---
block/bio.c | 1 +
drivers/md/dm.c | 12 +++++++++++-
2 files changed, 12 insertions(+), 1 deletion(-)

diff --git a/block/bio.c b/block/bio.c
index 452b8e79b998..41ebb3f8e2fc 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -2021,6 +2021,7 @@ void bio_disassociate_blkg(struct bio *bio)
bio->bi_blkg = NULL;
}
}
+EXPORT_SYMBOL_GPL(bio_disassociate_blkg);

/**
* __bio_associate_blkg - associate a bio with a blkg
diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index a733e4c920af..a2d6f8b33d23 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -1417,10 +1417,21 @@ static int __send_empty_flush(struct clone_info *ci)
unsigned target_nr = 0;
struct dm_target *ti;

+ /*
+ * Empty flush uses a statically initialized bio as the base for
+ * cloning, &md->flush_bio. However, blkg association requires that
+ * a bdev is associated with a gendisk, which doesn't happen until the
+ * bdev is opened. So, blkg association is done at issue time of the
+ * flush rather than when the device is created in alloc_dev().
+ */
+ bio_set_dev(ci->bio, ci->io->md->bdev);
+
BUG_ON(bio_has_data(ci->bio));
while ((ti = dm_table_get_target(ci->map, target_nr++)))
__send_duplicate_bios(ci, ti, ti->num_flush_bios, NULL);

+ bio_disassociate_blkg(ci->bio);
+
return 0;
}

@@ -1939,7 +1950,6 @@ static struct mapped_device *alloc_dev(int minor)
goto bad;

bio_init(&md->flush_bio, NULL, 0);
- bio_set_dev(&md->flush_bio, md->bdev);
md->flush_bio.bi_opf = REQ_OP_WRITE | REQ_PREFLUSH | REQ_SYNC;

dm_stats_init(&md->stats);
--
2.17.1


2018-12-04 18:37:31

by Dennis Zhou

Subject: [PATCH 13/14] blkcg: change blkg reference counting to use percpu_ref

Every bio is now associated with a blkg, putting blkg_get(),
blkg_try_get(), and blkg_put() on the hot path. Switch the refcnt in
blkg over to percpu_ref.
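
For reference, a condensed view of the percpu_ref lifecycle the diff
below maps onto (fragments from the hunks; error handling elided):

	/* at blkg creation */
	ret = percpu_ref_init(&blkg->refcnt, blkg_release, 0,
			      GFP_NOWAIT | __GFP_NOWARN);

	percpu_ref_tryget(&blkg->refcnt);	/* hot path: fails once killed */
	percpu_ref_put(&blkg->refcnt);		/* hot path put */

	percpu_ref_kill(&blkg->refcnt);		/* blkg_destroy(): drops the init ref */
	percpu_ref_exit(&blkg->refcnt);		/* __blkg_release(): frees percpu data */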

Signed-off-by: Dennis Zhou <[email protected]>
Acked-by: Tejun Heo <[email protected]>
---
block/blk-cgroup.c | 41 ++++++++++++++++++++++++++++++++++++--
include/linux/blk-cgroup.h | 15 +++++---------
2 files changed, 44 insertions(+), 12 deletions(-)

diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
index 120f2e2835fb..2ca7611fe274 100644
--- a/block/blk-cgroup.c
+++ b/block/blk-cgroup.c
@@ -81,6 +81,37 @@ static void blkg_free(struct blkcg_gq *blkg)
kfree(blkg);
}

+static void __blkg_release(struct rcu_head *rcu)
+{
+ struct blkcg_gq *blkg = container_of(rcu, struct blkcg_gq, rcu_head);
+
+ percpu_ref_exit(&blkg->refcnt);
+
+ /* release the blkcg and parent blkg refs this blkg has been holding */
+ css_put(&blkg->blkcg->css);
+ if (blkg->parent)
+ blkg_put(blkg->parent);
+
+ wb_congested_put(blkg->wb_congested);
+
+ blkg_free(blkg);
+}
+
+/*
+ * A group is RCU protected, but having an rcu lock does not mean that one
+ * can access all the fields of blkg and assume these are valid. For
+ * example, don't try to follow throtl_data and request queue links.
+ *
+ * Having a reference to blkg under an rcu allows accesses to only values
+ * local to groups like group stats and group rate limits.
+ */
+static void blkg_release(struct percpu_ref *ref)
+{
+ struct blkcg_gq *blkg = container_of(ref, struct blkcg_gq, refcnt);
+
+ call_rcu(&blkg->rcu_head, __blkg_release);
+}
+
/**
* blkg_alloc - allocate a blkg
* @blkcg: block cgroup the new blkg is associated with
@@ -107,7 +138,6 @@ static struct blkcg_gq *blkg_alloc(struct blkcg *blkcg, struct request_queue *q,
blkg->q = q;
INIT_LIST_HEAD(&blkg->q_node);
blkg->blkcg = blkcg;
- atomic_set(&blkg->refcnt, 1);

for (i = 0; i < BLKCG_MAX_POLS; i++) {
struct blkcg_policy *pol = blkcg_policy[i];
@@ -207,6 +237,11 @@ static struct blkcg_gq *blkg_create(struct blkcg *blkcg,
blkg_get(blkg->parent);
}

+ ret = percpu_ref_init(&blkg->refcnt, blkg_release, 0,
+ GFP_NOWAIT | __GFP_NOWARN);
+ if (ret)
+ goto err_cancel_ref;
+
/* invoke per-policy init */
for (i = 0; i < BLKCG_MAX_POLS; i++) {
struct blkcg_policy *pol = blkcg_policy[i];
@@ -239,6 +274,8 @@ static struct blkcg_gq *blkg_create(struct blkcg *blkcg,
blkg_put(blkg);
return ERR_PTR(ret);

+err_cancel_ref:
+ percpu_ref_exit(&blkg->refcnt);
err_put_congested:
wb_congested_put(wb_congested);
err_put_css:
@@ -367,7 +404,7 @@ static void blkg_destroy(struct blkcg_gq *blkg)
* Put the reference taken at the time of creation so that when all
* queues are gone, group can be destroyed.
*/
- blkg_put(blkg);
+ percpu_ref_kill(&blkg->refcnt);
}

/**
diff --git a/include/linux/blk-cgroup.h b/include/linux/blk-cgroup.h
index 68cae7961060..05909cf31ed1 100644
--- a/include/linux/blk-cgroup.h
+++ b/include/linux/blk-cgroup.h
@@ -123,7 +123,7 @@ struct blkcg_gq {
struct blkcg_gq *parent;

/* reference count */
- atomic_t refcnt;
+ struct percpu_ref refcnt;

/* is this blkg online? protected by both blkcg and q locks */
bool online;
@@ -486,8 +486,7 @@ static inline int blkg_path(struct blkcg_gq *blkg, char *buf, int buflen)
*/
static inline void blkg_get(struct blkcg_gq *blkg)
{
- WARN_ON_ONCE(atomic_read(&blkg->refcnt) <= 0);
- atomic_inc(&blkg->refcnt);
+ percpu_ref_get(&blkg->refcnt);
}

/**
@@ -499,7 +498,7 @@ static inline void blkg_get(struct blkcg_gq *blkg)
*/
static inline struct blkcg_gq *blkg_try_get(struct blkcg_gq *blkg)
{
- if (atomic_inc_not_zero(&blkg->refcnt))
+ if (percpu_ref_tryget(&blkg->refcnt))
return blkg;
return NULL;
}
@@ -513,23 +512,19 @@ static inline struct blkcg_gq *blkg_try_get(struct blkcg_gq *blkg)
*/
static inline struct blkcg_gq *blkg_try_get_closest(struct blkcg_gq *blkg)
{
- while (!atomic_inc_not_zero(&blkg->refcnt))
+ while (!percpu_ref_tryget(&blkg->refcnt))
blkg = blkg->parent;

return blkg;
}

-void __blkg_release_rcu(struct rcu_head *rcu);
-
/**
* blkg_put - put a blkg reference
* @blkg: blkg to put
*/
static inline void blkg_put(struct blkcg_gq *blkg)
{
- WARN_ON_ONCE(atomic_read(&blkg->refcnt) <= 0);
- if (atomic_dec_and_test(&blkg->refcnt))
- call_rcu(&blkg->rcu_head, __blkg_release_rcu);
+ percpu_ref_put(&blkg->refcnt);
}

/**
--
2.17.1


2018-12-04 18:37:43

by Dennis Zhou

Subject: [PATCH 14/14] blkcg: rename blkg_try_get() to blkg_tryget()

blkg reference counting now uses percpu_ref rather than atomic_t. Let's
make this consistent with css_tryget(). This renames blkg_try_get() to
blkg_tryget(), which now returns a bool rather than the blkg or %NULL.
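
Callers switch from a pointer check to a boolean test, e.g. (from the
blkcg_maybe_throttle_current() hunk below):

	/* before */
	blkg = blkg_try_get(blkg);
	if (!blkg)
		goto out;

	/* after */
	if (!blkg_tryget(blkg))
		goto out;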

Signed-off-by: Dennis Zhou <[email protected]>
Reviewed-by: Josef Bacik <[email protected]>
Acked-by: Tejun Heo <[email protected]>
---
block/bio.c | 2 +-
block/blk-cgroup.c | 3 +--
block/blk-iolatency.c | 2 +-
include/linux/blk-cgroup.h | 12 +++++-------
4 files changed, 8 insertions(+), 11 deletions(-)

diff --git a/block/bio.c b/block/bio.c
index 7ec5316e6ecc..06760543ec81 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -1990,7 +1990,7 @@ static void __bio_associate_blkg(struct bio *bio, struct blkcg_gq *blkg)
{
bio_disassociate_blkg(bio);

- bio->bi_blkg = blkg_try_get_closest(blkg);
+ bio->bi_blkg = blkg_tryget_closest(blkg);
}

/**
diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
index 2ca7611fe274..6bd0619a7d6e 100644
--- a/block/blk-cgroup.c
+++ b/block/blk-cgroup.c
@@ -1736,8 +1736,7 @@ void blkcg_maybe_throttle_current(void)
blkg = blkg_lookup(blkcg, q);
if (!blkg)
goto out;
- blkg = blkg_try_get(blkg);
- if (!blkg)
+ if (!blkg_tryget(blkg))
goto out;
rcu_read_unlock();

diff --git a/block/blk-iolatency.c b/block/blk-iolatency.c
index 5a79f06a730d..0b14c3d57769 100644
--- a/block/blk-iolatency.c
+++ b/block/blk-iolatency.c
@@ -698,7 +698,7 @@ static void blkiolatency_timer_fn(struct timer_list *t)
* We could be exiting, don't access the pd unless we have a
* ref on the blkg.
*/
- if (!blkg_try_get(blkg))
+ if (!blkg_tryget(blkg))
continue;

iolat = blkg_to_lat(blkg);
diff --git a/include/linux/blk-cgroup.h b/include/linux/blk-cgroup.h
index 05909cf31ed1..64ac89130ae8 100644
--- a/include/linux/blk-cgroup.h
+++ b/include/linux/blk-cgroup.h
@@ -490,27 +490,25 @@ static inline void blkg_get(struct blkcg_gq *blkg)
}

/**
- * blkg_try_get - try and get a blkg reference
+ * blkg_tryget - try and get a blkg reference
* @blkg: blkg to get
*
* This is for use when doing an RCU lookup of the blkg. We may be in the midst
* of freeing this blkg, so we can only use it if the refcnt is not zero.
*/
-static inline struct blkcg_gq *blkg_try_get(struct blkcg_gq *blkg)
+static inline bool blkg_tryget(struct blkcg_gq *blkg)
{
- if (percpu_ref_tryget(&blkg->refcnt))
- return blkg;
- return NULL;
+ return percpu_ref_tryget(&blkg->refcnt);
}

/**
- * blkg_try_get_closest - try and get a blkg ref on the closest blkg
+ * blkg_tryget_closest - try and get a blkg ref on the closest blkg
* @blkg: blkg to get
*
* This walks up the blkg tree to find the closest non-dying blkg and returns
* the blkg it took a ref on, which may not be the blkg passed in.
*/
-static inline struct blkcg_gq *blkg_try_get_closest(struct blkcg_gq *blkg)
+static inline struct blkcg_gq *blkg_tryget_closest(struct blkcg_gq *blkg)
{
while (!percpu_ref_tryget(&blkg->refcnt))
blkg = blkg->parent;
--
2.17.1


2018-12-04 18:37:54

by Dennis Zhou

Subject: [PATCH 12/14] blkcg: remove bio_disassociate_task()

Now that a bio only holds a blkg reference, cleanup is simply putting
back that reference. Remove bio_disassociate_task() as it just calls
bio_disassociate_blkg(), and call the latter directly.

Signed-off-by: Dennis Zhou <[email protected]>
Acked-by: Tejun Heo <[email protected]>
---
block/bio.c | 11 +----------
include/linux/bio.h | 2 --
2 files changed, 1 insertion(+), 12 deletions(-)

diff --git a/block/bio.c b/block/bio.c
index ce1e512dca5a..7ec5316e6ecc 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -244,7 +244,7 @@ struct bio_vec *bvec_alloc(gfp_t gfp_mask, int nr, unsigned long *idx,

void bio_uninit(struct bio *bio)
{
- bio_disassociate_task(bio);
+ bio_disassociate_blkg(bio);
}
EXPORT_SYMBOL(bio_uninit);

@@ -2073,15 +2073,6 @@ void bio_associate_blkg(struct bio *bio)
}
EXPORT_SYMBOL_GPL(bio_associate_blkg);

-/**
- * bio_disassociate_task - undo bio_associate_current()
- * @bio: target bio
- */
-void bio_disassociate_task(struct bio *bio)
-{
- bio_disassociate_blkg(bio);
-}
-
/**
* bio_clone_blkg_association - clone blkg association from src to dst bio
* @dst: destination bio
diff --git a/include/linux/bio.h b/include/linux/bio.h
index 84e1c4dc703a..7380b094dcca 100644
--- a/include/linux/bio.h
+++ b/include/linux/bio.h
@@ -516,7 +516,6 @@ void bio_disassociate_blkg(struct bio *bio);
void bio_associate_blkg(struct bio *bio);
void bio_associate_blkg_from_css(struct bio *bio,
struct cgroup_subsys_state *css);
-void bio_disassociate_task(struct bio *bio);
void bio_clone_blkg_association(struct bio *dst, struct bio *src);
#else /* CONFIG_BLK_CGROUP */
static inline void bio_disassociate_blkg(struct bio *bio) { }
@@ -524,7 +523,6 @@ static inline void bio_associate_blkg(struct bio *bio) { }
static inline void bio_associate_blkg_from_css(struct bio *bio,
struct cgroup_subsys_state *css)
{ }
-static inline void bio_disassociate_task(struct bio *bio) { }
static inline void bio_clone_blkg_association(struct bio *dst,
struct bio *src) { }
#endif /* CONFIG_BLK_CGROUP */
--
2.17.1


2018-12-04 18:38:02

by Dennis Zhou

Subject: [PATCH 08/14] blkcg: associate a blkg for pages being evicted by swap

A prior patch in this series added blkg association to bios issued by
cgroups. There are two other paths whose work we want to attribute back
to the appropriate cgroup: swap and writeback. Here we modify the way
swap tags bios to include the blkg. Writeback will be tackled in the
next patch.
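
After this patch the swapout path looks like the following (a sketch of
the __swap_writepage() flow from the mm/page_io.c hunk below):

	bio->bi_opf = REQ_OP_WRITE | REQ_SWAP | wbc_to_write_flags(wbc);
	/* resolves page->mem_cgroup to a css, then to a blkg on bio's queue */
	bio_associate_blkg_from_page(bio, page);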

Signed-off-by: Dennis Zhou <[email protected]>
Reviewed-by: Josef Bacik <[email protected]>
Acked-by: Tejun Heo <[email protected]>
---
block/bio.c | 62 +++++++++++++++++++++++++++------------------
include/linux/bio.h | 6 ++---
mm/page_io.c | 2 +-
3 files changed, 42 insertions(+), 28 deletions(-)

diff --git a/block/bio.c b/block/bio.c
index 90089124b512..f0f069c1823c 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -1957,30 +1957,6 @@ EXPORT_SYMBOL(bioset_init_from_src);

#ifdef CONFIG_BLK_CGROUP

-#ifdef CONFIG_MEMCG
-/**
- * bio_associate_blkcg_from_page - associate a bio with the page's blkcg
- * @bio: target bio
- * @page: the page to lookup the blkcg from
- *
- * Associate @bio with the blkcg from @page's owning memcg. This works like
- * every other associate function wrt references.
- */
-int bio_associate_blkcg_from_page(struct bio *bio, struct page *page)
-{
- struct cgroup_subsys_state *blkcg_css;
-
- if (unlikely(bio->bi_css))
- return -EBUSY;
- if (!page->mem_cgroup)
- return 0;
- blkcg_css = cgroup_get_e_css(page->mem_cgroup->css.cgroup,
- &io_cgrp_subsys);
- bio->bi_css = blkcg_css;
- return 0;
-}
-#endif /* CONFIG_MEMCG */
-
/**
* bio_associate_blkcg - associate a bio with the specified blkcg
* @bio: target bio
@@ -2045,6 +2021,44 @@ static void __bio_associate_blkg(struct bio *bio, struct blkcg_gq *blkg)
bio->bi_blkg = blkg_try_get_closest(blkg);
}

+static void __bio_associate_blkg_from_css(struct bio *bio,
+ struct cgroup_subsys_state *css)
+{
+ struct blkcg_gq *blkg;
+
+ rcu_read_lock();
+
+ blkg = blkg_lookup_create(css_to_blkcg(css), bio->bi_disk->queue);
+ __bio_associate_blkg(bio, blkg);
+
+ rcu_read_unlock();
+}
+
+#ifdef CONFIG_MEMCG
+/**
+ * bio_associate_blkg_from_page - associate a bio with the page's blkg
+ * @bio: target bio
+ * @page: the page to lookup the blkcg from
+ *
+ * Associate @bio with the blkg from @page's owning memcg and the respective
+ * request_queue. This works like every other associate function wrt
+ * references.
+ */
+void bio_associate_blkg_from_page(struct bio *bio, struct page *page)
+{
+ struct cgroup_subsys_state *css;
+
+ if (unlikely(bio->bi_css))
+ return;
+ if (!page->mem_cgroup)
+ return;
+
+ css = cgroup_get_e_css(page->mem_cgroup->css.cgroup, &io_cgrp_subsys);
+ bio->bi_css = css;
+ __bio_associate_blkg_from_css(bio, css);
+}
+#endif /* CONFIG_MEMCG */
+
/**
* bio_associate_blkg - associate a bio with a blkg
* @bio: target bio
diff --git a/include/linux/bio.h b/include/linux/bio.h
index 6ee2ea8b378a..f13572c254a7 100644
--- a/include/linux/bio.h
+++ b/include/linux/bio.h
@@ -505,10 +505,10 @@ do { \
disk_devt((bio)->bi_disk)

#if defined(CONFIG_MEMCG) && defined(CONFIG_BLK_CGROUP)
-int bio_associate_blkcg_from_page(struct bio *bio, struct page *page);
+void bio_associate_blkg_from_page(struct bio *bio, struct page *page);
#else
-static inline int bio_associate_blkcg_from_page(struct bio *bio,
- struct page *page) { return 0; }
+static inline void bio_associate_blkg_from_page(struct bio *bio,
+ struct page *page) { }
#endif

#ifdef CONFIG_BLK_CGROUP
diff --git a/mm/page_io.c b/mm/page_io.c
index 5bdfd21c1bd9..3475733b1926 100644
--- a/mm/page_io.c
+++ b/mm/page_io.c
@@ -339,7 +339,7 @@ int __swap_writepage(struct page *page, struct writeback_control *wbc,
goto out;
}
bio->bi_opf = REQ_OP_WRITE | REQ_SWAP | wbc_to_write_flags(wbc);
- bio_associate_blkcg_from_page(bio, page);
+ bio_associate_blkg_from_page(bio, page);
count_swpout_vm_event(page);
set_page_writeback(page);
unlock_page(page);
--
2.17.1


2018-12-04 18:38:08

by Dennis Zhou

Subject: [PATCH 11/14] blkcg: remove additional reference to the css

The previous patch in this series removed carrying around a pointer to
the css in the bio. However, the blkg association logic still relied on
taking a reference on the css to ensure we wouldn't fail in getting a
reference for the blkg.

Here the implicit dependency on the css is removed. The association
continues to rely on the tryget logic walking up the blkg tree. This
streamlines the three ways that association can happen: normal, swap,
and writeback.
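
The resulting model: a bio holds only a blkg ref, and the blkg itself
holds a css ref for its whole lifetime, so no separate css get/put is
needed per bio. For example, the swap path becomes (from the bio.c hunk
below; cgroup_e_css() takes no reference, hence the RCU lock and the
tryget done inside association):

	rcu_read_lock();
	css = cgroup_e_css(page->mem_cgroup->css.cgroup, &io_cgrp_subsys);
	bio_associate_blkg_from_css(bio, css);
	rcu_read_unlock();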

Signed-off-by: Dennis Zhou <[email protected]>
Acked-by: Tejun Heo <[email protected]>
---
block/bio.c | 66 ++++++++++++++++----------------------
include/linux/blk-cgroup.h | 41 -----------------------
include/linux/cgroup.h | 2 ++
kernel/cgroup/cgroup.c | 48 +++++++++++++++++++++------
4 files changed, 69 insertions(+), 88 deletions(-)

diff --git a/block/bio.c b/block/bio.c
index 2b6bc7b805ec..ce1e512dca5a 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -1966,8 +1966,6 @@ EXPORT_SYMBOL(bioset_init_from_src);
void bio_disassociate_blkg(struct bio *bio)
{
if (bio->bi_blkg) {
- /* a ref is always taken on css */
- css_put(&bio_blkcg(bio)->css);
blkg_put(bio->bi_blkg);
bio->bi_blkg = NULL;
}
@@ -1995,33 +1993,31 @@ static void __bio_associate_blkg(struct bio *bio, struct blkcg_gq *blkg)
bio->bi_blkg = blkg_try_get_closest(blkg);
}

-static void __bio_associate_blkg_from_css(struct bio *bio,
- struct cgroup_subsys_state *css)
-{
- struct blkcg_gq *blkg;
-
- rcu_read_lock();
-
- blkg = blkg_lookup_create(css_to_blkcg(css), bio->bi_disk->queue);
- __bio_associate_blkg(bio, blkg);
-
- rcu_read_unlock();
-}
-
/**
* bio_associate_blkg_from_css - associate a bio with a specified css
* @bio: target bio
* @css: target css
*
* Associate @bio with the blkg found by combining the css's blkg and the
- * request_queue of the @bio. This takes a reference on the css that will
- * be put upon freeing of @bio.
+ * request_queue of the @bio. This falls back to the queue's root_blkg if
+ * the association fails with the css.
*/
void bio_associate_blkg_from_css(struct bio *bio,
struct cgroup_subsys_state *css)
{
- css_get(css);
- __bio_associate_blkg_from_css(bio, css);
+ struct request_queue *q = bio->bi_disk->queue;
+ struct blkcg_gq *blkg;
+
+ rcu_read_lock();
+
+ if (!css || !css->parent)
+ blkg = q->root_blkg;
+ else
+ blkg = blkg_lookup_create(css_to_blkcg(css), q);
+
+ __bio_associate_blkg(bio, blkg);
+
+ rcu_read_unlock();
}
EXPORT_SYMBOL_GPL(bio_associate_blkg_from_css);

@@ -2032,8 +2028,8 @@ EXPORT_SYMBOL_GPL(bio_associate_blkg_from_css);
* @page: the page to lookup the blkcg from
*
* Associate @bio with the blkg from @page's owning memcg and the respective
- * request_queue. This works like every other associate function wrt
- * references.
+ * request_queue. If cgroup_e_css returns %NULL, fall back to the queue's
+ * root_blkg.
*/
void bio_associate_blkg_from_page(struct bio *bio, struct page *page)
{
@@ -2042,8 +2038,12 @@ void bio_associate_blkg_from_page(struct bio *bio, struct page *page)
if (!page->mem_cgroup)
return;

- css = cgroup_get_e_css(page->mem_cgroup->css.cgroup, &io_cgrp_subsys);
- __bio_associate_blkg_from_css(bio, css);
+ rcu_read_lock();
+
+ css = cgroup_e_css(page->mem_cgroup->css.cgroup, &io_cgrp_subsys);
+ bio_associate_blkg_from_css(bio, css);
+
+ rcu_read_unlock();
}
#endif /* CONFIG_MEMCG */

@@ -2058,24 +2058,16 @@ void bio_associate_blkg_from_page(struct bio *bio, struct page *page)
*/
void bio_associate_blkg(struct bio *bio)
{
- struct request_queue *q = bio->bi_disk->queue;
- struct blkcg *blkcg;
- struct blkcg_gq *blkg;
+ struct cgroup_subsys_state *css;

rcu_read_lock();

if (bio->bi_blkg)
- blkcg = bio->bi_blkg->blkcg;
+ css = &bio_blkcg(bio)->css;
else
- blkcg = css_to_blkcg(blkcg_get_css());
+ css = blkcg_css();

- if (!blkcg->css.parent) {
- __bio_associate_blkg(bio, q->root_blkg);
- } else {
- blkg = blkg_lookup_create(blkcg, q);
-
- __bio_associate_blkg(bio, blkg);
- }
+ bio_associate_blkg_from_css(bio, css);

rcu_read_unlock();
}
@@ -2097,10 +2089,8 @@ void bio_disassociate_task(struct bio *bio)
*/
void bio_clone_blkg_association(struct bio *dst, struct bio *src)
{
- if (src->bi_blkg) {
- css_get(&bio_blkcg(src)->css);
+ if (src->bi_blkg)
__bio_associate_blkg(dst, src->bi_blkg);
- }
}
EXPORT_SYMBOL_GPL(bio_clone_blkg_association);
#endif /* CONFIG_BLK_CGROUP */
diff --git a/include/linux/blk-cgroup.h b/include/linux/blk-cgroup.h
index 3674ff8d8e50..68cae7961060 100644
--- a/include/linux/blk-cgroup.h
+++ b/include/linux/blk-cgroup.h
@@ -246,47 +246,6 @@ static inline struct cgroup_subsys_state *blkcg_css(void)
return task_css(current, io_cgrp_id);
}

-/**
- * blkcg_get_css - find and get a reference to the css
- *
- * Find the css associated with either the kthread or the current task.
- * This takes a reference on the blkcg which will need to be managed by the
- * caller.
- */
-static inline struct cgroup_subsys_state *blkcg_get_css(void)
-{
- struct cgroup_subsys_state *css;
-
- rcu_read_lock();
-
- css = kthread_blkcg();
- if (css) {
- css_get(css);
- } else {
- /*
- * This is a bit complicated. It is possible task_css() is
- * seeing an old css pointer here. This is caused by the
- * current thread migrating away from this cgroup and this
- * cgroup dying. css_tryget() will fail when trying to take a
- * ref on a cgroup that's ref count has hit 0.
- *
- * Therefore, if it does fail, this means current must have
- * been swapped away already and this is waiting for it to
- * propagate on the polling cpu. Hence the use of cpu_relax().
- */
- while (true) {
- css = task_css(current, io_cgrp_id);
- if (likely(css_tryget(css)))
- break;
- cpu_relax();
- }
- }
-
- rcu_read_unlock();
-
- return css;
-}
-
static inline struct blkcg *css_to_blkcg(struct cgroup_subsys_state *css)
{
return css ? container_of(css, struct blkcg, css) : NULL;
diff --git a/include/linux/cgroup.h b/include/linux/cgroup.h
index 9d12757a65b0..9968332cceed 100644
--- a/include/linux/cgroup.h
+++ b/include/linux/cgroup.h
@@ -93,6 +93,8 @@ extern struct css_set init_css_set;

bool css_has_online_children(struct cgroup_subsys_state *css);
struct cgroup_subsys_state *css_from_id(int id, struct cgroup_subsys *ss);
+struct cgroup_subsys_state *cgroup_e_css(struct cgroup *cgroup,
+ struct cgroup_subsys *ss);
struct cgroup_subsys_state *cgroup_get_e_css(struct cgroup *cgroup,
struct cgroup_subsys *ss);
struct cgroup_subsys_state *css_tryget_online_from_dir(struct dentry *dentry,
diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
index 6aaf5dd5383b..8b79318810ad 100644
--- a/kernel/cgroup/cgroup.c
+++ b/kernel/cgroup/cgroup.c
@@ -493,7 +493,7 @@ static struct cgroup_subsys_state *cgroup_tryget_css(struct cgroup *cgrp,
}

/**
- * cgroup_e_css - obtain a cgroup's effective css for the specified subsystem
+ * cgroup_e_css_by_mask - obtain a cgroup's effective css for the specified ss
* @cgrp: the cgroup of interest
* @ss: the subsystem of interest (%NULL returns @cgrp->self)
*
@@ -502,8 +502,8 @@ static struct cgroup_subsys_state *cgroup_tryget_css(struct cgroup *cgrp,
* enabled. If @ss is associated with the hierarchy @cgrp is on, this
* function is guaranteed to return non-NULL css.
*/
-static struct cgroup_subsys_state *cgroup_e_css(struct cgroup *cgrp,
- struct cgroup_subsys *ss)
+static struct cgroup_subsys_state *cgroup_e_css_by_mask(struct cgroup *cgrp,
+ struct cgroup_subsys *ss)
{
lockdep_assert_held(&cgroup_mutex);

@@ -523,6 +523,35 @@ static struct cgroup_subsys_state *cgroup_e_css(struct cgroup *cgrp,
return cgroup_css(cgrp, ss);
}

+/**
+ * cgroup_e_css - obtain a cgroup's effective css for the specified subsystem
+ * @cgrp: the cgroup of interest
+ * @ss: the subsystem of interest
+ *
+ * Find and get the effective css of @cgrp for @ss. The effective css is
+ * defined as the matching css of the nearest ancestor including self which
+ * has @ss enabled. If @ss is not mounted on the hierarchy @cgrp is on,
+ * the root css is returned, so this function always returns a valid css.
+ *
+ * The returned css is not guaranteed to be online, and therefore it is the
+ * caller's responsibility to tryget a reference for it.
+ */
+struct cgroup_subsys_state *cgroup_e_css(struct cgroup *cgrp,
+ struct cgroup_subsys *ss)
+{
+ struct cgroup_subsys_state *css;
+
+ do {
+ css = cgroup_css(cgrp, ss);
+
+ if (css)
+ return css;
+ cgrp = cgroup_parent(cgrp);
+ } while (cgrp);
+
+ return init_css_set.subsys[ss->id];
+}
+
/**
* cgroup_get_e_css - get a cgroup's effective css for the specified subsystem
* @cgrp: the cgroup of interest
@@ -605,10 +634,11 @@ EXPORT_SYMBOL_GPL(of_css);
*
* Should be called under cgroup_[tree_]mutex.
*/
-#define for_each_e_css(css, ssid, cgrp) \
- for ((ssid) = 0; (ssid) < CGROUP_SUBSYS_COUNT; (ssid)++) \
- if (!((css) = cgroup_e_css(cgrp, cgroup_subsys[(ssid)]))) \
- ; \
+#define for_each_e_css(css, ssid, cgrp) \
+ for ((ssid) = 0; (ssid) < CGROUP_SUBSYS_COUNT; (ssid)++) \
+ if (!((css) = cgroup_e_css_by_mask(cgrp, \
+ cgroup_subsys[(ssid)]))) \
+ ; \
else

/**
@@ -1007,7 +1037,7 @@ static struct css_set *find_existing_css_set(struct css_set *old_cset,
* @ss is in this hierarchy, so we want the
* effective css from @cgrp.
*/
- template[i] = cgroup_e_css(cgrp, ss);
+ template[i] = cgroup_e_css_by_mask(cgrp, ss);
} else {
/*
* @ss is not in this hierarchy, so we don't want
@@ -3024,7 +3054,7 @@ static int cgroup_apply_control(struct cgroup *cgrp)
return ret;

/*
- * At this point, cgroup_e_css() results reflect the new csses
+ * At this point, cgroup_e_css_by_mask() results reflect the new csses
* making the following cgroup_update_dfl_csses() properly update
* css associations of all tasks in the subtree.
*/
--
2.17.1


2018-12-04 18:38:17

by Dennis Zhou

Subject: [PATCH 06/14] blkcg: associate blkg when associating a device

Previously, blkg association was handled by controller-specific code in
blk-throttle and blk-iolatency. However, because a blkg represents a
relationship between a blkcg and a request_queue, it makes sense to keep
blkg->q and bio->bi_disk->queue consistent.

This patch moves association into the bio_set_dev() macro. This should
cover the majority of cases where the device is set/changed, keeping the
two pointers consistent. Fallback code is added to
blkcg_bio_issue_check() to catch any missing paths.
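
Assembled from the include/linux/bio.h hunk below, the macro ends up
roughly as (surrounding context condensed):

	#define bio_set_dev(bio, bdev)			\
	do {						\
		bio_clear_flag(bio, BIO_THROTTLED);	\
		(bio)->bi_disk = (bdev)->bd_disk;	\
		(bio)->bi_partno = (bdev)->bd_partno;	\
		bio_associate_blkg(bio);		\
	} while (0)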

Signed-off-by: Dennis Zhou <[email protected]>
---
block/bio.c | 1 +
block/blk-iolatency.c | 4 +---
block/blk-throttle.c | 1 -
include/linux/bio.h | 2 ++
include/linux/blk-cgroup.h | 11 +++--------
5 files changed, 7 insertions(+), 12 deletions(-)

diff --git a/block/bio.c b/block/bio.c
index 41ebb3f8e2fc..1e852ab904aa 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -2074,6 +2074,7 @@ void bio_associate_blkg(struct bio *bio)

rcu_read_unlock();
}
+EXPORT_SYMBOL_GPL(bio_associate_blkg);

/**
* bio_disassociate_task - undo bio_associate_current()
diff --git a/block/blk-iolatency.c b/block/blk-iolatency.c
index cdbd10564e66..e6b47c255521 100644
--- a/block/blk-iolatency.c
+++ b/block/blk-iolatency.c
@@ -472,14 +472,12 @@ static void check_scale_change(struct iolatency_grp *iolat)
static void blkcg_iolatency_throttle(struct rq_qos *rqos, struct bio *bio)
{
struct blk_iolatency *blkiolat = BLKIOLATENCY(rqos);
- struct blkcg_gq *blkg;
+ struct blkcg_gq *blkg = bio->bi_blkg;
bool issue_as_root = bio_issue_as_root_blkg(bio);

if (!blk_iolatency_enabled(blkiolat))
return;

- bio_associate_blkg(bio);
- blkg = bio->bi_blkg;
bio_issue_init(&bio->bi_issue, bio_sectors(bio));

while (blkg && blkg->parent) {
diff --git a/block/blk-throttle.c b/block/blk-throttle.c
index 228c3a007ebc..1c6529df2002 100644
--- a/block/blk-throttle.c
+++ b/block/blk-throttle.c
@@ -2118,7 +2118,6 @@ static inline void throtl_update_latency_buckets(struct throtl_data *td)
static void blk_throtl_assoc_bio(struct bio *bio)
{
#ifdef CONFIG_BLK_DEV_THROTTLING_LOW
- bio_associate_blkg(bio);
bio_issue_init(&bio->bi_issue, bio_sectors(bio));
#endif
}
diff --git a/include/linux/bio.h b/include/linux/bio.h
index 62715a5a4f32..6ee2ea8b378a 100644
--- a/include/linux/bio.h
+++ b/include/linux/bio.h
@@ -491,12 +491,14 @@ do { \
bio_clear_flag(bio, BIO_THROTTLED);\
(bio)->bi_disk = (bdev)->bd_disk; \
(bio)->bi_partno = (bdev)->bd_partno; \
+ bio_associate_blkg(bio); \
} while (0)

#define bio_copy_dev(dst, src) \
do { \
(dst)->bi_disk = (src)->bi_disk; \
(dst)->bi_partno = (src)->bi_partno; \
+ bio_clone_blkcg_association(dst, src); \
} while (0)

#define bio_dev(bio) \
diff --git a/include/linux/blk-cgroup.h b/include/linux/blk-cgroup.h
index c08e96e521ed..3c87ae71156f 100644
--- a/include/linux/blk-cgroup.h
+++ b/include/linux/blk-cgroup.h
@@ -802,21 +802,17 @@ static inline bool blk_throtl_bio(struct request_queue *q, struct blkcg_gq *blkg
static inline bool blkcg_bio_issue_check(struct request_queue *q,
struct bio *bio)
{
- struct blkcg *blkcg;
struct blkcg_gq *blkg;
bool throtl = false;

- rcu_read_lock();
+ if (!bio->bi_blkg)
+ bio_associate_blkg(bio);

- /* associate blkcg if bio hasn't attached one */
- bio_associate_blkcg(bio, NULL);
- blkcg = bio_blkcg(bio);
- blkg = blkg_lookup_create(blkcg, q);
+ blkg = bio->bi_blkg;

throtl = blk_throtl_bio(q, blkg, bio);

if (!throtl) {
- blkg = blkg ?: q->root_blkg;
/*
* If the bio is flagged with BIO_QUEUE_ENTERED it means this
* is a split bio and we would have already accounted for the
@@ -828,7 +824,6 @@ static inline bool blkcg_bio_issue_check(struct request_queue *q,
blkg_rwstat_add(&blkg->stat_ios, bio->bi_opf, 1);
}

- rcu_read_unlock();
return !throtl;
}

--
2.17.1


2018-12-04 18:38:18

by Dennis Zhou

Subject: [PATCH 09/14] blkcg: associate writeback bios with a blkg

One of the goals of this series is to remove the bio's separate
reference to the css; the css can and should be accessed via
bio_blkcg(). In this patch, wbc_init_bio() now requires a bio to have a
device associated with it.
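
The required call ordering for filesystems is now (a sketch following
the fs/buffer.c hunk below):

	bio = bio_alloc(GFP_NOIO, 1);
	bio_set_dev(bio, bh->b_bdev);	/* associates the queue and blkg */
	bio_set_op_attrs(bio, op, op_flags);
	wbc_init_bio(wbc, bio);		/* needs the queue: after bio_set_dev() */
	wbc_account_io(wbc, bh->b_page, bh->b_size);
	submit_bio(bio);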

Signed-off-by: Dennis Zhou <[email protected]>
Reviewed-by: Josef Bacik <[email protected]>
Acked-by: Tejun Heo <[email protected]>
---
Documentation/admin-guide/cgroup-v2.rst | 8 +++++---
block/bio.c | 18 ++++++++++++++++++
fs/buffer.c | 10 +++++-----
fs/ext4/page-io.c | 2 +-
include/linux/bio.h | 5 +++++
include/linux/writeback.h | 5 +++--
6 files changed, 37 insertions(+), 11 deletions(-)

diff --git a/Documentation/admin-guide/cgroup-v2.rst b/Documentation/admin-guide/cgroup-v2.rst
index 476722b7b636..baf19bf28385 100644
--- a/Documentation/admin-guide/cgroup-v2.rst
+++ b/Documentation/admin-guide/cgroup-v2.rst
@@ -1879,8 +1879,10 @@ following two functions.

wbc_init_bio(@wbc, @bio)
Should be called for each bio carrying writeback data and
- associates the bio with the inode's owner cgroup. Can be
- called anytime between bio allocation and submission.
+ associates the bio with the inode's owner cgroup and the
+ corresponding request queue. This must be called after
+ a queue (device) has been associated with the bio and
+ before submission.

wbc_account_io(@wbc, @page, @bytes)
Should be called for each data segment being written out.
@@ -1899,7 +1901,7 @@ the configuration, the bio may be executed at a lower priority and if
the writeback session is holding shared resources, e.g. a journal
entry, may lead to priority inversion. There is no one easy solution
for the problem. Filesystems can try to work around specific problem
-cases by skipping wbc_init_bio() or using bio_associate_blkcg()
+cases by skipping wbc_init_bio() and using bio_associate_blkg()
directly.


diff --git a/block/bio.c b/block/bio.c
index f0f069c1823c..b42477b6a225 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -2034,6 +2034,24 @@ static void __bio_associate_blkg_from_css(struct bio *bio,
rcu_read_unlock();
}

+/**
+ * bio_associate_blkg_from_css - associate a bio with a specified css
+ * @bio: target bio
+ * @css: target css
+ *
+ * Associate @bio with the blkg found by combining the css's blkg and the
+ * request_queue of the @bio. This takes a reference on the css that will
+ * be put upon freeing of @bio.
+ */
+void bio_associate_blkg_from_css(struct bio *bio,
+ struct cgroup_subsys_state *css)
+{
+ css_get(css);
+ bio->bi_css = css;
+ __bio_associate_blkg_from_css(bio, css);
+}
+EXPORT_SYMBOL_GPL(bio_associate_blkg_from_css);
+
#ifdef CONFIG_MEMCG
/**
* bio_associate_blkg_from_page - associate a bio with the page's blkg
diff --git a/fs/buffer.c b/fs/buffer.c
index 1286c2b95498..d60d61e8ed7d 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -3060,11 +3060,6 @@ static int submit_bh_wbc(int op, int op_flags, struct buffer_head *bh,
*/
bio = bio_alloc(GFP_NOIO, 1);

- if (wbc) {
- wbc_init_bio(wbc, bio);
- wbc_account_io(wbc, bh->b_page, bh->b_size);
- }
-
bio->bi_iter.bi_sector = bh->b_blocknr * (bh->b_size >> 9);
bio_set_dev(bio, bh->b_bdev);
bio->bi_write_hint = write_hint;
@@ -3084,6 +3079,11 @@ static int submit_bh_wbc(int op, int op_flags, struct buffer_head *bh,
op_flags |= REQ_PRIO;
bio_set_op_attrs(bio, op, op_flags);

+ if (wbc) {
+ wbc_init_bio(wbc, bio);
+ wbc_account_io(wbc, bh->b_page, bh->b_size);
+ }
+
submit_bio(bio);
return 0;
}
diff --git a/fs/ext4/page-io.c b/fs/ext4/page-io.c
index db7590178dfc..2aa62d58d8dd 100644
--- a/fs/ext4/page-io.c
+++ b/fs/ext4/page-io.c
@@ -374,13 +374,13 @@ static int io_submit_init_bio(struct ext4_io_submit *io,
bio = bio_alloc(GFP_NOIO, BIO_MAX_PAGES);
if (!bio)
return -ENOMEM;
- wbc_init_bio(io->io_wbc, bio);
bio->bi_iter.bi_sector = bh->b_blocknr * (bh->b_size >> 9);
bio_set_dev(bio, bh->b_bdev);
bio->bi_end_io = ext4_end_bio;
bio->bi_private = ext4_get_io_end(io->io_end);
io->io_bio = bio;
io->io_next_block = bh->b_blocknr;
+ wbc_init_bio(io->io_wbc, bio);
return 0;
}

diff --git a/include/linux/bio.h b/include/linux/bio.h
index f13572c254a7..f0438061a5a3 100644
--- a/include/linux/bio.h
+++ b/include/linux/bio.h
@@ -515,6 +515,8 @@ static inline void bio_associate_blkg_from_page(struct bio *bio,
int bio_associate_blkcg(struct bio *bio, struct cgroup_subsys_state *blkcg_css);
void bio_disassociate_blkg(struct bio *bio);
void bio_associate_blkg(struct bio *bio);
+void bio_associate_blkg_from_css(struct bio *bio,
+ struct cgroup_subsys_state *css);
void bio_disassociate_task(struct bio *bio);
void bio_clone_blkcg_association(struct bio *dst, struct bio *src);
#else /* CONFIG_BLK_CGROUP */
@@ -522,6 +524,9 @@ static inline int bio_associate_blkcg(struct bio *bio,
struct cgroup_subsys_state *blkcg_css) { return 0; }
static inline void bio_disassociate_blkg(struct bio *bio) { }
static inline void bio_associate_blkg(struct bio *bio) { }
+static inline void bio_associate_blkg_from_css(struct bio *bio,
+ struct cgroup_subsys_state *css)
+{ }
static inline void bio_disassociate_task(struct bio *bio) { }
static inline void bio_clone_blkcg_association(struct bio *dst,
struct bio *src) { }
diff --git a/include/linux/writeback.h b/include/linux/writeback.h
index fdfd04e348f6..738a0c24874f 100644
--- a/include/linux/writeback.h
+++ b/include/linux/writeback.h
@@ -246,7 +246,8 @@ static inline void wbc_attach_fdatawrite_inode(struct writeback_control *wbc,
*
* @bio is a part of the writeback in progress controlled by @wbc. Perform
* writeback specific initialization. This is used to apply the cgroup
- * writeback context.
+ * writeback context. Must be called after the bio has been associated with
+ * a device.
*/
static inline void wbc_init_bio(struct writeback_control *wbc, struct bio *bio)
{
@@ -257,7 +258,7 @@ static inline void wbc_init_bio(struct writeback_control *wbc, struct bio *bio)
* regular writeback instead of writing things out itself.
*/
if (wbc->wb)
- bio_associate_blkcg(bio, wbc->wb->blkcg_css);
+ bio_associate_blkg_from_css(bio, wbc->wb->blkcg_css);
}

#else /* CONFIG_CGROUP_WRITEBACK */
--
2.17.1


2018-12-04 18:38:28

by Dennis Zhou

Subject: [PATCH 07/14] blkcg: consolidate bio_issue_init() to be a part of core

bio_issue_init(), among other things, initializes the timestamp for an
IO. Rather than have this logic handled by the policies, consolidate it
onto the bio init paths (normal, clone, bounce clone).
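
The consolidation point is a small wrapper (from the blk-cgroup.h hunk
below) that each init path calls exactly once:

	static inline void blkcg_bio_issue_init(struct bio *bio)
	{
		bio_issue_init(&bio->bi_issue, bio_sectors(bio));
	}

It is invoked from __bio_clone_fast(), bounce_clone_bio(), and
blkcg_bio_issue_check(), so the policies no longer need to.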

Signed-off-by: Dennis Zhou <[email protected]>
Acked-by: Tejun Heo <[email protected]>
Reviewed-by: Liu Bo <[email protected]>
---
block/bio.c | 1 +
block/blk-iolatency.c | 2 --
block/blk-throttle.c | 8 --------
block/bounce.c | 1 +
include/linux/blk-cgroup.h | 9 +++++++++
5 files changed, 11 insertions(+), 10 deletions(-)

diff --git a/block/bio.c b/block/bio.c
index 1e852ab904aa..90089124b512 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -611,6 +611,7 @@ void __bio_clone_fast(struct bio *bio, struct bio *bio_src)
bio->bi_io_vec = bio_src->bi_io_vec;

bio_clone_blkcg_association(bio, bio_src);
+ blkcg_bio_issue_init(bio);
}
EXPORT_SYMBOL(__bio_clone_fast);

diff --git a/block/blk-iolatency.c b/block/blk-iolatency.c
index e6b47c255521..5a79f06a730d 100644
--- a/block/blk-iolatency.c
+++ b/block/blk-iolatency.c
@@ -478,8 +478,6 @@ static void blkcg_iolatency_throttle(struct rq_qos *rqos, struct bio *bio)
if (!blk_iolatency_enabled(blkiolat))
return;

- bio_issue_init(&bio->bi_issue, bio_sectors(bio));
-
while (blkg && blkg->parent) {
struct iolatency_grp *iolat = blkg_to_lat(blkg);
if (!iolat) {
diff --git a/block/blk-throttle.c b/block/blk-throttle.c
index 1c6529df2002..1b97a73d2fb1 100644
--- a/block/blk-throttle.c
+++ b/block/blk-throttle.c
@@ -2115,13 +2115,6 @@ static inline void throtl_update_latency_buckets(struct throtl_data *td)
}
#endif

-static void blk_throtl_assoc_bio(struct bio *bio)
-{
-#ifdef CONFIG_BLK_DEV_THROTTLING_LOW
- bio_issue_init(&bio->bi_issue, bio_sectors(bio));
-#endif
-}
-
bool blk_throtl_bio(struct request_queue *q, struct blkcg_gq *blkg,
struct bio *bio)
{
@@ -2142,7 +2135,6 @@ bool blk_throtl_bio(struct request_queue *q, struct blkcg_gq *blkg,

throtl_update_latency_buckets(td);

- blk_throtl_assoc_bio(bio);
blk_throtl_update_idletime(tg);

sq = &tg->service_queue;
diff --git a/block/bounce.c b/block/bounce.c
index 559c55bda040..cfb96d5170d0 100644
--- a/block/bounce.c
+++ b/block/bounce.c
@@ -278,6 +278,7 @@ static struct bio *bounce_clone_bio(struct bio *bio_src, gfp_t gfp_mask,
}

bio_clone_blkcg_association(bio, bio_src);
+ blkcg_bio_issue_init(bio);

return bio;
}
diff --git a/include/linux/blk-cgroup.h b/include/linux/blk-cgroup.h
index 3c87ae71156f..5cca4ffcdae5 100644
--- a/include/linux/blk-cgroup.h
+++ b/include/linux/blk-cgroup.h
@@ -799,6 +799,12 @@ static inline bool blk_throtl_bio(struct request_queue *q, struct blkcg_gq *blkg
struct bio *bio) { return false; }
#endif

+
+static inline void blkcg_bio_issue_init(struct bio *bio)
+{
+ bio_issue_init(&bio->bi_issue, bio_sectors(bio));
+}
+
static inline bool blkcg_bio_issue_check(struct request_queue *q,
struct bio *bio)
{
@@ -824,6 +830,8 @@ static inline bool blkcg_bio_issue_check(struct request_queue *q,
blkg_rwstat_add(&blkg->stat_ios, bio->bi_opf, 1);
}

+ blkcg_bio_issue_init(bio);
+
return !throtl;
}

@@ -929,6 +937,7 @@ static inline char *blkg_path(struct blkcg_gq *blkg) { return NULL; }
static inline void blkg_get(struct blkcg_gq *blkg) { }
static inline void blkg_put(struct blkcg_gq *blkg) { }

+static inline void blkcg_bio_issue_init(struct bio *bio) { }
static inline bool blkcg_bio_issue_check(struct request_queue *q,
struct bio *bio) { return true; }

--
2.17.1


2018-12-04 18:38:44

by Dennis Zhou

Subject: [PATCH 04/14] blkcg: introduce common blkg association logic

There are 3 ways blkg association can happen: association with the
current css, with the page css (swap), or from the wbc css (writeback).

This patch handles how association is done for the first case, where we
are associating based on the current css. If there is already a blkg
associated, the css will be reused and association will be redone as the
request_queue may have changed.
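
The three cases end up behind three entry points (the latter two arrive
in patches 0008 and 0009; shown here only to sketch the shape of the
API):

	bio_associate_blkg(bio);			/* current css (this patch) */
	bio_associate_blkg_from_page(bio, page);	/* swap (patch 0008) */
	bio_associate_blkg_from_css(bio, css);		/* writeback (patch 0009) */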

Signed-off-by: Dennis Zhou <[email protected]>
Reviewed-by: Josef Bacik <[email protected]>
Acked-by: Tejun Heo <[email protected]>
---
block/bio.c | 62 ++++++++++++++++++++++++++++++++++++-------
block/blk-iolatency.c | 10 ++-----
block/blk-throttle.c | 6 ++---
include/linux/bio.h | 5 +++-
4 files changed, 62 insertions(+), 21 deletions(-)

diff --git a/block/bio.c b/block/bio.c
index 5c9828524adc..452b8e79b998 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -2009,7 +2009,21 @@ int bio_associate_blkcg(struct bio *bio, struct cgroup_subsys_state *blkcg_css)
EXPORT_SYMBOL_GPL(bio_associate_blkcg);

/**
- * bio_associate_blkg - associate a bio with a blkg
+ * bio_disassociate_blkg - puts back the blkg reference if associated
+ * @bio: target bio
+ *
+ * Helper to disassociate the blkg from @bio if a blkg is associated.
+ */
+void bio_disassociate_blkg(struct bio *bio)
+{
+ if (bio->bi_blkg) {
+ blkg_put(bio->bi_blkg);
+ bio->bi_blkg = NULL;
+ }
+}
+
+/**
+ * __bio_associate_blkg - associate a bio with a blkg
* @bio: target bio
* @blkg: the blkg to associate
*
@@ -2022,12 +2036,42 @@ EXPORT_SYMBOL_GPL(bio_associate_blkcg);
* A reference will be taken on the @blkg and will be released when @bio is
* freed.
*/
-int bio_associate_blkg(struct bio *bio, struct blkcg_gq *blkg)
+static void __bio_associate_blkg(struct bio *bio, struct blkcg_gq *blkg)
{
- if (unlikely(bio->bi_blkg))
- return -EBUSY;
+ bio_disassociate_blkg(bio);
+
bio->bi_blkg = blkg_try_get_closest(blkg);
- return 0;
+}
+
+/**
+ * bio_associate_blkg - associate a bio with a blkg
+ * @bio: target bio
+ *
+ * Associate @bio with the blkg found from the bio's css and request_queue.
+ * If one is not found, blkg_lookup_create() creates the blkg. If a blkg is
+ * already associated, the css is reused and association redone as the
+ * request_queue may have changed.
+ */
+void bio_associate_blkg(struct bio *bio)
+{
+ struct request_queue *q = bio->bi_disk->queue;
+ struct blkcg *blkcg;
+ struct blkcg_gq *blkg;
+
+ rcu_read_lock();
+
+ bio_associate_blkcg(bio, NULL);
+ blkcg = bio_blkcg(bio);
+
+ if (!blkcg->css.parent) {
+ __bio_associate_blkg(bio, q->root_blkg);
+ } else {
+ blkg = blkg_lookup_create(blkcg, q);
+
+ __bio_associate_blkg(bio, blkg);
+ }
+
+ rcu_read_unlock();
}

/**
@@ -2040,10 +2084,7 @@ void bio_disassociate_task(struct bio *bio)
css_put(bio->bi_css);
bio->bi_css = NULL;
}
- if (bio->bi_blkg) {
- blkg_put(bio->bi_blkg);
- bio->bi_blkg = NULL;
- }
+ bio_disassociate_blkg(bio);
}

/**
@@ -2055,6 +2096,9 @@ void bio_clone_blkcg_association(struct bio *dst, struct bio *src)
{
if (src->bi_css)
WARN_ON(bio_associate_blkcg(dst, src->bi_css));
+
+ if (src->bi_blkg)
+ __bio_associate_blkg(dst, src->bi_blkg);
}
EXPORT_SYMBOL_GPL(bio_clone_blkcg_association);
#endif /* CONFIG_BLK_CGROUP */
diff --git a/block/blk-iolatency.c b/block/blk-iolatency.c
index 46e86c34cf79..cdbd10564e66 100644
--- a/block/blk-iolatency.c
+++ b/block/blk-iolatency.c
@@ -472,21 +472,15 @@ static void check_scale_change(struct iolatency_grp *iolat)
static void blkcg_iolatency_throttle(struct rq_qos *rqos, struct bio *bio)
{
struct blk_iolatency *blkiolat = BLKIOLATENCY(rqos);
- struct blkcg *blkcg;
struct blkcg_gq *blkg;
- struct request_queue *q = rqos->q;
bool issue_as_root = bio_issue_as_root_blkg(bio);

if (!blk_iolatency_enabled(blkiolat))
return;

- rcu_read_lock();
- bio_associate_blkcg(bio, NULL);
- blkcg = bio_blkcg(bio);
- blkg = blkg_lookup_create(blkcg, q);
+ bio_associate_blkg(bio);
+ blkg = bio->bi_blkg;
bio_issue_init(&bio->bi_issue, bio_sectors(bio));
- bio_associate_blkg(bio, blkg);
- rcu_read_unlock();

while (blkg && blkg->parent) {
struct iolatency_grp *iolat = blkg_to_lat(blkg);
diff --git a/block/blk-throttle.c b/block/blk-throttle.c
index d648d6720f46..228c3a007ebc 100644
--- a/block/blk-throttle.c
+++ b/block/blk-throttle.c
@@ -2115,10 +2115,10 @@ static inline void throtl_update_latency_buckets(struct throtl_data *td)
}
#endif

-static void blk_throtl_assoc_bio(struct throtl_grp *tg, struct bio *bio)
+static void blk_throtl_assoc_bio(struct bio *bio)
{
#ifdef CONFIG_BLK_DEV_THROTTLING_LOW
- bio_associate_blkg(bio, tg_to_blkg(tg));
+ bio_associate_blkg(bio);
bio_issue_init(&bio->bi_issue, bio_sectors(bio));
#endif
}
@@ -2143,7 +2143,7 @@ bool blk_throtl_bio(struct request_queue *q, struct blkcg_gq *blkg,

throtl_update_latency_buckets(td);

- blk_throtl_assoc_bio(tg, bio);
+ blk_throtl_assoc_bio(bio);
blk_throtl_update_idletime(tg);

sq = &tg->service_queue;
diff --git a/include/linux/bio.h b/include/linux/bio.h
index 056fb627edb3..62715a5a4f32 100644
--- a/include/linux/bio.h
+++ b/include/linux/bio.h
@@ -511,12 +511,15 @@ static inline int bio_associate_blkcg_from_page(struct bio *bio,

#ifdef CONFIG_BLK_CGROUP
int bio_associate_blkcg(struct bio *bio, struct cgroup_subsys_state *blkcg_css);
-int bio_associate_blkg(struct bio *bio, struct blkcg_gq *blkg);
+void bio_disassociate_blkg(struct bio *bio);
+void bio_associate_blkg(struct bio *bio);
void bio_disassociate_task(struct bio *bio);
void bio_clone_blkcg_association(struct bio *dst, struct bio *src);
#else /* CONFIG_BLK_CGROUP */
static inline int bio_associate_blkcg(struct bio *bio,
struct cgroup_subsys_state *blkcg_css) { return 0; }
+static inline void bio_disassociate_blkg(struct bio *bio) { }
+static inline void bio_associate_blkg(struct bio *bio) { }
static inline void bio_disassociate_task(struct bio *bio) { }
static inline void bio_clone_blkcg_association(struct bio *dst,
struct bio *src) { }
--
2.17.1


2018-12-04 18:38:52

by Dennis Zhou

[permalink] [raw]
Subject: [PATCH 01/14] blkcg: fix ref count issue with bio_blkcg() using task_css

The bio_blkcg() function turns out to be inconsistent and consequently
dangerous to use. In the first case, it returns a blkcg to which the
bio already owns a reference, so rcu protection is not needed. However,
the third case, the fallback on the last line, is problematic:

return css_to_blkcg(task_css(current, io_cgrp_id));

This can race against task migration and the cgroup dying. It is also
semantically different as it must be called under rcu protection and is
susceptible to failure when trying to take a reference on the result.

This patch adds association ahead of calling bio_blkcg() rather than
after. This makes association a required and explicit step along the
code paths for calling bio_blkcg(). In blk-iolatency, association is
moved above the bio_blkcg() call to ensure it will not return %NULL.

BFQ uses the old bio_blkcg() function, but I do not want to address it
in this series due to the complexity. I have created a private version
documenting the inconsistency and noting not to use it.
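
With explicit association, the expected call-site pattern becomes,
roughly (a sketch mirroring the blk-iolatency hunk below):

	rcu_read_lock();
	bio_associate_blkcg(bio, NULL);	/* takes a ref on the current css */
	blkcg = bio_blkcg(bio);		/* cannot be %NULL after association */
	rcu_read_unlock();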

Signed-off-by: Dennis Zhou <[email protected]>
Acked-by: Tejun Heo <[email protected]>
Reviewed-by: Josef Bacik <[email protected]>
---
block/bfq-cgroup.c | 4 +-
block/bfq-iosched.c | 2 +-
block/bio.c | 10 +++-
block/blk-iolatency.c | 2 +-
include/linux/blk-cgroup.h | 98 ++++++++++++++++++++++++++++++++++----
5 files changed, 102 insertions(+), 14 deletions(-)

diff --git a/block/bfq-cgroup.c b/block/bfq-cgroup.c
index a7a1712632b0..c6113af31960 100644
--- a/block/bfq-cgroup.c
+++ b/block/bfq-cgroup.c
@@ -642,7 +642,7 @@ void bfq_bic_update_cgroup(struct bfq_io_cq *bic, struct bio *bio)
uint64_t serial_nr;

rcu_read_lock();
- serial_nr = bio_blkcg(bio)->css.serial_nr;
+ serial_nr = __bio_blkcg(bio)->css.serial_nr;

/*
* Check whether blkcg has changed. The condition may trigger
@@ -651,7 +651,7 @@ void bfq_bic_update_cgroup(struct bfq_io_cq *bic, struct bio *bio)
if (unlikely(!bfqd) || likely(bic->blkcg_serial_nr == serial_nr))
goto out;

- bfqg = __bfq_bic_change_cgroup(bfqd, bic, bio_blkcg(bio));
+ bfqg = __bfq_bic_change_cgroup(bfqd, bic, __bio_blkcg(bio));
/*
* Update blkg_path for bfq_log_* functions. We cache this
* path, and update it here, for the following
diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
index 67b22c924aee..3d1f319fe977 100644
--- a/block/bfq-iosched.c
+++ b/block/bfq-iosched.c
@@ -4384,7 +4384,7 @@ static struct bfq_queue *bfq_get_queue(struct bfq_data *bfqd,

rcu_read_lock();

- bfqg = bfq_find_set_group(bfqd, bio_blkcg(bio));
+ bfqg = bfq_find_set_group(bfqd, __bio_blkcg(bio));
if (!bfqg) {
bfqq = &bfqd->oom_bfqq;
goto out;
diff --git a/block/bio.c b/block/bio.c
index 03895cc0d74a..346a7f5cb2dd 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -1990,13 +1990,19 @@ int bio_associate_blkcg_from_page(struct bio *bio, struct page *page)
*
* This function takes an extra reference of @blkcg_css which will be put
* when @bio is released. The caller must own @bio and is responsible for
- * synchronizing calls to this function.
+ * synchronizing calls to this function. If @blkcg_css is %NULL, a call to
+ * blkcg_get_css() finds the current css from the kthread or task.
*/
int bio_associate_blkcg(struct bio *bio, struct cgroup_subsys_state *blkcg_css)
{
if (unlikely(bio->bi_css))
return -EBUSY;
- css_get(blkcg_css);
+
+ if (blkcg_css)
+ css_get(blkcg_css);
+ else
+ blkcg_css = blkcg_get_css();
+
bio->bi_css = blkcg_css;
return 0;
}
diff --git a/block/blk-iolatency.c b/block/blk-iolatency.c
index 5f7f1773be61..fe0c4ca312ff 100644
--- a/block/blk-iolatency.c
+++ b/block/blk-iolatency.c
@@ -481,8 +481,8 @@ static void blkcg_iolatency_throttle(struct rq_qos *rqos, struct bio *bio)
return;

rcu_read_lock();
+ bio_associate_blkcg(bio, NULL);
blkcg = bio_blkcg(bio);
- bio_associate_blkcg(bio, &blkcg->css);
blkg = blkg_lookup(blkcg, q);
if (unlikely(!blkg)) {
spin_lock_irq(&q->queue_lock);
diff --git a/include/linux/blk-cgroup.h b/include/linux/blk-cgroup.h
index a9e2e2037129..f619307171a6 100644
--- a/include/linux/blk-cgroup.h
+++ b/include/linux/blk-cgroup.h
@@ -227,22 +227,103 @@ int blkg_conf_prep(struct blkcg *blkcg, const struct blkcg_policy *pol,
char *input, struct blkg_conf_ctx *ctx);
void blkg_conf_finish(struct blkg_conf_ctx *ctx);

+/**
+ * blkcg_css - find the current css
+ *
+ * Find the css associated with either the kthread or the current task.
+ * This may return a dying css, so it is up to the caller to use tryget logic
+ * to confirm it is alive and well.
+ */
+static inline struct cgroup_subsys_state *blkcg_css(void)
+{
+ struct cgroup_subsys_state *css;
+
+ css = kthread_blkcg();
+ if (css)
+ return css;
+ return task_css(current, io_cgrp_id);
+}
+
+/**
+ * blkcg_get_css - find and get a reference to the css
+ *
+ * Find the css associated with either the kthread or the current task.
+ * This takes a reference on the blkcg which will need to be managed by the
+ * caller.
+ */
+static inline struct cgroup_subsys_state *blkcg_get_css(void)
+{
+ struct cgroup_subsys_state *css;
+
+ rcu_read_lock();
+
+ css = kthread_blkcg();
+ if (css) {
+ css_get(css);
+ } else {
+ /*
+ * This is a bit complicated. It is possible task_css() is
+ * seeing an old css pointer here. This is caused by the
+ * current thread migrating away from this cgroup and this
+ * cgroup dying. css_tryget() will fail when trying to take a
+ * ref on a cgroup whose ref count has hit 0.
+ *
+ * Therefore, if it does fail, this means current must have
+ * already been migrated away and the update has not yet
+ * propagated to this cpu. Hence the use of cpu_relax().
+ */
+ while (true) {
+ css = task_css(current, io_cgrp_id);
+ if (likely(css_tryget(css)))
+ break;
+ cpu_relax();
+ }
+ }
+
+ rcu_read_unlock();
+
+ return css;
+}

static inline struct blkcg *css_to_blkcg(struct cgroup_subsys_state *css)
{
return css ? container_of(css, struct blkcg, css) : NULL;
}

-static inline struct blkcg *bio_blkcg(struct bio *bio)
+/**
+ * __bio_blkcg - internal, inconsistent version to get blkcg
+ *
+ * DO NOT USE.
+ * This function is inconsistent and consequently is dangerous to use. The
+ * first part of the function returns a blkcg where a reference is owned by the
+ * bio. This means it does not need to be rcu protected as it cannot go away
+ * with the bio owning a reference to it. However, the latter potentially gets
+ * it from task_css(). This can race against task migration and the cgroup
+ * dying. It is also semantically different as it must be called under rcu
+ * protection and is susceptible to failure when taking a reference.
+ * Therefore, it is not ok to assume that *_get() will always succeed on the
+ * blkcg returned here.
+ */
+static inline struct blkcg *__bio_blkcg(struct bio *bio)
{
- struct cgroup_subsys_state *css;
+ if (bio && bio->bi_css)
+ return css_to_blkcg(bio->bi_css);
+ return css_to_blkcg(blkcg_css());
+}

+/**
+ * bio_blkcg - grab the blkcg associated with a bio
+ * @bio: target bio
+ *
+ * This returns the blkcg associated with a bio, %NULL if not associated.
+ * Callers are expected to either handle %NULL or know association has been
+ * done prior to calling this.
+ */
+static inline struct blkcg *bio_blkcg(struct bio *bio)
+{
if (bio && bio->bi_css)
return css_to_blkcg(bio->bi_css);
- css = kthread_blkcg();
- if (css)
- return css_to_blkcg(css);
- return css_to_blkcg(task_css(current, io_cgrp_id));
+ return NULL;
}

static inline bool blk_cgroup_congested(void)
@@ -710,10 +791,10 @@ static inline bool blkcg_bio_issue_check(struct request_queue *q,
bool throtl = false;

rcu_read_lock();
- blkcg = bio_blkcg(bio);

/* associate blkcg if bio hasn't attached one */
- bio_associate_blkcg(bio, &blkcg->css);
+ bio_associate_blkcg(bio, NULL);
+ blkcg = bio_blkcg(bio);

blkg = blkg_lookup(blkcg, q);
if (unlikely(!blkg)) {
@@ -835,6 +916,7 @@ static inline int blkcg_activate_policy(struct request_queue *q,
static inline void blkcg_deactivate_policy(struct request_queue *q,
const struct blkcg_policy *pol) { }

+static inline struct blkcg *__bio_blkcg(struct bio *bio) { return NULL; }
static inline struct blkcg *bio_blkcg(struct bio *bio) { return NULL; }

static inline struct blkg_policy_data *blkg_to_pd(struct blkcg_gq *blkg,
--
2.17.1


2018-12-04 18:38:57

by Dennis Zhou

[permalink] [raw]
Subject: [PATCH 10/14] blkcg: remove bio->bi_css and instead use bio->bi_blkg

Prior patches ensured that any bio that interacts with a request_queue
is properly associated with a blkg. This makes bio->bi_css unnecessary
as blkg maintains a reference to blkcg already.

This removes the bio field bi_css and converts its remaining uses to go
through bi_blkg instead.
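
With bi_css gone, a sketch of how the css is now reached (the blkg
maintains a reference to the blkcg already):

	struct blkcg *blkcg = bio->bi_blkg->blkcg;
	struct cgroup_subsys_state *css = &blkcg->css;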

Signed-off-by: Dennis Zhou <[email protected]>
Reviewed-by: Josef Bacik <[email protected]>
Acked-by: Tejun Heo <[email protected]>
---
block/bio.c | 59 +++++++++-----------------------------
block/bounce.c | 2 +-
drivers/block/loop.c | 5 ++--
drivers/md/raid0.c | 2 +-
include/linux/bio.h | 11 +++----
include/linux/blk-cgroup.h | 8 +++---
include/linux/blk_types.h | 7 +++--
kernel/trace/blktrace.c | 4 +--
8 files changed, 32 insertions(+), 66 deletions(-)

diff --git a/block/bio.c b/block/bio.c
index b42477b6a225..2b6bc7b805ec 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -610,7 +610,7 @@ void __bio_clone_fast(struct bio *bio, struct bio *bio_src)
bio->bi_iter = bio_src->bi_iter;
bio->bi_io_vec = bio_src->bi_io_vec;

- bio_clone_blkcg_association(bio, bio_src);
+ bio_clone_blkg_association(bio, bio_src);
blkcg_bio_issue_init(bio);
}
EXPORT_SYMBOL(__bio_clone_fast);
@@ -1957,34 +1957,6 @@ EXPORT_SYMBOL(bioset_init_from_src);

#ifdef CONFIG_BLK_CGROUP

-/**
- * bio_associate_blkcg - associate a bio with the specified blkcg
- * @bio: target bio
- * @blkcg_css: css of the blkcg to associate
- *
- * Associate @bio with the blkcg specified by @blkcg_css. Block layer will
- * treat @bio as if it were issued by a task which belongs to the blkcg.
- *
- * This function takes an extra reference of @blkcg_css which will be put
- * when @bio is released. The caller must own @bio and is responsible for
- * synchronizing calls to this function. If @blkcg_css is %NULL, a call to
- * blkcg_get_css() finds the current css from the kthread or task.
- */
-int bio_associate_blkcg(struct bio *bio, struct cgroup_subsys_state *blkcg_css)
-{
- if (unlikely(bio->bi_css))
- return -EBUSY;
-
- if (blkcg_css)
- css_get(blkcg_css);
- else
- blkcg_css = blkcg_get_css();
-
- bio->bi_css = blkcg_css;
- return 0;
-}
-EXPORT_SYMBOL_GPL(bio_associate_blkcg);
-
/**
* bio_disassociate_blkg - puts back the blkg reference if associated
* @bio: target bio
@@ -1994,6 +1966,8 @@ EXPORT_SYMBOL_GPL(bio_associate_blkcg);
void bio_disassociate_blkg(struct bio *bio)
{
if (bio->bi_blkg) {
+ /* a ref is always taken on css */
+ css_put(&bio_blkcg(bio)->css);
blkg_put(bio->bi_blkg);
bio->bi_blkg = NULL;
}
@@ -2047,7 +2021,6 @@ void bio_associate_blkg_from_css(struct bio *bio,
struct cgroup_subsys_state *css)
{
css_get(css);
- bio->bi_css = css;
__bio_associate_blkg_from_css(bio, css);
}
EXPORT_SYMBOL_GPL(bio_associate_blkg_from_css);
@@ -2066,13 +2039,10 @@ void bio_associate_blkg_from_page(struct bio *bio, struct page *page)
{
struct cgroup_subsys_state *css;

- if (unlikely(bio->bi_css))
- return;
if (!page->mem_cgroup)
return;

css = cgroup_get_e_css(page->mem_cgroup->css.cgroup, &io_cgrp_subsys);
- bio->bi_css = css;
__bio_associate_blkg_from_css(bio, css);
}
#endif /* CONFIG_MEMCG */
@@ -2094,8 +2064,10 @@ void bio_associate_blkg(struct bio *bio)

rcu_read_lock();

- bio_associate_blkcg(bio, NULL);
- blkcg = bio_blkcg(bio);
+ if (bio->bi_blkg)
+ blkcg = bio->bi_blkg->blkcg;
+ else
+ blkcg = css_to_blkcg(blkcg_get_css());

if (!blkcg->css.parent) {
__bio_associate_blkg(bio, q->root_blkg);
@@ -2115,27 +2087,22 @@ EXPORT_SYMBOL_GPL(bio_associate_blkg);
*/
void bio_disassociate_task(struct bio *bio)
{
- if (bio->bi_css) {
- css_put(bio->bi_css);
- bio->bi_css = NULL;
- }
bio_disassociate_blkg(bio);
}

/**
- * bio_clone_blkcg_association - clone blkcg association from src to dst bio
+ * bio_clone_blkg_association - clone blkg association from src to dst bio
* @dst: destination bio
* @src: source bio
*/
-void bio_clone_blkcg_association(struct bio *dst, struct bio *src)
+void bio_clone_blkg_association(struct bio *dst, struct bio *src)
{
- if (src->bi_css)
- WARN_ON(bio_associate_blkcg(dst, src->bi_css));
-
- if (src->bi_blkg)
+ if (src->bi_blkg) {
+ css_get(&bio_blkcg(src)->css);
__bio_associate_blkg(dst, src->bi_blkg);
+ }
}
-EXPORT_SYMBOL_GPL(bio_clone_blkcg_association);
+EXPORT_SYMBOL_GPL(bio_clone_blkg_association);
#endif /* CONFIG_BLK_CGROUP */

static void __init biovec_init_slabs(void)
diff --git a/block/bounce.c b/block/bounce.c
index cfb96d5170d0..ffb9e9ecfa7e 100644
--- a/block/bounce.c
+++ b/block/bounce.c
@@ -277,7 +277,7 @@ static struct bio *bounce_clone_bio(struct bio *bio_src, gfp_t gfp_mask,
}
}

- bio_clone_blkcg_association(bio, bio_src);
+ bio_clone_blkg_association(bio, bio_src);
blkcg_bio_issue_init(bio);

return bio;
diff --git a/drivers/block/loop.c b/drivers/block/loop.c
index 176ab1f28eca..0770004616de 100644
--- a/drivers/block/loop.c
+++ b/drivers/block/loop.c
@@ -77,6 +77,7 @@
#include <linux/falloc.h>
#include <linux/uio.h>
#include <linux/ioprio.h>
+#include <linux/blk-cgroup.h>

#include "loop.h"

@@ -1820,8 +1821,8 @@ static blk_status_t loop_queue_rq(struct blk_mq_hw_ctx *hctx,

/* always use the first bio's css */
#ifdef CONFIG_BLK_CGROUP
- if (cmd->use_aio && rq->bio && rq->bio->bi_css) {
- cmd->css = rq->bio->bi_css;
+ if (cmd->use_aio && rq->bio && rq->bio->bi_blkg) {
+ cmd->css = &bio_blkcg(rq->bio)->css;
css_get(cmd->css);
} else
#endif
diff --git a/drivers/md/raid0.c b/drivers/md/raid0.c
index ac1cffd2a09b..f3fb5bb8c82a 100644
--- a/drivers/md/raid0.c
+++ b/drivers/md/raid0.c
@@ -542,7 +542,7 @@ static void raid0_handle_discard(struct mddev *mddev, struct bio *bio)
!discard_bio)
continue;
bio_chain(discard_bio, bio);
- bio_clone_blkcg_association(discard_bio, bio);
+ bio_clone_blkg_association(discard_bio, bio);
if (mddev->gendisk)
trace_block_bio_remap(bdev_get_queue(rdev->bdev),
discard_bio, disk_devt(mddev->gendisk),
diff --git a/include/linux/bio.h b/include/linux/bio.h
index f0438061a5a3..84e1c4dc703a 100644
--- a/include/linux/bio.h
+++ b/include/linux/bio.h
@@ -498,7 +498,7 @@ do { \
do { \
(dst)->bi_disk = (src)->bi_disk; \
(dst)->bi_partno = (src)->bi_partno; \
- bio_clone_blkcg_association(dst, src); \
+ bio_clone_blkg_association(dst, src); \
} while (0)

#define bio_dev(bio) \
@@ -512,24 +512,21 @@ static inline void bio_associate_blkg_from_page(struct bio *bio,
#endif

#ifdef CONFIG_BLK_CGROUP
-int bio_associate_blkcg(struct bio *bio, struct cgroup_subsys_state *blkcg_css);
void bio_disassociate_blkg(struct bio *bio);
void bio_associate_blkg(struct bio *bio);
void bio_associate_blkg_from_css(struct bio *bio,
struct cgroup_subsys_state *css);
void bio_disassociate_task(struct bio *bio);
-void bio_clone_blkcg_association(struct bio *dst, struct bio *src);
+void bio_clone_blkg_association(struct bio *dst, struct bio *src);
#else /* CONFIG_BLK_CGROUP */
-static inline int bio_associate_blkcg(struct bio *bio,
- struct cgroup_subsys_state *blkcg_css) { return 0; }
static inline void bio_disassociate_blkg(struct bio *bio) { }
static inline void bio_associate_blkg(struct bio *bio) { }
static inline void bio_associate_blkg_from_css(struct bio *bio,
struct cgroup_subsys_state *css)
{ }
static inline void bio_disassociate_task(struct bio *bio) { }
-static inline void bio_clone_blkcg_association(struct bio *dst,
- struct bio *src) { }
+static inline void bio_clone_blkg_association(struct bio *dst,
+ struct bio *src) { }
#endif /* CONFIG_BLK_CGROUP */

#ifdef CONFIG_HIGHMEM
diff --git a/include/linux/blk-cgroup.h b/include/linux/blk-cgroup.h
index 5cca4ffcdae5..3674ff8d8e50 100644
--- a/include/linux/blk-cgroup.h
+++ b/include/linux/blk-cgroup.h
@@ -308,8 +308,8 @@ static inline struct blkcg *css_to_blkcg(struct cgroup_subsys_state *css)
*/
static inline struct blkcg *__bio_blkcg(struct bio *bio)
{
- if (bio && bio->bi_css)
- return css_to_blkcg(bio->bi_css);
+ if (bio && bio->bi_blkg)
+ return bio->bi_blkg->blkcg;
return css_to_blkcg(blkcg_css());
}

@@ -323,8 +323,8 @@ static inline struct blkcg *__bio_blkcg(struct bio *bio)
*/
static inline struct blkcg *bio_blkcg(struct bio *bio)
{
- if (bio && bio->bi_css)
- return css_to_blkcg(bio->bi_css);
+ if (bio && bio->bi_blkg)
+ return bio->bi_blkg->blkcg;
return NULL;
}

diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
index c0ba1a038ff3..46c005d601ac 100644
--- a/include/linux/blk_types.h
+++ b/include/linux/blk_types.h
@@ -174,10 +174,11 @@ struct bio {
void *bi_private;
#ifdef CONFIG_BLK_CGROUP
/*
- * Optional css associated with this bio. Put on bio
- * release. Read comment on top of bio_associate_current().
+ * Represents the association of the css and request_queue for the bio.
+ * If a bio goes directly to the device, it will not have a blkg as it will
+ * not have a request_queue associated with it. The reference is put
+ * on release of the bio.
*/
- struct cgroup_subsys_state *bi_css;
struct blkcg_gq *bi_blkg;
struct bio_issue bi_issue;
#endif
diff --git a/kernel/trace/blktrace.c b/kernel/trace/blktrace.c
index 2868d85f1fb1..fac0ddf8a8e2 100644
--- a/kernel/trace/blktrace.c
+++ b/kernel/trace/blktrace.c
@@ -764,9 +764,9 @@ blk_trace_bio_get_cgid(struct request_queue *q, struct bio *bio)
if (!bt || !(blk_tracer_flags.val & TRACE_BLK_OPT_CGROUP))
return NULL;

- if (!bio->bi_css)
+ if (!bio->bi_blkg)
return NULL;
- return cgroup_get_kernfs_id(bio->bi_css->cgroup);
+ return cgroup_get_kernfs_id(bio_blkcg(bio)->css.cgroup);
}
#else
static union kernfs_node_id *
--
2.17.1


2018-12-04 18:40:12

by Dennis Zhou

[permalink] [raw]
Subject: [PATCH 02/14] blkcg: update blkg_lookup_create() to do locking

To know when to create a blkg, the general pattern is to do a
blkg_lookup(); if that fails, take the queue lock, repeat the lookup,
and only if it still fails, create the blkg. It doesn't make sense for
every call site that wants creation to open-code this.

This changes blkg_lookup_create() to do locking and implement this
pattern. The old blkg_lookup_create() is renamed to
__blkg_lookup_create(). If a call site wants to do its own error
handling or already owns the queue lock, it can use
__blkg_lookup_create(). This will be used in upcoming patches.
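
For reference, this wraps the pattern call sites previously had to
open-code (a sketch following the blk-iolatency call site):

	blkg = blkg_lookup(blkcg, q);
	if (unlikely(!blkg)) {
		spin_lock_irq(&q->queue_lock);
		blkg = __blkg_lookup_create(blkcg, q);
		if (IS_ERR(blkg))
			blkg = NULL;
		spin_unlock_irq(&q->queue_lock);
	}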

Signed-off-by: Dennis Zhou <[email protected]>
Reviewed-by: Josef Bacik <[email protected]>
Acked-by: Tejun Heo <[email protected]>
Reviewed-by: Liu Bo <[email protected]>
---
block/blk-cgroup.c | 28 +++++++++++++++++++++++++---
block/blk-iolatency.c | 2 +-
include/linux/blk-cgroup.h | 4 +++-
3 files changed, 29 insertions(+), 5 deletions(-)

diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
index 63d226a084cd..b421a9457e05 100644
--- a/block/blk-cgroup.c
+++ b/block/blk-cgroup.c
@@ -249,7 +249,7 @@ static struct blkcg_gq *blkg_create(struct blkcg *blkcg,
}

/**
- * blkg_lookup_create - lookup blkg, try to create one if not there
+ * __blkg_lookup_create - lookup blkg, try to create one if not there
* @blkcg: blkcg of interest
* @q: request_queue of interest
*
@@ -262,8 +262,8 @@ static struct blkcg_gq *blkg_create(struct blkcg *blkcg,
* value on error. If @q is dead, returns ERR_PTR(-EINVAL). If @q is not
* dead and bypassing, returns ERR_PTR(-EBUSY).
*/
-struct blkcg_gq *blkg_lookup_create(struct blkcg *blkcg,
- struct request_queue *q)
+struct blkcg_gq *__blkg_lookup_create(struct blkcg *blkcg,
+ struct request_queue *q)
{
struct blkcg_gq *blkg;

@@ -293,6 +293,28 @@ struct blkcg_gq *blkg_lookup_create(struct blkcg *blkcg,
}
}

+/**
+ * blkg_lookup_create - find or create a blkg
+ * @blkcg: target block cgroup
+ * @q: target request_queue
+ *
+ * This looks up or creates the blkg representing the unique pair
+ * of the blkcg and the request_queue.
+ */
+struct blkcg_gq *blkg_lookup_create(struct blkcg *blkcg,
+ struct request_queue *q)
+{
+ struct blkcg_gq *blkg = blkg_lookup(blkcg, q);
+
+ if (unlikely(!blkg)) {
+ spin_lock_irq(&q->queue_lock);
+ blkg = __blkg_lookup_create(blkcg, q);
+ spin_unlock_irq(&q->queue_lock);
+ }
+
+ return blkg;
+}
+
static void blkg_destroy(struct blkcg_gq *blkg)
{
struct blkcg *blkcg = blkg->blkcg;
diff --git a/block/blk-iolatency.c b/block/blk-iolatency.c
index fe0c4ca312ff..e6f68f15dee9 100644
--- a/block/blk-iolatency.c
+++ b/block/blk-iolatency.c
@@ -486,7 +486,7 @@ static void blkcg_iolatency_throttle(struct rq_qos *rqos, struct bio *bio)
blkg = blkg_lookup(blkcg, q);
if (unlikely(!blkg)) {
spin_lock_irq(&q->queue_lock);
- blkg = blkg_lookup_create(blkcg, q);
+ blkg = __blkg_lookup_create(blkcg, q);
if (IS_ERR(blkg))
blkg = NULL;
spin_unlock_irq(&q->queue_lock);
diff --git a/include/linux/blk-cgroup.h b/include/linux/blk-cgroup.h
index f619307171a6..b3b1a8187d23 100644
--- a/include/linux/blk-cgroup.h
+++ b/include/linux/blk-cgroup.h
@@ -181,6 +181,8 @@ extern struct cgroup_subsys_state * const blkcg_root_css;

struct blkcg_gq *blkg_lookup_slowpath(struct blkcg *blkcg,
struct request_queue *q, bool update_hint);
+struct blkcg_gq *__blkg_lookup_create(struct blkcg *blkcg,
+ struct request_queue *q);
struct blkcg_gq *blkg_lookup_create(struct blkcg *blkcg,
struct request_queue *q);
int blkcg_init_queue(struct request_queue *q);
@@ -799,7 +801,7 @@ static inline bool blkcg_bio_issue_check(struct request_queue *q,
blkg = blkg_lookup(blkcg, q);
if (unlikely(!blkg)) {
spin_lock_irq(&q->queue_lock);
- blkg = blkg_lookup_create(blkcg, q);
+ blkg = __blkg_lookup_create(blkcg, q);
if (IS_ERR(blkg))
blkg = NULL;
spin_unlock_irq(&q->queue_lock);
--
2.17.1


2018-12-04 19:50:42

by Josef Bacik

[permalink] [raw]
Subject: Re: [PATCH 06/14] blkcg: associate blkg when associating a device

On Tue, Dec 04, 2018 at 01:35:52PM -0500, Dennis Zhou wrote:
> Previously, blkg association was handled by controller specific code in
> blk-throttle and blk-iolatency. However, because a blkg represents a
> relationship between a blkcg and a request_queue, it makes sense to keep
> the blkg->q and bio->bi_disk->queue consistent.
>
> This patch moves association into the bio_set_dev() macro. This should
> cover the majority of cases where the device is set/changed keeping the
> two pointers consistent. Fallback code is added to
> blkcg_bio_issue_check() to catch any missing paths.
>
> Signed-off-by: Dennis Zhou <[email protected]>
> ---
> block/bio.c | 1 +
> block/blk-iolatency.c | 4 +---
> block/blk-throttle.c | 1 -
> include/linux/bio.h | 2 ++
> include/linux/blk-cgroup.h | 11 +++--------
> 5 files changed, 7 insertions(+), 12 deletions(-)
>
> diff --git a/block/bio.c b/block/bio.c
> index 41ebb3f8e2fc..1e852ab904aa 100644
> --- a/block/bio.c
> +++ b/block/bio.c
> @@ -2074,6 +2074,7 @@ void bio_associate_blkg(struct bio *bio)
>
> rcu_read_unlock();
> }
> +EXPORT_SYMBOL_GPL(bio_associate_blkg);
>
> /**
> * bio_disassociate_task - undo bio_associate_current()
> diff --git a/block/blk-iolatency.c b/block/blk-iolatency.c
> index cdbd10564e66..e6b47c255521 100644
> --- a/block/blk-iolatency.c
> +++ b/block/blk-iolatency.c
> @@ -472,14 +472,12 @@ static void check_scale_change(struct iolatency_grp *iolat)
> static void blkcg_iolatency_throttle(struct rq_qos *rqos, struct bio *bio)
> {
> struct blk_iolatency *blkiolat = BLKIOLATENCY(rqos);
> - struct blkcg_gq *blkg;
> + struct blkcg_gq *blkg = bio->bi_blkg;
> bool issue_as_root = bio_issue_as_root_blkg(bio);
>
> if (!blk_iolatency_enabled(blkiolat))
> return;
>
> - bio_associate_blkg(bio);
> - blkg = bio->bi_blkg;
> bio_issue_init(&bio->bi_issue, bio_sectors(bio));
>
> while (blkg && blkg->parent) {
> diff --git a/block/blk-throttle.c b/block/blk-throttle.c
> index 228c3a007ebc..1c6529df2002 100644
> --- a/block/blk-throttle.c
> +++ b/block/blk-throttle.c
> @@ -2118,7 +2118,6 @@ static inline void throtl_update_latency_buckets(struct throtl_data *td)
> static void blk_throtl_assoc_bio(struct bio *bio)
> {
> #ifdef CONFIG_BLK_DEV_THROTTLING_LOW
> - bio_associate_blkg(bio);
> bio_issue_init(&bio->bi_issue, bio_sectors(bio));
> #endif
> }
> diff --git a/include/linux/bio.h b/include/linux/bio.h
> index 62715a5a4f32..6ee2ea8b378a 100644
> --- a/include/linux/bio.h
> +++ b/include/linux/bio.h
> @@ -491,12 +491,14 @@ do { \
> bio_clear_flag(bio, BIO_THROTTLED);\
> (bio)->bi_disk = (bdev)->bd_disk; \
> (bio)->bi_partno = (bdev)->bd_partno; \
> + bio_associate_blkg(bio); \
> } while (0)
>
> #define bio_copy_dev(dst, src) \
> do { \
> (dst)->bi_disk = (src)->bi_disk; \
> (dst)->bi_partno = (src)->bi_partno; \
> + bio_clone_blkcg_association(dst, src); \
> } while (0)
>
> #define bio_dev(bio) \
> diff --git a/include/linux/blk-cgroup.h b/include/linux/blk-cgroup.h
> index c08e96e521ed..3c87ae71156f 100644
> --- a/include/linux/blk-cgroup.h
> +++ b/include/linux/blk-cgroup.h
> @@ -802,21 +802,17 @@ static inline bool blk_throtl_bio(struct request_queue *q, struct blkcg_gq *blkg
> static inline bool blkcg_bio_issue_check(struct request_queue *q,
> struct bio *bio)
> {
> - struct blkcg *blkcg;
> struct blkcg_gq *blkg;
> bool throtl = false;
>
> - rcu_read_lock();
> + if (!bio->bi_blkg)
> + bio_associate_blkg(bio);
>

Should we maybe WARN_ON_ONCE() here since this really shouldn't happen?
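Something like (sketch):

	if (WARN_ON_ONCE(!bio->bi_blkg))
		bio_associate_blkg(bio);
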
Otherwise you can add

Reviewed-by: Josef Bacik <[email protected]>

Thanks,

Josef

2018-12-04 19:51:37

by Josef Bacik

[permalink] [raw]
Subject: Re: [PATCH 07/14] blkcg: consolidate bio_issue_init() to be a part of core

On Tue, Dec 04, 2018 at 01:35:53PM -0500, Dennis Zhou wrote:
> bio_issue_init(), among other things, initializes the timestamp for
> an IO. Rather than have this logic handled by policies, this
> consolidates it onto the init paths (normal, clone, bounce clone).
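>
> i.e. all three init paths end up doing (sketch):
>
> 	bio_issue_init(&bio->bi_issue, bio_sectors(bio));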
>
> Signed-off-by: Dennis Zhou <[email protected]>
> Acked-by: Tejun Heo <[email protected]>
> Reviewed-by: Liu Bo <[email protected]>
> ---

Reviewed-by: Josef Bacik <[email protected]>

Thanks,

Josef

2018-12-04 19:54:03

by Josef Bacik

[permalink] [raw]
Subject: Re: [PATCH 11/14] blkcg: remove additional reference to the css

On Tue, Dec 04, 2018 at 01:35:57PM -0500, Dennis Zhou wrote:
> The previous patch in this series removed carrying around a pointer to
> the css in blkg. However, the blkg association logic still relied on
> taking a reference on the css to ensure we wouldn't fail in getting a
> reference for the blkg.
>
> Here the implicit dependency on the css is removed. The association
> continues to rely on the tryget logic walking up the blkg tree. This
> streamlines the three ways that association can happen: normal, swap,
> and writeback.
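>
> For illustration, the tryget walk is roughly (sketch; helper naming
> per the later patches in this series):
>
> 	/* walk up the blkg tree until a reference can be taken */
> 	while (blkg && !blkg_tryget(blkg))
> 		blkg = blkg->parent;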
>
> Signed-off-by: Dennis Zhou <[email protected]>
> Acked-by: Tejun Heo <[email protected]>

Reviewed-by: Josef Bacik <[email protected]>

Thanks,

Josef

2018-12-04 19:54:23

by Josef Bacik

[permalink] [raw]
Subject: Re: [PATCH 12/14] blkcg: remove bio_disassociate_task()

On Tue, Dec 04, 2018 at 01:35:58PM -0500, Dennis Zhou wrote:
> Now that a bio only holds a blkg reference, cleanup is simply
> putting back that reference. Remove bio_disassociate_task() as it just
> calls bio_disassociate_blkg() and call the latter directly.
>
> Signed-off-by: Dennis Zhou <[email protected]>
> Acked-by: Tejun Heo <[email protected]>

Reviewed-by: Josef Bacik <[email protected]>

Thanks,

Josef

2018-12-04 19:57:33

by Josef Bacik

[permalink] [raw]
Subject: Re: [PATCH 13/14] blkcg: change blkg reference counting to use percpu_ref

On Tue, Dec 04, 2018 at 01:35:59PM -0500, Dennis Zhou wrote:
> Every bio is now associated with a blkg, putting blkg_get, blkg_try_get,
> and blkg_put on the hot path. Switch over the refcnt in blkg to use
> percpu_ref.
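>
> A sketch of how the ref operations map onto the percpu_ref API
> (assuming blkg->refcnt becomes a struct percpu_ref):
>
> 	percpu_ref_get(&blkg->refcnt);		/* blkg_get() */
> 	percpu_ref_tryget(&blkg->refcnt);	/* blkg_try_get() */
> 	percpu_ref_put(&blkg->refcnt);		/* blkg_put() */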
>
> Signed-off-by: Dennis Zhou <[email protected]>
> Acked-by: Tejun Heo <[email protected]>

Reviewed-by: Josef Bacik <[email protected]>

Thanks,

Josef

2018-12-04 20:30:12

by Mike Snitzer

[permalink] [raw]
Subject: Re: [PATCH 05/14] dm: set the static flush bio device on demand

On Tue, Dec 04 2018 at 1:35pm -0500,
Dennis Zhou <[email protected]> wrote:

> The next patch changes the macro bio_set_dev() to associate a bio with a
> blkg based on the device set. However, dm creates a static bio to be
> used as the basis for cloning empty flush bios on creation. The
> bio_set_dev() call in alloc_dev() will cause problems with the next
> patch adding association to bio_set_dev() because the call is before the
> bdev is associated with a gendisk (bd_disk is %NULL). To get around
> this, set the device on the static bio every time and use that to clone
> to the other bios.
>
> Signed-off-by: Dennis Zhou <[email protected]>
> Cc: Alasdair Kergon <[email protected]>
> Cc: Mike Snitzer <[email protected]>
> ---
> block/bio.c | 1 +
> drivers/md/dm.c | 12 +++++++++++-
> 2 files changed, 12 insertions(+), 1 deletion(-)
>
> diff --git a/block/bio.c b/block/bio.c
> index 452b8e79b998..41ebb3f8e2fc 100644
> --- a/block/bio.c
> +++ b/block/bio.c
> @@ -2021,6 +2021,7 @@ void bio_disassociate_blkg(struct bio *bio)
> bio->bi_blkg = NULL;
> }
> }
> +EXPORT_SYMBOL_GPL(bio_disassociate_blkg);
>
> /**
> * __bio_associate_blkg - associate a bio with the a blkg
> diff --git a/drivers/md/dm.c b/drivers/md/dm.c
> index a733e4c920af..a2d6f8b33d23 100644
> --- a/drivers/md/dm.c
> +++ b/drivers/md/dm.c
> @@ -1417,10 +1417,21 @@ static int __send_empty_flush(struct clone_info *ci)
> unsigned target_nr = 0;
> struct dm_target *ti;
>
> + /*
> + * Empty flush uses a statically initialized bio as the base for
> + * cloning, &md->flush_bio. However, blkg association requires that

Would prefer:
"Empty flush uses a statically initialized bio, &md->flush_bio, as the
base for cloning. ..."

> + * a bdev is associated with a gendisk, which doesn't happen until the
> + * bdev is opened. So, blkg association is done at issue time of the
> + * flush rather than when the device is created in dm_alloc().

Another nit but I think you mean "alloc_dev()" here .......^

> + */
> + bio_set_dev(ci->bio, ci->io->md->bdev);
> +
> BUG_ON(bio_has_data(ci->bio));
> while ((ti = dm_table_get_target(ci->map, target_nr++)))
> __send_duplicate_bios(ci, ti, ti->num_flush_bios, NULL);
>
> + bio_disassociate_blkg(ci->bio);
> +
> return 0;
> }
>
> @@ -1939,7 +1950,6 @@ static struct mapped_device *alloc_dev(int minor)
> goto bad;
>
> bio_init(&md->flush_bio, NULL, 0);
> - bio_set_dev(&md->flush_bio, md->bdev);
> md->flush_bio.bi_opf = REQ_OP_WRITE | REQ_PREFLUSH | REQ_SYNC;
>
> dm_stats_init(&md->stats);

The top-level DM device's bdev->bd_disk is only assigned after the first
blkdev_get(), so I can see why this is needed.

Think this type of life-cycle quirk was the kind of thing that caused
blk cgroup issues with DM.

Left wondering whether there would be a better way of handling N flush
bios for DM devices. But for now this will have to do.

Acked-by: Mike Snitzer <[email protected]>

Thanks,
Mike

2018-12-04 22:16:25

by Dennis Zhou

[permalink] [raw]
Subject: Re: [PATCH 05/14] dm: set the static flush bio device on demand

On Tue, Dec 04, 2018 at 03:28:07PM -0500, Mike Snitzer wrote:
> On Tue, Dec 04 2018 at 1:35pm -0500,
> Dennis Zhou <[email protected]> wrote:
>
> > The next patch changes the macro bio_set_dev() to associate a bio with a
> > blkg based on the device set. However, dm creates a static bio to be
> > used as the basis for cloning empty flush bios on creation. The
> > bio_set_dev() call in alloc_dev() will cause problems with the next
> > patch adding association to bio_set_dev() because the call is before the
> > bdev is associated with a gendisk (bd_disk is %NULL). To get around
> > this, set the device on the static bio every time and use that to clone
> > to the other bios.
> >
> > Signed-off-by: Dennis Zhou <[email protected]>
> > Cc: Alasdair Kergon <[email protected]>
> > Cc: Mike Snitzer <[email protected]>
> > ---
> > block/bio.c | 1 +
> > drivers/md/dm.c | 12 +++++++++++-
> > 2 files changed, 12 insertions(+), 1 deletion(-)
> >
> > diff --git a/block/bio.c b/block/bio.c
> > index 452b8e79b998..41ebb3f8e2fc 100644
> > --- a/block/bio.c
> > +++ b/block/bio.c
> > @@ -2021,6 +2021,7 @@ void bio_disassociate_blkg(struct bio *bio)
> > bio->bi_blkg = NULL;
> > }
> > }
> > +EXPORT_SYMBOL_GPL(bio_disassociate_blkg);
> >
> > /**
> > * __bio_associate_blkg - associate a bio with the a blkg
> > diff --git a/drivers/md/dm.c b/drivers/md/dm.c
> > index a733e4c920af..a2d6f8b33d23 100644
> > --- a/drivers/md/dm.c
> > +++ b/drivers/md/dm.c
> > @@ -1417,10 +1417,21 @@ static int __send_empty_flush(struct clone_info *ci)
> > unsigned target_nr = 0;
> > struct dm_target *ti;
> >
> > + /*
> > + * Empty flush uses a statically initialized bio as the base for
> > + * cloning, &md->flush_bio. However, blkg association requires that
>
> Would prefer:
> "Empty flush uses a statically initialized bio, &md->flush_bio, as the
> base for cloning. ..."
>

Cool, that reads better.

> > + * a bdev is associated with a gendisk, which doesn't happen until the
> > + * bdev is opened. So, blkg association is done at issue time of the
> > + * flush rather than when the device is created in dm_alloc().
>
> Another nit but I think you mean "alloc_dev()" here .......^
>

Yeah, I did mean alloc_dev(). Fixed.

> > + */
> > + bio_set_dev(ci->bio, ci->io->md->bdev);
> > +
> > BUG_ON(bio_has_data(ci->bio));
> > while ((ti = dm_table_get_target(ci->map, target_nr++)))
> > __send_duplicate_bios(ci, ti, ti->num_flush_bios, NULL);
> >
> > + bio_disassociate_blkg(ci->bio);
> > +
> > return 0;
> > }
> >
> > @@ -1939,7 +1950,6 @@ static struct mapped_device *alloc_dev(int minor)
> > goto bad;
> >
> > bio_init(&md->flush_bio, NULL, 0);
> > - bio_set_dev(&md->flush_bio, md->bdev);
> > md->flush_bio.bi_opf = REQ_OP_WRITE | REQ_PREFLUSH | REQ_SYNC;
> >
> > dm_stats_init(&md->stats);
>
> The top-level DM device's bdev->bd_disk is only assigned after the first
> blkdev_get(), so I can see why this is needed.
>
> Think this type of life-cycle quirk was the kind of thing that caused
> blk cgroup issues with DM.
>
> Left wondering whether there would be a better way of handling N flush
> bios for DM devices. But for the now this will have to do.
>
> Acked-by: Mike Snitzer <[email protected]>
>

Thanks,
Dennis