2019-06-14 01:56:52

by Tejun Heo

Subject: [PATCHSET block/for-next] IO cost model based work-conserving proportional controller

One challenge of controlling IO resources is the lack of a trivially
observable cost metric. This distinguishes IO from CPU and memory,
where wallclock time and the number of bytes serve as accurate enough
approximations.

Bandwidth and iops are the most commonly used metrics for IO devices,
but depending on the type and specifics of the device, different IO
patterns easily lead to variations of multiple orders of magnitude,
rendering them useless for the purpose of IO capacity distribution.
While on-device time, with a lot of crutches, could serve as a useful
approximation for non-queued rotational devices, this is no longer
viable with modern devices, even the rotational ones.

While there is no cost metric we can trivially observe, it isn't a
complete mystery. For example, on a rotational device, seek cost
dominates while a contiguous transfer contributes a smaller amount
proportional to the size. If we can characterize at least the
relative costs of these different types of IOs, it should be possible
to implement a reasonable work-conserving proportional IO resource
distribution.

This patchset implements an IO cost model based work-conserving
proportional controller. It currently has a simple linear cost model
built in: each IO is classified as sequential or random and given a
base cost accordingly, and an additional size-proportional cost is
added on top. Each IO is given a cost based on the model and the
controller issues IOs for each cgroup according to their hierarchical
weight.
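To make the linear model concrete, here is a rough sketch of the
classification-plus-cost step described above. This is illustrative
only; the coefficient names and values below are hypothetical, not the
kernel's actual defaults.

```python
# Illustrative linear IO cost model: a base cost depending on whether
# the IO is sequential or random, plus a size-proportional component.
# All coefficients here are made-up placeholder values.

SEQ_BASE_COST = 10    # base cost units for a sequential IO (assumed)
RAND_BASE_COST = 80   # base cost units for a random IO (assumed)
COST_PER_PAGE = 1     # size-proportional cost per 4k page (assumed)

def io_cost(offset, size, last_end):
    """Return the modeled cost of one IO.

    An IO starting where the previous one ended is classified as
    sequential; anything else is random.
    """
    seq = (offset == last_end)
    base = SEQ_BASE_COST if seq else RAND_BASE_COST
    return base + COST_PER_PAGE * (size // 4096)

# A 64k sequential IO is far cheaper under this model than the same
# bytes issued as sixteen scattered random 4k IOs.
seq_cost = io_cost(4096, 65536, 4096)
rand_cost = sum(io_cost(i << 20, 4096, 0) for i in range(1, 17))
```

The point is only the shape of the model — relative, not absolute,
costs are what the controller needs to distribute capacity.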

By default, the controller adapts its overall IO rate so that it
doesn't build up buffer bloat in the request_queue layer, which
guarantees that the controller doesn't lose a significant amount of
total work. However, this may not provide sufficient differentiation,
as the underlying device may have a deep queue and not be fair in how
the queued IOs are serviced. The controller provides extra QoS
control knobs which allow tightening the control feedback loop as
necessary.
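The QoS feedback can be pictured as a simple per-period loop: if the
measured completion-latency percentile exceeds the configured
threshold, back the overall issue rate off; otherwise speed back up,
always clamped to the configured [min, max] scaling range. The sketch
below is a hedged illustration — the step size and function names are
hypothetical, not the in-kernel algorithm.

```python
# Illustrative one-step rate feedback: slow down when the latency
# percentile breaches its threshold, speed up otherwise, and clamp
# the result to the allowed scaling range.  Step size is made up.

def adjust_vrate(vrate_pct, lat_pct_us, lat_thresh_us,
                 min_pct=25.0, max_pct=200.0, step=5.0):
    """One period of the (illustrative) rate feedback loop."""
    if lat_pct_us > lat_thresh_us:
        vrate_pct -= step     # device looks saturated, back off
    else:
        vrate_pct += step     # latency is fine, reclaim capacity
    return max(min_pct, min(max_pct, vrate_pct))

# Two saturated periods followed by one healthy period:
vrate = 100.0
for lat in (45000, 45000, 30000):
    vrate = adjust_vrate(vrate, lat, lat_thresh_us=40000)
```

A narrower [min, max] range keeps behavior closer to the cost model;
a wider one lets the controller track large swings in device
performance, as described above.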

For more details on the control mechanism, implementation and
interface, please refer to the comment at the top of
block/blk-ioweight.c and Documentation/admin-guide/cgroup-v2.rst
changes in the "blkcg: implement blk-ioweight" patch.

Here are some test results. Each test run goes through the following
combinations, with each combination running for a minute. All tests
are performed against regular files on btrfs with deadline as the IO
scheduler. Random IOs are direct with a queue depth of 64.
Sequential IOs are normal buffered IOs.

high priority (weight=500)    low priority (weight=100)

Rand read                     None
ditto                         Rand read
ditto                         Seq read
ditto                         Rand write
ditto                         Seq write
Seq read                      None
ditto                         Rand read
ditto                         Seq read
ditto                         Rand write
ditto                         Seq write
Rand write                    None
ditto                         Rand read
ditto                         Seq read
ditto                         Rand write
ditto                         Seq write
Seq write                     None
ditto                         Rand read
ditto                         Seq read
ditto                         Rand write
ditto                         Seq write

* 7200RPM SATA hard disk
* No IO control
https://photos.app.goo.gl/1KBHn7ykpC1LXRkB8
* ioweight, QoS: None
https://photos.app.goo.gl/MLNQGxCtBQ8wAmjm7
* ioweight, QoS: rpct=95.00 rlat=40000 wpct=95.00 wlat=40000 min=25.00 max=200.00
https://photos.app.goo.gl/XqXHm3Mkbm9w6Db46
* NCQ-blacklisted SATA SSD (QD==1)
* No IO control
https://photos.app.goo.gl/wCTXeu2uJ6LYL4pk8
* ioweight, QoS: None
https://photos.app.goo.gl/T2HedKD2sywQgj7R9
* ioweight, QoS: rpct=95.00 rlat=20000 wpct=95.00 wlat=20000 min=50.00 max=200.00
https://photos.app.goo.gl/urBTV8XQc1UqPJJw7
* SATA SSD (QD==32)
* No IO control
https://photos.app.goo.gl/TjEVykuVudSQcryh6
* ioweight, QoS: None
https://photos.app.goo.gl/iyQBsky7bmM54Xiq7
* ioweight, QoS: rpct=95.00 rlat=10000 wpct=95.00 wlat=20000 min=50.00 max=400.00
https://photos.app.goo.gl/q1a6URLDxPLMrnHy5

Even without explicit QoS configuration, read-heavy scenarios obtain
acceptable differentiation. However, when write-heavy, the deep
buffering on the device side makes it difficult to maintain control.
With QoS parameters set, the differentiation is acceptable across all
combinations.

The implementation comes with default cost model parameters which are
selected automatically and should provide acceptable behavior across
most common devices. The parameters for hard drives and
consumer-grade SSDs seem pretty robust. The default parameter set and
selection criteria for high-end SSDs might need further adjustments.

It is fairly easy to configure the QoS parameters and, if needed, the
cost model coefficients. We'll follow up with tooling and further
documentation. Also, the last RFC patch in the series implements
support for a bpf-based custom cost function. Originally we thought
that we'd need per-device-type cost functions, but the simple linear
model now seems good enough to cover all common device classes. In
case custom cost functions become necessary, we can fully develop the
bpf based extension and also easily add different builtin cost models.

Andy Newell did the heavy lifting of analyzing IO workloads and device
characteristics, exploring various cost models, and determining the
default model and parameters to use.

Josef Bacik implemented a prototype which explored the use of
different types of cost metrics including on-device time and Andy's
linear model.

This patchset is on top of
cgroup/for-5.3
git://git.kernel.org/pub/scm/linux/kernel/git/tj/cgroup.git for-5.3
+ [PATCHSET block/for-linus] Assorted blkcg fixes
http://lkml.kernel.org/r/[email protected]
+ [PATCHSET btrfs/for-next] btrfs: fix cgroup writeback support
http://[email protected]

This patchset contains the following 10 patches.

0001-blkcg-pass-q-and-blkcg-into-blkcg_pol_alloc_pd_fn.patch
0002-blkcg-make-cpd_init_fn-optional.patch
0003-blkcg-separate-blkcg_conf_get_disk-out-of-blkg_conf_.patch
0004-block-rq_qos-add-rq_qos_merge.patch
0005-block-rq_qos-implement-rq_qos_ops-queue_depth_change.patch
0006-blkcg-s-RQ_QOS_CGROUP-RQ_QOS_LATENCY.patch
0007-blk-mq-add-optional-request-pre_start_time_ns.patch
0008-blkcg-implement-blk-ioweight.patch
0009-blkcg-add-tools-cgroup-monitor_ioweight.py.patch
0010-RFC-blkcg-implement-BPF_PROG_TYPE_IO_COST.patch

0001-0007 are prep patches.
0008 implements blk-ioweight.
0009 adds monitoring script.
0010 is the RFC patch for BPF cost function.

The patchset is also available in the following git branch.

git://git.kernel.org/pub/scm/linux/kernel/git/tj/cgroup.git review-iow

diffstat follows. Thanks.

Documentation/admin-guide/cgroup-v2.rst | 93
block/Kconfig | 12
block/Makefile | 1
block/bfq-cgroup.c | 5
block/blk-cgroup.c | 71
block/blk-core.c | 4
block/blk-iolatency.c | 8
block/blk-ioweight.c | 2509 +++++++++++++++++
block/blk-mq.c | 11
block/blk-rq-qos.c | 18
block/blk-rq-qos.h | 28
block/blk-settings.c | 2
block/blk-throttle.c | 6
block/blk-wbt.c | 18
block/blk-wbt.h | 4
block/blk.h | 8
block/ioctl.c | 4
include/linux/blk-cgroup.h | 4
include/linux/blk_types.h | 3
include/linux/blkdev.h | 7
include/linux/bpf_types.h | 3
include/trace/events/ioweight.h | 174 +
include/uapi/linux/bpf.h | 11
include/uapi/linux/fs.h | 2
tools/bpf/bpftool/feature.c | 3
tools/bpf/bpftool/main.h | 1
tools/cgroup/monitor_ioweight.py | 264 +
tools/include/uapi/linux/bpf.h | 11
tools/include/uapi/linux/fs.h | 2
tools/lib/bpf/libbpf.c | 2
tools/lib/bpf/libbpf_probes.c | 1
tools/testing/selftests/bpf/Makefile | 2
tools/testing/selftests/bpf/iocost_ctrl.c | 43
tools/testing/selftests/bpf/progs/iocost_linear_prog.c | 52
34 files changed, 3333 insertions(+), 54 deletions(-)

--
tejun


2019-06-14 01:56:59

by Tejun Heo

Subject: [PATCH 01/10] blkcg: pass @q and @blkcg into blkcg_pol_alloc_pd_fn()

Instead of @node, pass in @q and @blkcg so that the alloc function has
more context. This doesn't cause any behavior change and will be used
by the io.weight implementation.

Signed-off-by: Tejun Heo <[email protected]>
---
block/bfq-cgroup.c | 5 +++--
block/blk-cgroup.c | 6 +++---
block/blk-iolatency.c | 6 ++++--
block/blk-throttle.c | 6 ++++--
include/linux/blk-cgroup.h | 3 ++-
5 files changed, 16 insertions(+), 10 deletions(-)

diff --git a/block/bfq-cgroup.c b/block/bfq-cgroup.c
index b3796a40a61a..4193172ad20f 100644
--- a/block/bfq-cgroup.c
+++ b/block/bfq-cgroup.c
@@ -425,11 +425,12 @@ static void bfq_cpd_free(struct blkcg_policy_data *cpd)
kfree(cpd_to_bfqgd(cpd));
}

-static struct blkg_policy_data *bfq_pd_alloc(gfp_t gfp, int node)
+static struct blkg_policy_data *bfq_pd_alloc(gfp_t gfp, struct request_queue *q,
+ struct blkcg *blkcg)
{
struct bfq_group *bfqg;

- bfqg = kzalloc_node(sizeof(*bfqg), gfp, node);
+ bfqg = kzalloc_node(sizeof(*bfqg), gfp, q->node);
if (!bfqg)
return NULL;

diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
index 238d5d2d0691..30d3a0fbccac 100644
--- a/block/blk-cgroup.c
+++ b/block/blk-cgroup.c
@@ -174,7 +174,7 @@ static struct blkcg_gq *blkg_alloc(struct blkcg *blkcg, struct request_queue *q,
continue;

/* alloc per-policy data and attach it to blkg */
- pd = pol->pd_alloc_fn(gfp_mask, q->node);
+ pd = pol->pd_alloc_fn(gfp_mask, q, blkcg);
if (!pd)
goto err_free;

@@ -1405,7 +1405,7 @@ int blkcg_activate_policy(struct request_queue *q,
blk_mq_freeze_queue(q);
pd_prealloc:
if (!pd_prealloc) {
- pd_prealloc = pol->pd_alloc_fn(GFP_KERNEL, q->node);
+ pd_prealloc = pol->pd_alloc_fn(GFP_KERNEL, q, &blkcg_root);
if (!pd_prealloc) {
ret = -ENOMEM;
goto out_bypass_end;
@@ -1421,7 +1421,7 @@ int blkcg_activate_policy(struct request_queue *q,
if (blkg->pd[pol->plid])
continue;

- pd = pol->pd_alloc_fn(GFP_NOWAIT | __GFP_NOWARN, q->node);
+ pd = pol->pd_alloc_fn(GFP_NOWAIT | __GFP_NOWARN, q, &blkcg_root);
if (!pd)
swap(pd, pd_prealloc);
if (!pd) {
diff --git a/block/blk-iolatency.c b/block/blk-iolatency.c
index 17896bb3aaf2..fa47a6485725 100644
--- a/block/blk-iolatency.c
+++ b/block/blk-iolatency.c
@@ -950,11 +950,13 @@ static size_t iolatency_pd_stat(struct blkg_policy_data *pd, char *buf,
}


-static struct blkg_policy_data *iolatency_pd_alloc(gfp_t gfp, int node)
+static struct blkg_policy_data *iolatency_pd_alloc(gfp_t gfp,
+ struct request_queue *q,
+ struct blkcg *blkcg)
{
struct iolatency_grp *iolat;

- iolat = kzalloc_node(sizeof(*iolat), gfp, node);
+ iolat = kzalloc_node(sizeof(*iolat), gfp, q->node);
if (!iolat)
return NULL;
iolat->stats = __alloc_percpu_gfp(sizeof(struct latency_stat),
diff --git a/block/blk-throttle.c b/block/blk-throttle.c
index 9ea7c0ecad10..3bb69a17c4b3 100644
--- a/block/blk-throttle.c
+++ b/block/blk-throttle.c
@@ -478,12 +478,14 @@ static void throtl_service_queue_init(struct throtl_service_queue *sq)
timer_setup(&sq->pending_timer, throtl_pending_timer_fn, 0);
}

-static struct blkg_policy_data *throtl_pd_alloc(gfp_t gfp, int node)
+static struct blkg_policy_data *throtl_pd_alloc(gfp_t gfp,
+ struct request_queue *q,
+ struct blkcg *blkcg)
{
struct throtl_grp *tg;
int rw;

- tg = kzalloc_node(sizeof(*tg), gfp, node);
+ tg = kzalloc_node(sizeof(*tg), gfp, q->node);
if (!tg)
return NULL;

diff --git a/include/linux/blk-cgroup.h b/include/linux/blk-cgroup.h
index ffb2f88e87c6..1ed27977f88f 100644
--- a/include/linux/blk-cgroup.h
+++ b/include/linux/blk-cgroup.h
@@ -151,7 +151,8 @@ typedef struct blkcg_policy_data *(blkcg_pol_alloc_cpd_fn)(gfp_t gfp);
typedef void (blkcg_pol_init_cpd_fn)(struct blkcg_policy_data *cpd);
typedef void (blkcg_pol_free_cpd_fn)(struct blkcg_policy_data *cpd);
typedef void (blkcg_pol_bind_cpd_fn)(struct blkcg_policy_data *cpd);
-typedef struct blkg_policy_data *(blkcg_pol_alloc_pd_fn)(gfp_t gfp, int node);
+typedef struct blkg_policy_data *(blkcg_pol_alloc_pd_fn)(gfp_t gfp,
+ struct request_queue *q, struct blkcg *blkcg);
typedef void (blkcg_pol_init_pd_fn)(struct blkg_policy_data *pd);
typedef void (blkcg_pol_online_pd_fn)(struct blkg_policy_data *pd);
typedef void (blkcg_pol_offline_pd_fn)(struct blkg_policy_data *pd);
--
2.17.1

2019-06-14 01:57:03

by Tejun Heo

Subject: [PATCH 02/10] blkcg: make ->cpd_init_fn() optional

For policies which can do enough initialization from ->cpd_alloc_fn(),
make ->cpd_init_fn() optional.

Signed-off-by: Tejun Heo <[email protected]>
---
block/blk-cgroup.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
index 30d3a0fbccac..60ad9b96e6eb 100644
--- a/block/blk-cgroup.c
+++ b/block/blk-cgroup.c
@@ -1534,7 +1534,8 @@ int blkcg_policy_register(struct blkcg_policy *pol)
blkcg->cpd[pol->plid] = cpd;
cpd->blkcg = blkcg;
cpd->plid = pol->plid;
- pol->cpd_init_fn(cpd);
+ if (pol->cpd_init_fn)
+ pol->cpd_init_fn(cpd);
}
}

--
2.17.1

2019-06-14 01:57:28

by Tejun Heo

Subject: [PATCH 05/10] block/rq_qos: implement rq_qos_ops->queue_depth_changed()

wbt already gets notified of queue depth changes through
wbt_set_queue_depth(). Generalize it into
rq_qos_ops->queue_depth_changed() so that other rq_qos policies can
easily hook into the event too.

Signed-off-by: Tejun Heo <[email protected]>
---
block/blk-rq-qos.c | 9 +++++++++
block/blk-rq-qos.h | 8 ++++++++
block/blk-settings.c | 2 +-
block/blk-wbt.c | 18 ++++++++----------
block/blk-wbt.h | 4 ----
5 files changed, 26 insertions(+), 15 deletions(-)

diff --git a/block/blk-rq-qos.c b/block/blk-rq-qos.c
index 7debcaf1ee53..fb11652348fc 100644
--- a/block/blk-rq-qos.c
+++ b/block/blk-rq-qos.c
@@ -101,6 +101,15 @@ void __rq_qos_done_bio(struct rq_qos *rqos, struct bio *bio)
} while (rqos);
}

+void __rq_qos_queue_depth_changed(struct rq_qos *rqos)
+{
+ do {
+ if (rqos->ops->queue_depth_changed)
+ rqos->ops->queue_depth_changed(rqos);
+ rqos = rqos->next;
+ } while (rqos);
+}
+
/*
* Return true, if we can't increase the depth further by scaling
*/
diff --git a/block/blk-rq-qos.h b/block/blk-rq-qos.h
index 8e426a8505b6..e15b6907b76d 100644
--- a/block/blk-rq-qos.h
+++ b/block/blk-rq-qos.h
@@ -41,6 +41,7 @@ struct rq_qos_ops {
void (*done)(struct rq_qos *, struct request *);
void (*done_bio)(struct rq_qos *, struct bio *);
void (*cleanup)(struct rq_qos *, struct bio *);
+ void (*queue_depth_changed)(struct rq_qos *);
void (*exit)(struct rq_qos *);
const struct blk_mq_debugfs_attr *debugfs_attrs;
};
@@ -138,6 +139,7 @@ void __rq_qos_throttle(struct rq_qos *rqos, struct bio *bio);
void __rq_qos_track(struct rq_qos *rqos, struct request *rq, struct bio *bio);
void __rq_qos_merge(struct rq_qos *rqos, struct request *rq, struct bio *bio);
void __rq_qos_done_bio(struct rq_qos *rqos, struct bio *bio);
+void __rq_qos_queue_depth_changed(struct rq_qos *rqos);

static inline void rq_qos_cleanup(struct request_queue *q, struct bio *bio)
{
@@ -194,6 +196,12 @@ static inline void rq_qos_merge(struct request_queue *q, struct request *rq,
__rq_qos_merge(q->rq_qos, rq, bio);
}

+static inline void rq_qos_queue_depth_changed(struct request_queue *q)
+{
+ if (q->rq_qos)
+ __rq_qos_queue_depth_changed(q->rq_qos);
+}
+
void rq_qos_exit(struct request_queue *);

#endif
diff --git a/block/blk-settings.c b/block/blk-settings.c
index 2ae348c101a0..df323ea448de 100644
--- a/block/blk-settings.c
+++ b/block/blk-settings.c
@@ -804,7 +804,7 @@ EXPORT_SYMBOL(blk_queue_update_dma_alignment);
void blk_set_queue_depth(struct request_queue *q, unsigned int depth)
{
q->queue_depth = depth;
- wbt_set_queue_depth(q, depth);
+ rq_qos_queue_depth_changed(q);
}
EXPORT_SYMBOL(blk_set_queue_depth);

diff --git a/block/blk-wbt.c b/block/blk-wbt.c
index 313f45a37e9d..8118f95a194b 100644
--- a/block/blk-wbt.c
+++ b/block/blk-wbt.c
@@ -629,15 +629,6 @@ static void wbt_requeue(struct rq_qos *rqos, struct request *rq)
}
}

-void wbt_set_queue_depth(struct request_queue *q, unsigned int depth)
-{
- struct rq_qos *rqos = wbt_rq_qos(q);
- if (rqos) {
- RQWB(rqos)->rq_depth.queue_depth = depth;
- __wbt_update_limits(RQWB(rqos));
- }
-}
-
void wbt_set_write_cache(struct request_queue *q, bool write_cache_on)
{
struct rq_qos *rqos = wbt_rq_qos(q);
@@ -689,6 +680,12 @@ static int wbt_data_dir(const struct request *rq)
return -1;
}

+static void wbt_queue_depth_changed(struct rq_qos *rqos)
+{
+ RQWB(rqos)->rq_depth.queue_depth = blk_queue_depth(rqos->q);
+ __wbt_update_limits(RQWB(rqos));
+}
+
static void wbt_exit(struct rq_qos *rqos)
{
struct rq_wb *rwb = RQWB(rqos);
@@ -811,6 +808,7 @@ static struct rq_qos_ops wbt_rqos_ops = {
.requeue = wbt_requeue,
.done = wbt_done,
.cleanup = wbt_cleanup,
+ .queue_depth_changed = wbt_queue_depth_changed,
.exit = wbt_exit,
#ifdef CONFIG_BLK_DEBUG_FS
.debugfs_attrs = wbt_debugfs_attrs,
@@ -853,7 +851,7 @@ int wbt_init(struct request_queue *q)

rwb->min_lat_nsec = wbt_default_latency_nsec(q);

- wbt_set_queue_depth(q, blk_queue_depth(q));
+ wbt_queue_depth_changed(&rwb->rqos);
wbt_set_write_cache(q, test_bit(QUEUE_FLAG_WC, &q->queue_flags));

return 0;
diff --git a/block/blk-wbt.h b/block/blk-wbt.h
index f47218d5b3b2..8e4e37660971 100644
--- a/block/blk-wbt.h
+++ b/block/blk-wbt.h
@@ -95,7 +95,6 @@ void wbt_enable_default(struct request_queue *);
u64 wbt_get_min_lat(struct request_queue *q);
void wbt_set_min_lat(struct request_queue *q, u64 val);

-void wbt_set_queue_depth(struct request_queue *, unsigned int);
void wbt_set_write_cache(struct request_queue *, bool);

u64 wbt_default_latency_nsec(struct request_queue *);
@@ -118,9 +117,6 @@ static inline void wbt_disable_default(struct request_queue *q)
static inline void wbt_enable_default(struct request_queue *q)
{
}
-static inline void wbt_set_queue_depth(struct request_queue *q, unsigned int depth)
-{
-}
static inline void wbt_set_write_cache(struct request_queue *q, bool wc)
{
}
--
2.17.1

2019-06-14 01:57:33

by Tejun Heo

Subject: [PATCH 06/10] blkcg: s/RQ_QOS_CGROUP/RQ_QOS_LATENCY/

io.weight is gonna be another rq_qos cgroup mechanism. Let's rename
RQ_QOS_CGROUP which is being used by io.latency to RQ_QOS_LATENCY in
preparation.

Signed-off-by: Tejun Heo <[email protected]>
---
block/blk-iolatency.c | 2 +-
block/blk-rq-qos.h | 8 ++++----
2 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/block/blk-iolatency.c b/block/blk-iolatency.c
index fa47a6485725..7d1dbe757063 100644
--- a/block/blk-iolatency.c
+++ b/block/blk-iolatency.c
@@ -744,7 +744,7 @@ int blk_iolatency_init(struct request_queue *q)
return -ENOMEM;

rqos = &blkiolat->rqos;
- rqos->id = RQ_QOS_CGROUP;
+ rqos->id = RQ_QOS_LATENCY;
rqos->ops = &blkcg_iolatency_ops;
rqos->q = q;

diff --git a/block/blk-rq-qos.h b/block/blk-rq-qos.h
index e15b6907b76d..5f8b75826a98 100644
--- a/block/blk-rq-qos.h
+++ b/block/blk-rq-qos.h
@@ -14,7 +14,7 @@ struct blk_mq_debugfs_attr;

enum rq_qos_id {
RQ_QOS_WBT,
- RQ_QOS_CGROUP,
+ RQ_QOS_LATENCY,
};

struct rq_wait {
@@ -74,7 +74,7 @@ static inline struct rq_qos *wbt_rq_qos(struct request_queue *q)

static inline struct rq_qos *blkcg_rq_qos(struct request_queue *q)
{
- return rq_qos_id(q, RQ_QOS_CGROUP);
+ return rq_qos_id(q, RQ_QOS_LATENCY);
}

static inline const char *rq_qos_id_to_name(enum rq_qos_id id)
@@ -82,8 +82,8 @@ static inline const char *rq_qos_id_to_name(enum rq_qos_id id)
switch (id) {
case RQ_QOS_WBT:
return "wbt";
- case RQ_QOS_CGROUP:
- return "cgroup";
+ case RQ_QOS_LATENCY:
+ return "latency";
}
return "unknown";
}
--
2.17.1

2019-06-14 01:57:39

by Tejun Heo

Subject: [PATCH 09/10] blkcg: add tools/cgroup/monitor_ioweight.py

Instead of mucking with debugfs and ->pd_stat(), add a drgn based
monitoring script.

Signed-off-by: Tejun Heo <[email protected]>
Cc: Omar Sandoval <[email protected]>
---
block/blk-ioweight.c | 21 +++
tools/cgroup/monitor_ioweight.py | 264 +++++++++++++++++++++++++++++++
2 files changed, 285 insertions(+)
create mode 100644 tools/cgroup/monitor_ioweight.py

diff --git a/block/blk-ioweight.c b/block/blk-ioweight.c
index d10249f5774e..3d9fc1a631be 100644
--- a/block/blk-ioweight.c
+++ b/block/blk-ioweight.c
@@ -146,6 +146,27 @@
* donate and should take back how much requires hweight propagations
* anyway making it easier to implement and understand as a separate
* mechanism.
+ *
+ * 3. Monitoring
+ *
+ * Instead of debugfs or other clumsy monitoring mechanisms, this
+ * controller uses a drgn based monitoring script -
+ * tools/cgroup/monitor_ioweight.py. For details on drgn, please see
+ * https://github.com/osandov/drgn. The output looks like the following.
+ *
+ * sdb RUN per=300ms cur_per=234.218:v203.695 busy= +1 vrate= 62.12%
+ * active weight hweight% inflt% del_ms usages%
+ * test/a * 50/ 50 33.33/ 33.33 27.65 0*041 033:033:033
+ * test/b * 100/ 100 66.67/ 66.67 17.56 0*000 066:079:077
+ *
+ * - per : Timer period
+ * - cur_per : Internal wall and device vtime clock
+ * - vrate : Device virtual time rate against wall clock
+ * - weight : Surplus-adjusted and configured weights
+ * - hweight : Surplus-adjusted and configured hierarchical weights
+ * - inflt : The percentage of in-flight IO cost at the end of last period
+ * - del_ms : Deferred issuer delay induction level and duration
+ * - usages : Usage history
*/

#include <linux/kernel.h>
diff --git a/tools/cgroup/monitor_ioweight.py b/tools/cgroup/monitor_ioweight.py
new file mode 100644
index 000000000000..3cd432772e52
--- /dev/null
+++ b/tools/cgroup/monitor_ioweight.py
@@ -0,0 +1,264 @@
+#!/usr/bin/env -S drgn -k
+#
+# This is a drgn script to monitor the blk-ioweight cgroup controller.
+# See the comment at the top of block/blk-ioweight.c for more details.
+# For drgn, visit https://github.com/osandov/drgn.
+#
+
+import sys
+import re
+import time
+import json
+
+import drgn
+from drgn import container_of
+from drgn.helpers.linux.list import list_for_each_entry,list_empty
+from drgn.helpers.linux.radixtree import radix_tree_for_each,radix_tree_lookup
+
+import argparse
+parser = argparse.ArgumentParser()
+parser.add_argument('devname', metavar='DEV',
+ help='Target block device name (e.g. sda)')
+parser.add_argument('--cgroup', action='append', metavar='REGEX',
+ help='Regex for target cgroups')
+parser.add_argument('--interval', '-i', metavar='SECONDS', type=float, default=1,
+ help='Monitoring interval in seconds')
+parser.add_argument('--json', action='store_true',
+ help='Output in json')
+args = parser.parse_args()
+
+def err(s):
+ print(s, file=sys.stderr, flush=True)
+ sys.exit(1)
+
+try:
+ blkcg_root = prog['blkcg_root']
+ plid = prog['blkcg_policy_iow'].plid.value_()
+except:
+ err('The kernel does not have ioweight enabled')
+
+IOW_RUNNING = prog['IOW_RUNNING'].value_()
+NR_USAGE_SLOTS = prog['NR_USAGE_SLOTS'].value_()
+HWEIGHT_WHOLE = prog['HWEIGHT_WHOLE'].value_()
+VTIME_PER_SEC = prog['VTIME_PER_SEC'].value_()
+VTIME_PER_USEC = prog['VTIME_PER_USEC'].value_()
+AUTOP_SSD_FAST = prog['AUTOP_SSD_FAST'].value_()
+AUTOP_SSD_DFL = prog['AUTOP_SSD_DFL'].value_()
+AUTOP_SSD_QD1 = prog['AUTOP_SSD_QD1'].value_()
+AUTOP_HDD = prog['AUTOP_HDD'].value_()
+
+autop_names = {
+ AUTOP_SSD_FAST: 'ssd_fast',
+ AUTOP_SSD_DFL: 'ssd_dfl',
+ AUTOP_SSD_QD1: 'ssd_qd1',
+ AUTOP_HDD: 'hdd',
+}
+
+class BlkgIterator:
+ def blkcg_name(blkcg):
+ return blkcg.css.cgroup.kn.name.string_().decode('utf-8')
+
+ def walk(self, blkcg, q_id, parent_path):
+ if not self.include_dying and \
+ not (blkcg.css.flags.value_() & prog['CSS_ONLINE'].value_()):
+ return
+
+ name = BlkgIterator.blkcg_name(blkcg)
+ path = parent_path + '/' + name if parent_path else name
+ blkg = drgn.Object(prog, 'struct blkcg_gq',
+ address=radix_tree_lookup(blkcg.blkg_tree, q_id))
+ if not blkg.address_:
+ return
+
+ self.blkgs.append((path if path else '/', blkg))
+
+ for c in list_for_each_entry('struct blkcg',
+ blkcg.css.children.address_of_(), 'css.sibling'):
+ self.walk(c, q_id, path)
+
+ def __init__(self, root_blkcg, q_id, include_dying=False):
+ self.include_dying = include_dying
+ self.blkgs = []
+ self.walk(root_blkcg, q_id, '')
+
+ def __iter__(self):
+ return iter(self.blkgs)
+
+class IowStat:
+ def __init__(self, iow):
+ global autop_names
+
+ self.enabled = iow.enabled.value_()
+ self.running = iow.running.value_() == IOW_RUNNING
+ self.period_ms = round(iow.period_us.value_() / 1_000)
+ self.period_at = iow.period_at.value_() / 1_000_000
+ self.vperiod_at = iow.period_at_vtime.value_() / VTIME_PER_SEC
+ self.vrate_pct = iow.vtime_rate.counter.value_() * 100 / VTIME_PER_USEC
+ self.busy_level = iow.busy_level.value_()
+ self.autop_idx = iow.autop_idx.value_()
+ self.user_cost_model = iow.user_cost_model.value_()
+ self.user_qos_params = iow.user_qos_params.value_()
+
+ if self.autop_idx in autop_names:
+ self.autop_name = autop_names[self.autop_idx]
+ else:
+ self.autop_name = '?'
+
+ def dict(self, now):
+ return { 'device' : devname,
+ 'timestamp' : now,
+ 'enabled' : self.enabled,
+ 'running' : self.running,
+ 'period_ms' : self.period_ms,
+ 'period_at' : self.period_at,
+ 'period_vtime_at' : self.vperiod_at,
+ 'busy_level' : self.busy_level,
+ 'vrate_pct' : self.vrate_pct, }
+
+ def table_preamble_str(self):
+ state = ('RUN' if self.running else 'IDLE') if self.enabled else 'OFF'
+ output = f'{devname} {state:4} ' \
+ f'per={self.period_ms}ms ' \
+ f'cur_per={self.period_at:.3f}:v{self.vperiod_at:.3f} ' \
+ f'busy={self.busy_level:+3} ' \
+ f'vrate={self.vrate_pct:6.2f}% ' \
+ f'params={self.autop_name}'
+ if self.user_cost_model or self.user_qos_params:
+ output += f'({"C" if self.user_cost_model else ""}{"Q" if self.user_qos_params else ""})'
+ return output
+
+ def table_header_str(self):
+ return f'{"":25} active {"weight":>9} {"hweight%":>13} {"inflt%":>6} ' \
+ f'{"del_ms":>6} {"usages%"}'
+
+class IowgStat:
+ def __init__(self, iowg):
+ iow = iowg.iow
+ blkg = iowg.pd.blkg
+
+ self.is_active = not list_empty(iowg.active_list.address_of_())
+ self.weight = iowg.weight.value_()
+ self.active = iowg.active.value_()
+ self.inuse = iowg.inuse.value_()
+ self.hwa_pct = iowg.hweight_active.value_() * 100 / HWEIGHT_WHOLE
+ self.hwi_pct = iowg.hweight_inuse.value_() * 100 / HWEIGHT_WHOLE
+
+ vdone = iowg.done_vtime.counter.value_()
+ vtime = iowg.vtime.counter.value_()
+ vrate = iow.vtime_rate.counter.value_()
+ period_vtime = iow.period_us.value_() * vrate
+ if period_vtime:
+ self.inflight_pct = (vtime - vdone) * 100 / period_vtime
+ else:
+ self.inflight_pct = 0
+
+ self.use_delay = min(blkg.use_delay.counter.value_(), 99)
+ self.delay_ms = min(round(blkg.delay_nsec.counter.value_() / 1_000_000), 999)
+
+ usage_idx = iowg.usage_idx.value_()
+ self.usages = []
+ self.usage = 0
+ for i in range(NR_USAGE_SLOTS):
+ usage = iowg.usages[(usage_idx + i) % NR_USAGE_SLOTS].value_()
+ upct = min(usage * 100 / HWEIGHT_WHOLE, 999)
+ self.usages.append(upct)
+ self.usage = max(self.usage, upct)
+
+ def dict(self, now, path):
+ out = { 'cgroup' : path,
+ 'timestamp' : now,
+ 'is_active' : self.is_active,
+ 'weight' : self.weight,
+ 'weight_active' : self.active,
+ 'weight_inuse' : self.inuse,
+ 'hweight_active_pct' : self.hwa_pct,
+ 'hweight_inuse_pct' : self.hwi_pct,
+ 'inflight_pct' : self.inflight_pct,
+ 'use_delay' : self.use_delay,
+ 'delay_ms' : self.delay_ms,
+ 'usage_pct' : self.usage }
+ for i in range(len(self.usages)):
+ out[f'usage_pct_{i}'] = f'{self.usages[i]}'
+ return out
+
+ def table_row_str(self, path):
+ out = f'{path[-28:]:28} ' \
+ f'{"*" if self.is_active else " "} ' \
+ f'{self.inuse:5}/{self.active:5} ' \
+ f'{self.hwi_pct:6.2f}/{self.hwa_pct:6.2f} ' \
+ f'{self.inflight_pct:6.2f} ' \
+ f'{self.use_delay:2}*{self.delay_ms:03} '
+ for u in self.usages:
+ out += f'{round(u):03d}:'
+ out = out.rstrip(':')
+ return out
+
+# handle args
+table_fmt = not args.json
+interval = args.interval
+devname = args.devname
+
+if args.json:
+ table_fmt = False
+
+re_str = None
+for r in (args.cgroup or []):
+ if re_str is None:
+ re_str = r
+ else:
+ re_str += '|' + r
+
+filter_re = re.compile(re_str) if re_str else None
+
+# Locate the roots
+q_id = None
+root_iowg = None
+iow = None
+
+for i, ptr in radix_tree_for_each(blkcg_root.blkg_tree):
+ blkg = drgn.Object(prog, 'struct blkcg_gq', address=ptr)
+ try:
+ if devname == blkg.q.kobj.parent.name.string_().decode('utf-8'):
+ q_id = blkg.q.id.value_()
+ if blkg.pd[plid]:
+ root_iowg = container_of(blkg.pd[plid], 'struct iow_gq', 'pd')
+ iow = root_iowg.iow
+ break
+ except:
+ pass
+
+if iow is None:
+ err(f'Could not find iow for {devname}')
+
+# Keep printing
+while True:
+ now = time.time()
+ iowstat = IowStat(iow)
+ output = ''
+
+ if table_fmt:
+ output += '\n' + iowstat.table_preamble_str()
+ output += '\n' + iowstat.table_header_str()
+ else:
+ output += json.dumps(iowstat.dict(now))
+
+ for path, blkg in BlkgIterator(blkcg_root, q_id):
+ if filter_re and not filter_re.match(path):
+ continue
+ if not blkg.pd[plid]:
+ continue
+
+ iowg = container_of(blkg.pd[plid], 'struct iow_gq', 'pd')
+ iowg_stat = IowgStat(iowg)
+
+ if not filter_re and not iowg_stat.is_active:
+ continue
+
+ if table_fmt:
+ output += '\n' + iowg_stat.table_row_str(path)
+ else:
+ output += '\n' + json.dumps(iowg_stat.dict(now, path))
+
+ print(output)
+ sys.stdout.flush()
+ time.sleep(interval)
--
2.17.1

2019-06-14 01:57:39

by Tejun Heo

Subject: [PATCH 08/10] blkcg: implement blk-ioweight

This patchset implements an IO cost model based work-conserving
proportional controller.

While io.latency provides the capability to comprehensively prioritize
and protect IOs depending on the cgroups, its protection is binary -
the lowest latency target cgroup which is suffering is protected at
the cost of all others. In many use cases including stacking multiple
workload containers in a single system, it's necessary to distribute
IO capacity with better granularity.

One challenge of controlling IO resources is the lack of trivially
observable cost metric. The most common metrics - bandwidth and iops
- can be off by orders of magnitude depending on the device type and
IO pattern. However, the cost isn't a complete mystery. Given
several key attributes, we can make fairly reliable predictions on how
expensive a given stream of IOs would be, at least compared to other
IO patterns.

The function which determines the cost of a given IO is the IO cost
model for the device. This controller distributes IO capacity based
on the costs estimated by such a model. The more accurate the cost
model the better, but the controller adapts based on IO completion
latency and, as long as the relative costs across different IO
patterns are consistent and sensible, it'll adapt to the actual
performance of the device.

Currently, the only implemented cost model is a simple linear one with
a few sets of default parameters for different classes of device.
This covers most common devices reasonably well. All the
infrastructure to tune and add different cost models is already in
place and a later patch will also allow using bpf progs for cost
models.

Please see the top comment in blk-ioweight.c and documentation for
more details.
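As a rough illustration of the hierarchical weight distribution the
controller performs, a cgroup's share of the device is the product,
along its path from the root, of its weight divided by the sum of its
siblings' weights. The sketch below uses hypothetical helper names
and a made-up tree; it is not the in-kernel computation.

```python
# Illustrative hierarchical-weight share: multiply each level's
# weight fraction down the cgroup path.  Names and data are made up.

def hweight(path, weights, children):
    """Fraction of device capacity for the cgroup at the end of `path`."""
    frac = 1.0
    for parent, node in zip(path, path[1:]):
        total = sum(weights[c] for c in children[parent])
        frac *= weights[node] / total
    return frac

weights = {'test': 100, 'test/a': 50, 'test/b': 100}
children = {'/': ['test'], 'test': ['test/a', 'test/b']}

# With 'test' alone under the root, test/a's 50 against test/b's 100
# yields a one-third hierarchical share:
share_a = hweight(['/', 'test', 'test/a'], weights, children)
```

This is the same shape as the hweight% column reported by the
monitoring script later in the series.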

Signed-off-by: Tejun Heo <[email protected]>
Cc: Andy Newell <[email protected]>
Cc: Josef Bacik <[email protected]>
---
Documentation/admin-guide/cgroup-v2.rst | 93 +
block/Kconfig | 9 +
block/Makefile | 1 +
block/blk-ioweight.c | 2356 +++++++++++++++++++++++
block/blk-rq-qos.h | 3 +
include/linux/blk_types.h | 3 +
include/trace/events/ioweight.h | 174 ++
7 files changed, 2639 insertions(+)
create mode 100644 block/blk-ioweight.c
create mode 100644 include/trace/events/ioweight.h

diff --git a/Documentation/admin-guide/cgroup-v2.rst b/Documentation/admin-guide/cgroup-v2.rst
index a5c845338d6d..e66ee2c20b3b 100644
--- a/Documentation/admin-guide/cgroup-v2.rst
+++ b/Documentation/admin-guide/cgroup-v2.rst
@@ -1425,6 +1425,99 @@ IO Interface Files
8:16 rbytes=1459200 wbytes=314773504 rios=192 wios=353 dbytes=0 dios=0
8:0 rbytes=90430464 wbytes=299008000 rios=8950 wios=1252 dbytes=50331648 dios=3021

+ io.weight.qos
+ A read-write nested-keyed file which exists only on the root
+ cgroup.
+
+ This file configures the Quality of Service of the IO cost
+ model based proportional controller
+ (CONFIG_BLK_CGROUP_IOWEIGHT). Lines are keyed by $MAJ:$MIN
+ device numbers and not ordered. The line for a given device
+ is populated on the first write for the device on
+ "io.weight.qos" or "io.weight.cost_model". The following
+ nested keys are defined.
+
+ ====== =====================================
+ enable Weight-based control enable
+ ctrl "auto" or "user"
+ rpct Read latency percentile [0, 100]
+ rlat Read latency threshold
+ wpct Write latency percentile [0, 100]
+ wlat Write latency threshold
+ min Minimum scaling percentage [1, 10000]
+ max Maximum scaling percentage [1, 10000]
+ ====== =====================================
+
+ The controller is disabled by default and can be enabled by
+ setting "enable" to 1. "rpct" and "wpct" parameters default
+ to zero and the controller uses internal device saturation
+ state to adjust the overall IO rate between "min" and "max".
+
+ When a better control quality is needed, latency QoS
+ parameters can be configured. For example::
+
+ 8:16 enable=1 ctrl=auto rpct=95.00 rlat=75000 wpct=95.00 wlat=150000 min=50.00 max=150.00
+
+ shows that on sdb, the controller is enabled, will consider
+ the device saturated if the 95th percentile of read completion
+ latencies is above 75ms or write 150ms, and adjust the overall
+ IO issue rate between 50% and 150% accordingly.
+
+ The lower the saturation point, the better the latency QoS at
+ the cost of aggregate bandwidth. The narrower the allowed
+ adjustment range between "min" and "max", the more conformant
+ to the cost model the IO behavior. Note that the IO issue
+ base rate may be far off from 100% and setting "min" and "max"
+ blindly can lead to a significant loss of device capacity or
+ control quality. "min" and "max" are useful for regulating
+ devices which show wide temporary behavior changes - e.g. an
+ SSD which accepts writes at the line speed for a while and
+ then completely stalls for multiple seconds.
+
+ When "ctrl" is "auto", the parameters are controlled by the
+ kernel and may change automatically. Setting "ctrl" to "user"
+ or setting any of the percentile and latency parameters puts
+ it into "user" mode and disables the automatic changes. The
+ automatic mode can be restored by setting "ctrl" to "auto".
+
+ io.weight.cost_model
+ A read-write nested-keyed file which exists only on the root
+ cgroup.
+
+ This file configures the cost model of the IO cost model based
+ proportional controller (CONFIG_BLK_CGROUP_IOWEIGHT). Lines
+ are keyed by $MAJ:$MIN device numbers and not ordered. The
+ line for a given device is populated on the first write for
+ the device on "io.weight.qos" or "io.weight.cost_model". The
+ following nested keys are defined.
+
+ ===== ================================
+ ctrl "auto" or "user"
+ model The cost model in use - "linear"
+ ===== ================================
+
+ When "ctrl" is "auto", the kernel may change all parameters
+ dynamically. When "ctrl" is set to "user" or any other
+ parameters are written to, "ctrl" becomes "user" and the
+ automatic changes are disabled.
+
+ When "model" is "linear", the following model parameters are
+ defined.
+
+ ============= ========================================
+ [r|w]bps The maximum sequential IO throughput
+ [r|w]seqiops The maximum 4k sequential IOs per second
+ [r|w]randiops The maximum 4k random IOs per second
+ ============= ========================================
+
+ From the above, the builtin linear model determines the base
+ costs of a sequential and random IO and the cost coefficient
+ for the IO size. While simple, this model can cover most
+ common device classes acceptably.
+
+ The IO cost model isn't expected to be accurate in an absolute
+ sense and is scaled to the device behavior dynamically.
+
io.weight
A read-write flat-keyed file which exists on non-root cgroups.
The default is "default 100".
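The weight semantics above can be illustrated with a hedged userspace
sketch (not kernel code): among siblings that are actively issuing IOs,
each cgroup's share is its weight divided by the sum of the active
siblings' weights, expressed here in the same 1 << 16 fixed-point scale
the controller uses internally.

```c
#include <assert.h>
#include <stdint.h>

/* Fixed-point "whole" used for hierarchical weights (100% == 1 << 16). */
#define HWEIGHT_WHOLE	(1U << 16)

/* Share of one active sibling given all active siblings' weights. */
static uint32_t sibling_share(uint32_t weight, const uint32_t *active,
			      int nr_active)
{
	uint64_t sum = 0;
	int i;

	for (i = 0; i < nr_active; i++)
		sum += active[i];
	return sum ? (uint32_t)((uint64_t)weight * HWEIGHT_WHOLE / sum) : 0;
}
```

For two active siblings weighted 100 and 300, the shares come out to 25%
and 75% of HWEIGHT_WHOLE; idle siblings simply drop out of the sum.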
diff --git a/block/Kconfig b/block/Kconfig
index 2466dcc3ef1d..15b3de28a264 100644
--- a/block/Kconfig
+++ b/block/Kconfig
@@ -132,6 +132,15 @@ config BLK_CGROUP_IOLATENCY

Note, this is an experimental interface and could be changed someday.

+config BLK_CGROUP_IOWEIGHT
+ bool "Enable support for weight based cgroup IO protection"
+ depends on BLK_CGROUP=y
+ ---help---
+ Enabling this option enables the .weight interface for IO throttling.
+ The IO controller will attempt to maintain an IO distribution between
+ different groups based on their share of the overall weight
+ distribution.
+
config BLK_WBT_MQ
bool "Multiqueue writeback throttling"
default y
diff --git a/block/Makefile b/block/Makefile
index eee1b4ceecf9..e8a8ef16dbff 100644
--- a/block/Makefile
+++ b/block/Makefile
@@ -18,6 +18,7 @@ obj-$(CONFIG_BLK_DEV_BSGLIB) += bsg-lib.o
obj-$(CONFIG_BLK_CGROUP) += blk-cgroup.o
obj-$(CONFIG_BLK_DEV_THROTTLING) += blk-throttle.o
obj-$(CONFIG_BLK_CGROUP_IOLATENCY) += blk-iolatency.o
+obj-$(CONFIG_BLK_CGROUP_IOWEIGHT) += blk-ioweight.o
obj-$(CONFIG_MQ_IOSCHED_DEADLINE) += mq-deadline.o
obj-$(CONFIG_MQ_IOSCHED_KYBER) += kyber-iosched.o
bfq-y := bfq-iosched.o bfq-wf2q.o bfq-cgroup.o
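The central charging step in the new blk-ioweight.c scales each IO's
absolute cost by the inverse of the cgroup's hierarchical inuse weight,
so a group holding a smaller fraction of the device pays proportionally
more vtime per IO. Below is a minimal userspace sketch of that scaling,
mirroring the shape of the abs_cost_to_cost() helper added further down.

```c
#include <assert.h>
#include <stdint.h>

#define HWEIGHT_WHOLE	(1U << 16)

static uint64_t div_ru64(uint64_t a, uint64_t b)
{
	return (a + b - 1) / b;	/* round-up, like DIV64_U64_ROUND_UP */
}

/*
 * Scale @abs_cost to the inverse of @hw_inuse: with 50% of the device a
 * group pays 2x the absolute cost, with 12.5% it pays 8x, matching the
 * "vtime runs slower in inverse proportion to hweight" rule.
 */
static uint64_t abs_cost_to_cost(uint64_t abs_cost, uint32_t hw_inuse)
{
	return div_ru64(abs_cost * HWEIGHT_WHOLE, hw_inuse);
}
```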
diff --git a/block/blk-ioweight.c b/block/blk-ioweight.c
new file mode 100644
index 000000000000..d10249f5774e
--- /dev/null
+++ b/block/blk-ioweight.c
@@ -0,0 +1,2356 @@
+/* SPDX-License-Identifier: GPL-2.0
+ *
+ * IO cost model based work-conserving proportional controller.
+ *
+ * Copyright (C) 2019 Tejun Heo <[email protected]>
+ * Copyright (C) 2019 Andy Newell <[email protected]>
+ * Copyright (C) 2019 Facebook
+ *
+ * One challenge of controlling IO resources is the lack of trivially
+ * observable cost metric. This is distinguished from CPU and memory where
+ * wallclock time and the number of bytes can serve as accurate enough
+ * approximations.
+ *
+ * Bandwidth and iops are the most commonly used metrics for IO devices but
+ * depending on the type and specifics of the device, different IO patterns
+ * easily lead to multiple orders of magnitude variations rendering them
+ * useless for the purpose of IO capacity distribution. While on-device
+ * time, with a lot of crutches, could serve as a useful approximation for
+ * non-queued rotational devices, this is no longer viable with modern
+ * devices, even the rotational ones.
+ *
+ * While there is no cost metric we can trivially observe, it isn't a
+ * complete mystery. For example, on a rotational device, seek cost
+ * dominates while a contiguous transfer contributes a smaller amount
+ * proportional to the size. If we can characterize at least the relative
+ * costs of these different types of IOs, it should be possible to
+ * implement a reasonable work-conserving proportional IO resource
+ * distribution.
+ *
+ * 1. IO Cost Model
+ *
+ * IO cost model estimates the cost of an IO given its basic parameters and
+ * history (e.g. the end sector of the last IO). The cost is measured in
+ * device time. If a given IO is estimated to cost 10ms, the device should
+ * be able to process ~100 of those IOs in a second.
+ *
+ * Currently, there's only one builtin cost model - linear. Each IO is
+ * classified as sequential or random and given a base cost accordingly.
+ * On top of that, a size cost proportional to the length of the IO is
+ * added. While simple, this model captures the operational
+ * characteristics of a wide variety of devices well enough. Default
+ * parameters for several different classes of devices are provided and the
+ * parameters can be configured from userspace via
+ * /sys/block/DEV/queue/io_cost_model.
+ *
+ * 2. Control Strategy
+ *
+ * The device virtual time (vtime) is used as the primary control metric.
+ * The control strategy is composed of the following three parts.
+ *
+ * 2-1. Vtime Distribution
+ *
+ * When a cgroup becomes active in terms of IOs, its hierarchical share is
+ * calculated. Please consider the following hierarchy where the numbers
+ * inside parentheses denote the configured weights.
+ *
+ * root
+ * / \
+ * A (w:100) B (w:300)
+ * / \
+ * A0 (w:100) A1 (w:100)
+ *
+ * If B is idle and only A0 and A1 are actively issuing IOs, as the two are
+ * of equal weight, each gets 50% share. If then B starts issuing IOs, B
+ * gets 300/(100+300) or 75% share, and A0 and A1 equally split the rest,
+ * 12.5% each. The distribution mechanism only cares about these flattened
+ * shares. They're called hweights (hierarchical weights) and always add
+ * up to 1 (HWEIGHT_WHOLE).
+ *
+ * A given cgroup's vtime runs slower in inverse proportion to its hweight.
+ * For example, with 12.5% weight, A0's time runs 8 times slower (100/12.5)
+ * against the device vtime - an IO which takes 10ms on the underlying
+ * device is considered to take 80ms on A0.
+ *
+ * This constitutes the basis of IO capacity distribution. Each cgroup's
+ * vtime is running at a rate determined by its hweight. A cgroup tracks
+ * the vtime consumed by past IOs and can issue a new IO iff doing so
+ * wouldn't outrun the current device vtime. Otherwise, the IO is
+ * suspended until the vtime has progressed enough to cover it.
+ *
+ * 2-2. Vrate Adjustment
+ *
+ * It's unrealistic to expect the cost model to be perfect. There are too
+ * many devices and even on the same device the overall performance
+ * fluctuates depending on numerous factors such as IO mixture and device
+ * internal garbage collection. The controller needs to adapt dynamically.
+ *
+ * This is achieved by adjusting the overall IO rate according to how busy
+ * the device is. If the device becomes overloaded, we're sending down too
+ * many IOs and should generally slow down. If there are waiting issuers
+ * but the device isn't saturated, we're issuing too few and should
+ * generally speed up.
+ *
+ * To slow down, we lower the vrate - the rate at which the device vtime
+ * passes compared to the wall clock. For example, if the vtime is running
+ * at the vrate of 75%, all cgroups added up would only be able to issue
+ * 750ms worth of IOs per second, and vice-versa for speeding up.
+ *
+ * Device busyness is determined using two criteria - rq wait and
+ * completion latencies.
+ *
+ * When a device gets saturated, the on-device and then the request queues
+ * fill up and a bio which is ready to be issued has to wait for a request
+ * to become available. When this delay becomes noticeable, it's a clear
+ * indication that the device is saturated and we lower the vrate. This
+ * saturation signal is fairly conservative as it only triggers when both
+ * hardware and software queues are filled up, and is used as the default
+ * busy signal.
+ *
+ * As devices can have deep queues and be unfair in how the queued commands
+ * are executed, solely depending on rq wait may not result in satisfactory
+ * control quality. For a better control quality, completion latency QoS
+ * parameters can be configured so that the device is considered saturated
+ * if N'th percentile completion latency rises above the set point.
+ *
+ * The completion latency requirements are a function of both the
+ * underlying device characteristics and the desired IO latency quality of
+ * service. There is an inherent trade-off - the tighter the latency QoS,
+ * the higher the bandwidth loss. Latency QoS is disabled by default
+ * and can be set through /sys/fs/cgroup/io.weight.qos.
+ *
+ * 2-3. Work Conservation
+ *
+ * Imagine two cgroups A and B with equal weights. A is issuing a small IO
+ * periodically while B is sending out enough parallel IOs to saturate the
+ * device on its own. Let's say A's usage amounts to 100ms worth of IO
+ * cost per second, i.e., 10% of the device capacity. The naive
+ * distribution of half and half would lead to 60% utilization of the
+ * device, a significant reduction in the total amount of work done
+ * compared to free-for-all competition. This is too high a cost to pay
+ * for IO control.
+ *
+ * To conserve the total amount of work done, we keep track of how much
+ * each active cgroup is actually using and yield part of its weight if
+ * there are other cgroups which can make use of it. In the above case,
+ * A's weight will be lowered so that it hovers above the actual usage and
+ * B would be able to use the rest.
+ *
+ * As we don't want to penalize a cgroup for donating its weight, the
+ * surplus weight adjustment factors in a margin and has an immediate
+ * snapback mechanism in case the cgroup needs more IO vtime for itself.
+ *
+ * Note that adjusting down surplus weights has the same effects as
+ * accelerating vtime for other cgroups and work conservation can also be
+ * implemented by adjusting vrate dynamically. However, determining who
+ * can donate how much and who should take back how much requires hweight
+ * propagation anyway, making this easier to implement and understand as a
+ * separate mechanism.
+ */
+
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/timer.h>
+#include <linux/time64.h>
+#include <linux/parser.h>
+#include <linux/sched/signal.h>
+#include <linux/blk-cgroup.h>
+#include "blk-rq-qos.h"
+#include "blk-stat.h"
+#include "blk-wbt.h"
+
+#ifdef CONFIG_TRACEPOINTS
+
+/* copied from TRACE_CGROUP_PATH, see cgroup-internal.h */
+#define TRACE_IOWG_PATH_LEN 1024
+static DEFINE_SPINLOCK(trace_iowg_path_lock);
+static char trace_iowg_path[TRACE_IOWG_PATH_LEN];
+
+#define TRACE_IOWG_PATH(type, iowg, ...) \
+ do { \
+ unsigned long flags; \
+ if (trace_ioweight_##type##_enabled()) { \
+ spin_lock_irqsave(&trace_iowg_path_lock, flags); \
+ cgroup_path(iowg_to_blkg(iowg)->blkcg->css.cgroup, \
+ trace_iowg_path, TRACE_IOWG_PATH_LEN); \
+ trace_ioweight_##type(iowg, trace_iowg_path, \
+ ##__VA_ARGS__); \
+ spin_unlock_irqrestore(&trace_iowg_path_lock, flags); \
+ } \
+ } while (0)
+
+#endif /* CONFIG_TRACEPOINTS */
+
+enum {
+ MILLION = 1000000,
+
+ /* timer period is calculated from latency requirements, bound it */
+ MIN_PERIOD = USEC_PER_MSEC,
+ MAX_PERIOD = USEC_PER_SEC,
+
+ /*
+ * A cgroup's vtime can run 50% behind the device vtime, which
+ * serves as its IO credit buffer. Surplus weight adjustment is
+ * immediately canceled if the vtime margin runs below 10%.
+ */
+ MARGIN_PCT = 50,
+ INUSE_MARGIN_PCT = 10,
+
+ /* Have some play in waitq timer operations */
+ WAITQ_TIMER_MARGIN_PCT = 5,
+
+ /*
+ * vtime can wrap well within a reasonable uptime when vrate is
+ * consistently raised. Don't trust recorded cgroup vtime if the
+ * period counter indicates that it's older than 5mins.
+ */
+ VTIME_VALID_DUR = 300 * USEC_PER_SEC,
+
+ /*
+ * Remember the past three non-zero usages and use the max for
+ * surplus calculation. Three slots guarantee that we remember one
+ * full period usage from the last active stretch even after
+ * partial deactivation and re-activation periods. Don't start
+ * giving away weight before collecting two data points to prevent
+ * hweight adjustments based on one partial activation period.
+ */
+ NR_USAGE_SLOTS = 3,
+ MIN_VALID_USAGES = 2,
+
+ /* 1/64k is granular enough and can easily be handled w/ u32 */
+ HWEIGHT_WHOLE = 1 << 16,
+
+ /*
+ * As vtime is used to calculate the cost of each IO, it needs to
+ * be fairly high precision. For example, it should be able to
+ * represent the cost of a single page worth of discard with
+ * sufficient accuracy. At the same time, it should be able to
+ * represent reasonably long enough durations to be useful and
+ * convenient during operation.
+ *
+ * 1s worth of vtime is 2^37. This gives us both sub-nanosecond
+ * granularity and days of wrap-around time even at extreme vrates.
+ */
+ VTIME_PER_SEC_SHIFT = 37,
+ VTIME_PER_SEC = 1LLU << VTIME_PER_SEC_SHIFT,
+ VTIME_PER_USEC = VTIME_PER_SEC / USEC_PER_SEC,
+
+ /* bound vrate adjustments within two orders of magnitude */
+ VRATE_MIN_PPM = 10000, /* 1% */
+ VRATE_MAX_PPM = 100000000, /* 10000% */
+
+ VRATE_MIN = VTIME_PER_USEC * VRATE_MIN_PPM / MILLION,
+ VRATE_CLAMP_ADJ_PCT = 4,
+
+ /* if IOs end up waiting for requests, issue less */
+ RQ_WAIT_BUSY_PCT = 5,
+
+ /* unbusy hysteresis */
+ UNBUSY_THR_PCT = 75,
+
+ /* don't let cmds which take a very long time pin lagging for too long */
+ MAX_LAGGING_PERIODS = 10,
+
+ /*
+ * If usage% * 1.25 + 2% is lower than hweight% by more than 3%,
+ * donate the surplus.
+ */
+ SURPLUS_SCALE_PCT = 125, /* * 125% */
+ SURPLUS_SCALE_ABS = HWEIGHT_WHOLE / 50, /* + 2% */
+ SURPLUS_MIN_ADJ_DELTA = HWEIGHT_WHOLE / 33, /* 3% */
+
+ /* switch iff the conditions are met for longer than this */
+ AUTOP_CYCLE_NSEC = 10 * NSEC_PER_SEC,
+
+ /*
+ * Count IO size in 4k pages. The 12bit shift helps keep the
+ * size-proportional components of the cost calculation within a
+ * similar number of digits to the per-IO cost components.
+ */
+ IOW_PAGE_SHIFT = 12,
+ IOW_PAGE_SIZE = 1 << IOW_PAGE_SHIFT,
+ IOW_SECT_TO_PAGE_SHIFT = IOW_PAGE_SHIFT - SECTOR_SHIFT,
+
+ /* if further than 16M apart, consider randio for the linear model */
+ LCOEF_RANDIO_PAGES = 4096,
+};
+
+enum iow_running {
+ IOW_IDLE,
+ IOW_RUNNING,
+ IOW_STOP,
+};
+
+/* IO latency QoS controls including per-dev enable of the whole controller */
+enum {
+ QOS_ENABLE,
+ QOS_CTRL,
+ NR_QOS_CTRL_PARAMS,
+};
+
+/* IO latency QoS params */
+enum {
+ QOS_RPPM,
+ QOS_RLAT,
+ QOS_WPPM,
+ QOS_WLAT,
+ QOS_MIN,
+ QOS_MAX,
+ NR_QOS_PARAMS,
+};
+
+/* cost model controls */
+enum {
+ COST_CTRL,
+ COST_MODEL,
+ NR_COST_CTRL_PARAMS,
+};
+
+/* builtin linear cost model coefficients */
+enum {
+ I_LCOEF_RBPS,
+ I_LCOEF_RSEQIOPS,
+ I_LCOEF_RRANDIOPS,
+ I_LCOEF_WBPS,
+ I_LCOEF_WSEQIOPS,
+ I_LCOEF_WRANDIOPS,
+ NR_I_LCOEFS,
+};
+
+enum {
+ LCOEF_RPAGE,
+ LCOEF_RSEQIO,
+ LCOEF_RRANDIO,
+ LCOEF_WPAGE,
+ LCOEF_WSEQIO,
+ LCOEF_WRANDIO,
+ NR_LCOEFS,
+};
+
+enum {
+ AUTOP_INVALID,
+ AUTOP_HDD,
+ AUTOP_SSD_QD1,
+ AUTOP_SSD_DFL,
+ AUTOP_SSD_FAST,
+};
+
+struct iow_gq;
+
+struct iow_params {
+ u32 qos[NR_QOS_PARAMS];
+ u64 i_lcoefs[NR_I_LCOEFS];
+ u64 lcoefs[NR_LCOEFS];
+ u32 too_fast_vrate_pct;
+ u32 too_slow_vrate_pct;
+};
+
+struct iow_missed {
+ u32 nr_met;
+ u32 nr_missed;
+ u32 last_met;
+ u32 last_missed;
+};
+
+struct iow_pcpu_stat {
+ struct iow_missed missed[2];
+
+ u64 rq_wait_ns;
+ u64 last_rq_wait_ns;
+};
+
+/* per device */
+struct iow {
+ struct rq_qos rqos;
+
+ bool enabled;
+
+ struct iow_params params;
+ u32 period_us;
+ u32 margin_us;
+ u64 vrate_min;
+ u64 vrate_max;
+
+ spinlock_t lock;
+ struct timer_list timer;
+ struct list_head active_iowgs; /* active cgroups */
+ struct iow_pcpu_stat __percpu *pcpu_stat;
+
+ enum iow_running running;
+ atomic64_t vtime_rate;
+
+ seqcount_t period_seqcount;
+ u32 period_at; /* wallclock starttime */
+ u64 period_at_vtime; /* vtime starttime */
+
+ atomic64_t cur_period; /* inc'd each period */
+ int busy_level; /* saturation history */
+
+ u64 inuse_margin_vtime;
+ bool weights_updated;
+ atomic_t hweight_gen; /* for lazy hweights */
+
+ u64 autop_too_fast_at;
+ u64 autop_too_slow_at;
+ int autop_idx;
+ bool user_qos_params:1;
+ bool user_cost_model:1;
+};
+
+/* per device-cgroup pair */
+struct iow_gq {
+ struct blkg_policy_data pd;
+ struct iow *iow;
+
+ /*
+ * An iowg can get its weight from two sources - an explicit
+ * per-device-cgroup configuration or the default weight of the
+ * cgroup. `cfg_weight` is the explicit per-device-cgroup
+ * configuration. `weight` is the effective weight considering
+ * both sources.
+ *
+ * When an idle cgroup becomes active its `active` goes from 0 to
+ * `weight`. `inuse` is the surplus adjusted active weight.
+ * `active` and `inuse` are used to calculate `hweight_active` and
+ * `hweight_inuse`.
+ *
+ * `last_inuse` remembers `inuse` while an iowg is idle to persist
+ * surplus adjustments.
+ */
+ u32 cfg_weight;
+ u32 weight;
+ u32 active;
+ u32 inuse;
+ u32 last_inuse;
+
+ sector_t cursor; /* to detect randio */
+
+ /*
+ * `vtime` is this iowg's vtime cursor which progresses as IOs are
+ * issued. If lagging behind device vtime, the delta represents
+ * the currently available IO budget. If running ahead, the
+ * overage.
+ *
+ * `done_vtime` is the same but progressed on completion rather
+ * than issue. The delta behind `vtime` represents the cost of
+ * currently in-flight IOs.
+ *
+ * `last_vtime` is used to remember `vtime` at the end of the last
+ * period to calculate utilization.
+ */
+ atomic64_t vtime;
+ atomic64_t done_vtime;
+ u64 last_vtime;
+
+ /*
+ * The period this iowg was last active in. Used for deactivation
+ * and invalidating `vtime`.
+ */
+ atomic64_t active_period;
+ struct list_head active_list;
+
+ /* see __propagate_active_weight() and current_hweight() for details */
+ u64 child_active_sum;
+ u64 child_inuse_sum;
+ int hweight_gen;
+ u32 hweight_active;
+ u32 hweight_inuse;
+ bool has_surplus;
+
+ struct wait_queue_head waitq;
+ struct hrtimer waitq_timer;
+ struct hrtimer delay_timer;
+
+ /* usage is recorded as fractions of HWEIGHT_WHOLE */
+ int usage_idx;
+ u32 usages[NR_USAGE_SLOTS];
+
+ /* this iowg's depth in the hierarchy and ancestors including self */
+ int level;
+ struct iow_gq *ancestors[];
+};
+
+/* per cgroup */
+struct iow_cgrp {
+ struct blkcg_policy_data cpd;
+ unsigned int dfl_weight;
+};
+
+struct iow_now {
+ u64 now_ns;
+ u32 now;
+ u64 vnow;
+ u64 vrate;
+};
+
+struct iowg_wait {
+ struct wait_queue_entry wait;
+ struct bio *bio;
+ u64 abs_cost;
+ bool committed;
+};
+
+struct iowg_wake_ctx {
+ struct iow_gq *iowg;
+ u32 hw_inuse;
+ s64 vbudget;
+};
+
+static const struct iow_params autop[] = {
+ [AUTOP_HDD] = {
+ .qos = {
+ [QOS_RLAT] = 50000, /* 50ms */
+ [QOS_WLAT] = 50000,
+ [QOS_MIN] = VRATE_MIN_PPM,
+ [QOS_MAX] = VRATE_MAX_PPM,
+ },
+ .i_lcoefs = {
+ [I_LCOEF_RBPS] = 174019176,
+ [I_LCOEF_RSEQIOPS] = 41708,
+ [I_LCOEF_RRANDIOPS] = 370,
+ [I_LCOEF_WBPS] = 178075866,
+ [I_LCOEF_WSEQIOPS] = 42705,
+ [I_LCOEF_WRANDIOPS] = 378,
+ },
+ },
+ [AUTOP_SSD_QD1] = {
+ .qos = {
+ [QOS_RLAT] = 25000, /* 25ms */
+ [QOS_WLAT] = 25000,
+ [QOS_MIN] = VRATE_MIN_PPM,
+ [QOS_MAX] = VRATE_MAX_PPM,
+ },
+ .i_lcoefs = {
+ [I_LCOEF_RBPS] = 245855193,
+ [I_LCOEF_RSEQIOPS] = 61575,
+ [I_LCOEF_RRANDIOPS] = 6946,
+ [I_LCOEF_WBPS] = 141365009,
+ [I_LCOEF_WSEQIOPS] = 33716,
+ [I_LCOEF_WRANDIOPS] = 26796,
+ },
+ },
+ [AUTOP_SSD_DFL] = {
+ .qos = {
+ [QOS_RLAT] = 25000, /* 25ms */
+ [QOS_WLAT] = 25000,
+ [QOS_MIN] = VRATE_MIN_PPM,
+ [QOS_MAX] = VRATE_MAX_PPM,
+ },
+ .i_lcoefs = {
+ [I_LCOEF_RBPS] = 488636629,
+ [I_LCOEF_RSEQIOPS] = 8932,
+ [I_LCOEF_RRANDIOPS] = 8518,
+ [I_LCOEF_WBPS] = 427891549,
+ [I_LCOEF_WSEQIOPS] = 28755,
+ [I_LCOEF_WRANDIOPS] = 21940,
+ },
+ .too_fast_vrate_pct = 500,
+ },
+ [AUTOP_SSD_FAST] = {
+ .qos = {
+ [QOS_RLAT] = 5000, /* 5ms */
+ [QOS_WLAT] = 5000,
+ [QOS_MIN] = VRATE_MIN_PPM,
+ [QOS_MAX] = VRATE_MAX_PPM,
+ },
+ .i_lcoefs = {
+ [I_LCOEF_RBPS] = 2338259289,
+ [I_LCOEF_RSEQIOPS] = 15336,
+ [I_LCOEF_RRANDIOPS] = 14588,
+ [I_LCOEF_WBPS] = 1897549999,
+ [I_LCOEF_WSEQIOPS] = 51317,
+ [I_LCOEF_WRANDIOPS] = 37157,
+ },
+ .too_slow_vrate_pct = 10,
+ },
+};
+
+/*
+ * vrate adjust percentages indexed by iow->busy_level. We adjust up on
+ * vtime credit shortage and down on device saturation.
+ */
+static u32 vrate_adj_pct[] =
+ { 0, 0, 0, 0,
+ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
+ 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
+ 4, 4, 4, 4, 4, 4, 4, 4, 8, 8, 8, 8, 8, 8, 8, 8, 16 };
+
+static struct blkcg_policy blkcg_policy_iow;
+
+/* accessors and helpers */
+static struct iow *rqos_to_iow(struct rq_qos *rqos)
+{
+ return container_of(rqos, struct iow, rqos);
+}
+
+static struct iow *q_to_iow(struct request_queue *q)
+{
+ return rqos_to_iow(rq_qos_id(q, RQ_QOS_WEIGHT));
+}
+
+static const char *q_name(struct request_queue *q)
+{
+ if (test_bit(QUEUE_FLAG_REGISTERED, &q->queue_flags))
+ return kobject_name(q->kobj.parent);
+ else
+ return "<unknown>";
+}
+
+static const char *iow_name(struct iow *iow)
+{
+ return q_name(iow->rqos.q);
+}
+
+static struct iow_gq *pd_to_iowg(struct blkg_policy_data *pd)
+{
+ return pd ? container_of(pd, struct iow_gq, pd) : NULL;
+}
+
+static struct iow_gq *blkg_to_iowg(struct blkcg_gq *blkg)
+{
+ return pd_to_iowg(blkg_to_pd(blkg, &blkcg_policy_iow));
+}
+
+struct blkcg_gq *iowg_to_blkg(struct iow_gq *iowg)
+{
+ return pd_to_blkg(&iowg->pd);
+}
+
+static struct iow_cgrp *blkcg_to_iowc(struct blkcg *blkcg)
+{
+ return container_of(blkcg_to_cpd(blkcg, &blkcg_policy_iow),
+ struct iow_cgrp, cpd);
+}
+
+/*
+ * Scale @abs_cost to the inverse of @hw_inuse. The lower the hierarchical
+ * weight, the more expensive each IO.
+ */
+static u64 abs_cost_to_cost(u64 abs_cost, u32 hw_inuse)
+{
+ return DIV64_U64_ROUND_UP(abs_cost * HWEIGHT_WHOLE, hw_inuse);
+}
+
+static void iowg_commit_bio(struct iow_gq *iowg, struct bio *bio, u64 cost)
+{
+ bio->bi_ioweight_cost = cost;
+ atomic64_add(cost, &iowg->vtime);
+}
+
+#define CREATE_TRACE_POINTS
+#include <trace/events/ioweight.h>
+
+/* latency QoS params changed, update period_us and all the dependent params */
+static void iow_refresh_period_us(struct iow *iow)
+{
+ u32 ppm, lat, multi, period_us;
+
+ lockdep_assert_held(&iow->lock);
+
+ /* pick the higher latency target */
+ if (iow->params.qos[QOS_RLAT] >= iow->params.qos[QOS_WLAT]) {
+ ppm = iow->params.qos[QOS_RPPM];
+ lat = iow->params.qos[QOS_RLAT];
+ } else {
+ ppm = iow->params.qos[QOS_WPPM];
+ lat = iow->params.qos[QOS_WLAT];
+ }
+
+ /*
+ * We want the period to be long enough to contain a healthy number
+ * of IOs while short enough for granular control. Define it as a
+ * multiple of the latency target. Ideally, the multiplier should
+ * be scaled according to the percentile so that it would nominally
+ * contain a certain number of requests. Let's be simpler and
+ * scale it linearly so that it's 2x >= pct(90) and 10x at pct(50).
+ */
+ if (ppm)
+ multi = max_t(u32, (MILLION - ppm) / 50000, 2);
+ else
+ multi = 2;
+ period_us = multi * lat;
+ period_us = clamp_t(u32, period_us, MIN_PERIOD, MAX_PERIOD);
+
+ /* calculate dependent params */
+ iow->period_us = period_us;
+ iow->margin_us = period_us * MARGIN_PCT / 100;
+ iow->inuse_margin_vtime = DIV64_U64_ROUND_UP(
+ period_us * VTIME_PER_USEC * INUSE_MARGIN_PCT, 100);
+}
+
+static int iow_autop_idx(struct iow *iow)
+{
+ int idx = iow->autop_idx;
+ const struct iow_params *p = &autop[idx];
+ u32 vrate_pct;
+ u64 now_ns;
+
+ /* rotational? */
+ if (!blk_queue_nonrot(iow->rqos.q))
+ return AUTOP_HDD;
+
+ /* handle SATA SSDs w/ broken NCQ */
+ if (blk_queue_depth(iow->rqos.q) == 1)
+ return AUTOP_SSD_QD1;
+
+ /* use one of the normal ssd sets */
+ if (idx < AUTOP_SSD_DFL)
+ return AUTOP_SSD_DFL;
+
+ /* if user is overriding anything, maintain what was there */
+ if (iow->user_qos_params || iow->user_cost_model)
+ return idx;
+
+ /* step up/down based on the vrate */
+ vrate_pct = div64_u64(atomic64_read(&iow->vtime_rate) * 100,
+ VTIME_PER_USEC);
+ now_ns = ktime_get_ns();
+
+ if (p->too_fast_vrate_pct && p->too_fast_vrate_pct <= vrate_pct) {
+ if (!iow->autop_too_fast_at)
+ iow->autop_too_fast_at = now_ns;
+ if (now_ns - iow->autop_too_fast_at >= AUTOP_CYCLE_NSEC)
+ return idx + 1;
+ } else {
+ iow->autop_too_fast_at = 0;
+ }
+
+ if (p->too_slow_vrate_pct && p->too_slow_vrate_pct >= vrate_pct) {
+ if (!iow->autop_too_slow_at)
+ iow->autop_too_slow_at = now_ns;
+ if (now_ns - iow->autop_too_slow_at >= AUTOP_CYCLE_NSEC)
+ return idx - 1;
+ } else {
+ iow->autop_too_slow_at = 0;
+ }
+
+ return idx;
+}
+
+/*
+ * Take the following as input
+ *
+ * @bps maximum sequential throughput
+ * @seqiops maximum sequential 4k iops
+ * @randiops maximum random 4k iops
+ *
+ * and calculate the linear model cost coefficients.
+ *
+ * *@page per-page cost 1s / (@bps / 4096)
+ * *@seqio base cost of a seq IO max((1s / @seqiops) - *@page, 0)
+ * *@randio base cost of a rand IO max((1s / @randiops) - *@page, 0)
+ */
+static void calc_lcoefs(u64 bps, u64 seqiops, u64 randiops,
+ u64 *page, u64 *seqio, u64 *randio)
+{
+ u64 v;
+
+ *page = *seqio = *randio = 0;
+
+ if (bps)
+ *page = DIV64_U64_ROUND_UP(VTIME_PER_SEC,
+ DIV_ROUND_UP_ULL(bps, IOW_PAGE_SIZE));
+
+ if (seqiops) {
+ v = DIV64_U64_ROUND_UP(VTIME_PER_SEC, seqiops);
+ if (v > *page)
+ *seqio = v - *page;
+ }
+
+ if (randiops) {
+ v = DIV64_U64_ROUND_UP(VTIME_PER_SEC, randiops);
+ if (v > *page)
+ *randio = v - *page;
+ }
+}
+
+static void iow_refresh_lcoefs(struct iow *iow)
+{
+ u64 *u = iow->params.i_lcoefs;
+ u64 *c = iow->params.lcoefs;
+
+ calc_lcoefs(u[I_LCOEF_RBPS], u[I_LCOEF_RSEQIOPS], u[I_LCOEF_RRANDIOPS],
+ &c[LCOEF_RPAGE], &c[LCOEF_RSEQIO], &c[LCOEF_RRANDIO]);
+ calc_lcoefs(u[I_LCOEF_WBPS], u[I_LCOEF_WSEQIOPS], u[I_LCOEF_WRANDIOPS],
+ &c[LCOEF_WPAGE], &c[LCOEF_WSEQIO], &c[LCOEF_WRANDIO]);
+}
+
+static bool iow_refresh_params(struct iow *iow, bool force)
+{
+ const struct iow_params *p;
+ int idx;
+
+ lockdep_assert_held(&iow->lock);
+
+ idx = iow_autop_idx(iow);
+ p = &autop[idx];
+
+ if (idx == iow->autop_idx && !force)
+ return false;
+
+ if (idx != iow->autop_idx)
+ atomic64_set(&iow->vtime_rate, VTIME_PER_USEC);
+
+ iow->autop_idx = idx;
+ iow->autop_too_fast_at = 0;
+ iow->autop_too_slow_at = 0;
+
+ if (!iow->user_qos_params)
+ memcpy(iow->params.qos, p->qos, sizeof(p->qos));
+ if (!iow->user_cost_model)
+ memcpy(iow->params.i_lcoefs, p->i_lcoefs, sizeof(p->i_lcoefs));
+
+ iow_refresh_period_us(iow);
+ iow_refresh_lcoefs(iow);
+
+ iow->vrate_min = DIV64_U64_ROUND_UP((u64)iow->params.qos[QOS_MIN] *
+ VTIME_PER_USEC, MILLION);
+ iow->vrate_max = div64_u64((u64)iow->params.qos[QOS_MAX] *
+ VTIME_PER_USEC, MILLION);
+
+ return true;
+}
+
+/* take a snapshot of the current [v]time and vrate */
+static void iow_now(struct iow *iow, struct iow_now *now)
+{
+ unsigned seq;
+
+ now->now_ns = ktime_get();
+ now->now = ktime_to_us(now->now_ns);
+ now->vrate = atomic64_read(&iow->vtime_rate);
+
+ /*
+ * The current vtime is
+ *
+ * vtime at period start + (wallclock time since the start) * vrate
+ *
+ * As a consistent snapshot of `period_at_vtime` and `period_at` is
+ * needed, they're seqcount protected.
+ */
+ do {
+ seq = read_seqcount_begin(&iow->period_seqcount);
+ now->vnow = iow->period_at_vtime +
+ (now->now - iow->period_at) * now->vrate;
+ } while (read_seqcount_retry(&iow->period_seqcount, seq));
+}
+
+static void iow_start_period(struct iow *iow, struct iow_now *now)
+{
+ lockdep_assert_held(&iow->lock);
+ WARN_ON_ONCE(iow->running != IOW_RUNNING);
+
+ write_seqcount_begin(&iow->period_seqcount);
+ iow->period_at = now->now;
+ iow->period_at_vtime = now->vnow;
+ write_seqcount_end(&iow->period_seqcount);
+
+ iow->timer.expires = jiffies + usecs_to_jiffies(iow->period_us);
+ add_timer(&iow->timer);
+}
+
+/*
+ * Update @iowg's `active` and `inuse` to @active and @inuse, update level
+ * weight sums and propagate upwards accordingly.
+ */
+static void __propagate_active_weight(struct iow_gq *iowg, u32 active, u32 inuse)
+{
+ struct iow *iow = iowg->iow;
+ int lvl;
+
+ lockdep_assert_held(&iow->lock);
+
+ inuse = min(active, inuse);
+
+ for (lvl = iowg->level - 1; lvl >= 0; lvl--) {
+ struct iow_gq *parent = iowg->ancestors[lvl];
+ struct iow_gq *child = iowg->ancestors[lvl + 1];
+ u32 parent_active = 0, parent_inuse = 0;
+
+ /* update the level sums */
+ parent->child_active_sum += (s32)(active - child->active);
+ parent->child_inuse_sum += (s32)(inuse - child->inuse);
+ /* apply the updates */
+ child->active = active;
+ child->inuse = inuse;
+
+ /*
+ * The delta between the inuse and active sums indicates how
+ * much weight is being given away. Parent's inuse
+ * and active should reflect the ratio.
+ */
+ if (parent->child_active_sum) {
+ parent_active = parent->weight;
+ parent_inuse = DIV64_U64_ROUND_UP(
+ parent_active * parent->child_inuse_sum,
+ parent->child_active_sum);
+ }
+
+ /* do we need to keep walking up? */
+ if (parent_active == parent->active &&
+ parent_inuse == parent->inuse)
+ break;
+
+ active = parent_active;
+ inuse = parent_inuse;
+ }
+
+ iow->weights_updated = true;
+}
+
+static void commit_active_weights(struct iow *iow)
+{
+ lockdep_assert_held(&iow->lock);
+
+ if (iow->weights_updated) {
+ /* paired with rmb in current_hweight(), see there */
+ smp_wmb();
+ atomic_inc(&iow->hweight_gen);
+ iow->weights_updated = false;
+ }
+}
+
+static void propagate_active_weight(struct iow_gq *iowg, u32 active, u32 inuse)
+{
+ __propagate_active_weight(iowg, active, inuse);
+ commit_active_weights(iowg->iow);
+}
+
+static void current_hweight(struct iow_gq *iowg, u32 *hw_activep, u32 *hw_inusep)
+{
+ struct iow *iow = iowg->iow;
+ int lvl;
+ u32 hwa, hwi;
+ int iow_gen;
+
+ /* hot path - if uptodate, use cached */
+ iow_gen = atomic_read(&iow->hweight_gen);
+ if (iow_gen == iowg->hweight_gen)
+ goto out;
+
+ /*
+ * Paired with wmb in commit_active_weights(). If we saw the
+ * updated hweight_gen, all the weight updates from
+ * __propagate_active_weight() are visible too.
+ *
+ * We can race with weight updates during calculation and get it
+ * wrong. However, hweight_gen would have changed and a future
+ * reader will recalculate and we're guaranteed to discard the
+ * wrong result soon.
+ */
+ smp_rmb();
+
+ hwa = hwi = HWEIGHT_WHOLE;
+	for (lvl = 0; lvl < iowg->level; lvl++) {
+ struct iow_gq *parent = iowg->ancestors[lvl];
+ struct iow_gq *child = iowg->ancestors[lvl + 1];
+ u32 active_sum = READ_ONCE(parent->child_active_sum);
+ u32 inuse_sum = READ_ONCE(parent->child_inuse_sum);
+ u32 active = READ_ONCE(child->active);
+ u32 inuse = READ_ONCE(child->inuse);
+
+ if (!active_sum)
+ continue;
+
+ active_sum = max(active, active_sum);
+ hwa = hwa * active / active_sum; /* max 16bits * 10000 */
+
+ inuse_sum = max(inuse, inuse_sum);
+ hwi = hwi * inuse / inuse_sum; /* max 16bits * 10000 */
+ }
+
+ iowg->hweight_active = max_t(u32, hwa, 1);
+ iowg->hweight_inuse = max_t(u32, hwi, 1);
+ iowg->hweight_gen = iow_gen;
+out:
+ if (hw_activep)
+ *hw_activep = iowg->hweight_active;
+ if (hw_inusep)
+ *hw_inusep = iowg->hweight_inuse;
+}
+
+static void weight_updated(struct iow_gq *iowg)
+{
+ struct iow *iow = iowg->iow;
+ struct blkcg_gq *blkg = iowg_to_blkg(iowg);
+ struct iow_cgrp *iowc = blkcg_to_iowc(blkg->blkcg);
+ u32 weight;
+
+ lockdep_assert_held(&iow->lock);
+
+ weight = iowg->cfg_weight ?: iowc->dfl_weight;
+ if (weight != iowg->weight && iowg->active)
+ propagate_active_weight(iowg, weight,
+ DIV64_U64_ROUND_UP(iowg->inuse * weight, iowg->weight));
+ iowg->weight = weight;
+}
+
+static bool iowg_activate(struct iow_gq *iowg, struct iow_now *now)
+{
+ struct iow *iow = iowg->iow;
+ u64 last_period, cur_period, max_period_delta;
+ u64 vtime, vmargin, vmin;
+ int i;
+
+	/*
+	 * If we seem to be already active, just update the stamp to tell
+	 * the timer that we're still active. We don't mind occasional
+	 * races.
+	 */
+ if (!list_empty(&iowg->active_list)) {
+ iow_now(iow, now);
+ cur_period = atomic64_read(&iow->cur_period);
+ if (atomic64_read(&iowg->active_period) != cur_period)
+ atomic64_set(&iowg->active_period, cur_period);
+ return true;
+ }
+
+ /* racy check on internal node IOs, treat as root level IOs */
+ if (iowg->child_active_sum)
+ return false;
+
+ spin_lock_irq(&iow->lock);
+
+ iow_now(iow, now);
+
+ /* update period */
+ cur_period = atomic64_read(&iow->cur_period);
+ last_period = atomic64_read(&iowg->active_period);
+ atomic64_set(&iowg->active_period, cur_period);
+
+	/* already activated or breaking leaf-only constraint? */
+	for (i = iowg->level; i > 0; i--)
+		if (!list_empty(&iowg->ancestors[i]->active_list))
+			goto fail_unlock;
+ if (iowg->child_active_sum)
+ goto fail_unlock;
+
+ /*
+ * vtime may wrap when vrate is raised substantially due to
+ * underestimated IO costs. Look at the period and ignore its
+ * vtime if the iowg has been idle for too long. Also, cap the
+ * budget it can start with to the margin.
+ */
+ max_period_delta = DIV64_U64_ROUND_UP(VTIME_VALID_DUR, iow->period_us);
+ vtime = atomic64_read(&iowg->vtime);
+ vmargin = iow->margin_us * now->vrate;
+ vmin = now->vnow - vmargin;
+
+ if (last_period + max_period_delta < cur_period ||
+ time_before64(vtime, vmin)) {
+ atomic64_add(vmin - vtime, &iowg->vtime);
+ atomic64_add(vmin - vtime, &iowg->done_vtime);
+ vtime = vmin;
+ }
+
+ /* activate, propagate weight and start period timer if not running */
+ iowg->hweight_gen = atomic_read(&iow->hweight_gen);
+ list_add(&iowg->active_list, &iow->active_iowgs);
+ propagate_active_weight(iowg, iowg->weight,
+ iowg->last_inuse ?: iowg->weight);
+
+ TRACE_IOWG_PATH(iowg_activate, iowg, now,
+ last_period, cur_period, vtime);
+
+ iowg->last_vtime = vtime;
+
+ if (iow->running == IOW_IDLE) {
+ iow->running = IOW_RUNNING;
+ iow_start_period(iow, now);
+ }
+
+ spin_unlock_irq(&iow->lock);
+ return true;
+
+fail_unlock:
+ spin_unlock_irq(&iow->lock);
+ return false;
+}
+
+static int iowg_wake_fn(struct wait_queue_entry *wq_entry, unsigned mode,
+ int flags, void *key)
+{
+ struct iowg_wait *wait = container_of(wq_entry, struct iowg_wait, wait);
+ struct iowg_wake_ctx *ctx = (struct iowg_wake_ctx *)key;
+ u64 cost = abs_cost_to_cost(wait->abs_cost, ctx->hw_inuse);
+
+ ctx->vbudget -= cost;
+
+ if (ctx->vbudget < 0)
+ return -1;
+
+ iowg_commit_bio(ctx->iowg, wait->bio, cost);
+
+ /*
+ * autoremove_wake_function() removes the wait entry only when it
+ * actually changed the task state. We want the wait always
+ * removed. Remove explicitly and use default_wake_function().
+ */
+ list_del_init(&wq_entry->entry);
+ wait->committed = true;
+
+ default_wake_function(wq_entry, mode, flags, key);
+ return 0;
+}
+
+static void iowg_kick_waitq(struct iow_gq *iowg, struct iow_now *now)
+{
+ struct iow *iow = iowg->iow;
+ struct iowg_wake_ctx ctx = { .iowg = iowg };
+ u64 margin_ns = (u64)(iow->period_us *
+ WAITQ_TIMER_MARGIN_PCT / 100) * NSEC_PER_USEC;
+ u64 vshortage, expires, oexpires;
+
+ lockdep_assert_held(&iowg->waitq.lock);
+
+ /*
+ * Wake up the ones which are due and see how much vtime we'll need
+ * for the next one.
+ */
+ current_hweight(iowg, NULL, &ctx.hw_inuse);
+ ctx.vbudget = now->vnow - atomic64_read(&iowg->vtime);
+ __wake_up_locked_key(&iowg->waitq, TASK_NORMAL, &ctx);
+ if (!waitqueue_active(&iowg->waitq))
+ return;
+ if (WARN_ON_ONCE(ctx.vbudget >= 0))
+ return;
+
+ /* determine next wakeup, add a quarter margin to guarantee chunking */
+ vshortage = -ctx.vbudget;
+ expires = now->now_ns +
+ DIV64_U64_ROUND_UP(vshortage, now->vrate) * NSEC_PER_USEC;
+ expires += margin_ns / 4;
+
+ /* if already active and close enough, don't bother */
+ oexpires = ktime_to_ns(hrtimer_get_softexpires(&iowg->waitq_timer));
+ if (hrtimer_is_queued(&iowg->waitq_timer) &&
+ abs(oexpires - expires) <= margin_ns / 4)
+ return;
+
+ hrtimer_start_range_ns(&iowg->waitq_timer, ns_to_ktime(expires),
+ margin_ns / 4, HRTIMER_MODE_ABS);
+}
+
+static enum hrtimer_restart iowg_waitq_timer_fn(struct hrtimer *timer)
+{
+ struct iow_gq *iowg = container_of(timer, struct iow_gq, waitq_timer);
+ struct iow_now now;
+ unsigned long flags;
+
+ iow_now(iowg->iow, &now);
+
+ spin_lock_irqsave(&iowg->waitq.lock, flags);
+ iowg_kick_waitq(iowg, &now);
+ spin_unlock_irqrestore(&iowg->waitq.lock, flags);
+
+ return HRTIMER_NORESTART;
+}
+
+static void iowg_kick_delay(struct iow_gq *iowg, struct iow_now *now, u64 cost)
+{
+ struct iow *iow = iowg->iow;
+ struct blkcg_gq *blkg = iowg_to_blkg(iowg);
+ u64 vtime = atomic64_read(&iowg->vtime);
+ u64 vmargin = iow->margin_us * now->vrate;
+ u64 margin_ns = iow->margin_us * NSEC_PER_USEC;
+ u64 expires, oexpires;
+
+ /* clear or maintain depending on the overage */
+ if (time_before_eq64(vtime, now->vnow)) {
+ blkcg_clear_delay(blkg);
+ return;
+ }
+ if (!atomic_read(&blkg->use_delay) &&
+ time_before_eq64(vtime, now->vnow + vmargin))
+ return;
+
+ /* use delay */
+ if (cost) {
+ u64 cost_ns = DIV64_U64_ROUND_UP(cost * NSEC_PER_USEC,
+ now->vrate);
+ blkcg_add_delay(blkg, now->now_ns, cost_ns);
+ }
+ blkcg_use_delay(blkg);
+
+ expires = now->now_ns + DIV64_U64_ROUND_UP(vtime - now->vnow,
+ now->vrate) * NSEC_PER_USEC;
+
+ /* if already active and close enough, don't bother */
+ oexpires = ktime_to_ns(hrtimer_get_softexpires(&iowg->delay_timer));
+ if (hrtimer_is_queued(&iowg->delay_timer) &&
+ abs(oexpires - expires) <= margin_ns / 4)
+ return;
+
+ hrtimer_start_range_ns(&iowg->delay_timer, ns_to_ktime(expires),
+ margin_ns / 4, HRTIMER_MODE_ABS);
+}
+
+static enum hrtimer_restart iowg_delay_timer_fn(struct hrtimer *timer)
+{
+ struct iow_gq *iowg = container_of(timer, struct iow_gq, delay_timer);
+ struct iow_now now;
+
+ iow_now(iowg->iow, &now);
+ iowg_kick_delay(iowg, &now, 0);
+
+ return HRTIMER_NORESTART;
+}
+
+static void iow_lat_stat(struct iow *iow, u32 *missed_ppm_ar, u32 *rq_wait_pct_p)
+{
+ u32 nr_met[2] = { };
+ u32 nr_missed[2] = { };
+ u64 rq_wait_ns = 0;
+ int cpu, rw;
+
+ for_each_online_cpu(cpu) {
+ struct iow_pcpu_stat *stat = per_cpu_ptr(iow->pcpu_stat, cpu);
+ u64 this_rq_wait_ns;
+
+ for (rw = READ; rw <= WRITE; rw++) {
+ u32 this_met = READ_ONCE(stat->missed[rw].nr_met);
+ u32 this_missed = READ_ONCE(stat->missed[rw].nr_missed);
+
+ nr_met[rw] += this_met - stat->missed[rw].last_met;
+ nr_missed[rw] += this_missed - stat->missed[rw].last_missed;
+ stat->missed[rw].last_met = this_met;
+ stat->missed[rw].last_missed = this_missed;
+ }
+
+ this_rq_wait_ns = READ_ONCE(stat->rq_wait_ns);
+ rq_wait_ns += this_rq_wait_ns - stat->last_rq_wait_ns;
+ stat->last_rq_wait_ns = this_rq_wait_ns;
+ }
+
+ for (rw = READ; rw <= WRITE; rw++) {
+ if (nr_met[rw] + nr_missed[rw])
+ missed_ppm_ar[rw] =
+ DIV64_U64_ROUND_UP((u64)nr_missed[rw] * MILLION,
+ nr_met[rw] + nr_missed[rw]);
+ else
+ missed_ppm_ar[rw] = 0;
+ }
+
+ *rq_wait_pct_p = div64_u64(rq_wait_ns * 100,
+ iow->period_us * NSEC_PER_USEC);
+}
+
+/* was iowg idle this period? */
+static bool iowg_is_idle(struct iow_gq *iowg)
+{
+ struct iow *iow = iowg->iow;
+
+ /* did something get issued this period? */
+ if (atomic64_read(&iowg->active_period) ==
+ atomic64_read(&iow->cur_period))
+ return false;
+
+ /* is something in flight? */
+ if (atomic64_read(&iowg->done_vtime) < atomic64_read(&iowg->vtime))
+ return false;
+
+ return true;
+}
+
+/* returns usage with margin added if surplus is large enough */
+static u32 surplus_adjusted_hweight_inuse(u32 usage, u32 hw_inuse)
+{
+ /* add margin */
+ usage = DIV_ROUND_UP(usage * SURPLUS_SCALE_PCT, 100);
+ usage += SURPLUS_SCALE_ABS;
+
+ /* don't bother if the surplus is too small */
+ if (usage + SURPLUS_MIN_ADJ_DELTA > hw_inuse)
+ return 0;
+
+ return usage;
+}
+
+static void iow_timer_fn(struct timer_list *timer)
+{
+ struct iow *iow = container_of(timer, struct iow, timer);
+ struct iow_gq *iowg, *tiowg;
+ struct iow_now now;
+ int nr_surpluses = 0, nr_shortages = 0, nr_lagging = 0;
+ u32 ppm_rthr = MILLION - iow->params.qos[QOS_RPPM];
+ u32 ppm_wthr = MILLION - iow->params.qos[QOS_WPPM];
+ u32 missed_ppm[2], rq_wait_pct;
+ u64 period_vtime;
+ int i;
+
+ /* how were the latencies during the period? */
+ iow_lat_stat(iow, missed_ppm, &rq_wait_pct);
+
+ /* take care of active iowgs */
+ spin_lock_irq(&iow->lock);
+
+ iow_now(iow, &now);
+
+ period_vtime = now.vnow - iow->period_at_vtime;
+ if (WARN_ON_ONCE(!period_vtime)) {
+ spin_unlock_irq(&iow->lock);
+ return;
+ }
+
+ /*
+ * Waiters determine the sleep durations based on the vrate they
+ * saw at the time of sleep. If vrate has increased, some waiters
+ * could be sleeping for too long. Wake up tardy waiters which
+ * should have woken up in the last period and expire idle iowgs.
+ */
+ list_for_each_entry_safe(iowg, tiowg, &iow->active_iowgs, active_list) {
+ if (!waitqueue_active(&iowg->waitq) && !iowg_is_idle(iowg))
+ continue;
+
+ spin_lock(&iowg->waitq.lock);
+
+ if (waitqueue_active(&iowg->waitq)) {
+ /* might be oversleeping vtime / hweight changes, kick */
+ iowg_kick_waitq(iowg, &now);
+ iowg_kick_delay(iowg, &now, 0);
+ } else if (iowg_is_idle(iowg)) {
+ /* no waiter and idle, deactivate */
+ iowg->last_inuse = iowg->inuse;
+ __propagate_active_weight(iowg, 0, 0);
+ list_del_init(&iowg->active_list);
+ }
+
+ spin_unlock(&iowg->waitq.lock);
+ }
+ commit_active_weights(iow);
+
+ /* calc usages and see whether some weights need to be moved around */
+ list_for_each_entry(iowg, &iow->active_iowgs, active_list) {
+ u64 vdone, vtime, vusage, vmargin, vmin;
+ u32 hw_active, hw_inuse, usage;
+
+ /*
+ * Collect unused and wind vtime closer to vnow to prevent
+ * iowgs from accumulating a large amount of budget.
+ */
+ vdone = atomic64_read(&iowg->done_vtime);
+ vtime = atomic64_read(&iowg->vtime);
+ current_hweight(iowg, &hw_active, &hw_inuse);
+
+ /*
+ * Latency QoS detection doesn't account for IOs which are
+ * in-flight for longer than a period. Detect them by
+ * comparing vdone against period start. If lagging behind
+ * IOs from past periods, don't increase vrate.
+ */
+ if (!atomic_read(&iowg_to_blkg(iowg)->use_delay) &&
+ time_after64(vtime, vdone) &&
+ time_after64(vtime, now.vnow -
+ MAX_LAGGING_PERIODS * period_vtime) &&
+ time_before64(vdone, now.vnow - period_vtime))
+ nr_lagging++;
+
+ if (waitqueue_active(&iowg->waitq))
+ vusage = now.vnow - iowg->last_vtime;
+ else if (time_before64(iowg->last_vtime, vtime))
+ vusage = vtime - iowg->last_vtime;
+ else
+ vusage = 0;
+
+ iowg->last_vtime += vusage;
+	/*
+	 * Factor in-flight vtime into vusage to avoid high-latency
+	 * completions appearing as idle. This should be done after
+	 * the above ->last_vtime adjustment.
+	 */
+	vusage = max(vusage, vtime - vdone);
+
+ /* calculate hweight based usage ratio and record */
+ if (vusage) {
+ usage = DIV64_U64_ROUND_UP(vusage * hw_inuse,
+ period_vtime);
+ iowg->usage_idx = (iowg->usage_idx + 1) % NR_USAGE_SLOTS;
+ iowg->usages[iowg->usage_idx] = usage;
+ } else {
+ usage = 0;
+ }
+
+ /* see whether there's surplus vtime */
+ vmargin = iow->margin_us * now.vrate;
+ vmin = now.vnow - vmargin;
+
+ iowg->has_surplus = false;
+
+ if (!waitqueue_active(&iowg->waitq) &&
+ time_before64(vtime, vmin)) {
+ u64 delta = vmin - vtime;
+
+ /* throw away surplus vtime */
+ atomic64_add(delta, &iowg->vtime);
+ atomic64_add(delta, &iowg->done_vtime);
+ iowg->last_vtime += delta;
+ /* if usage is sufficiently low, maybe it can donate */
+ if (surplus_adjusted_hweight_inuse(usage, hw_inuse)) {
+ iowg->has_surplus = true;
+ nr_surpluses++;
+ }
+ } else if (hw_inuse < hw_active) {
+ u32 new_hwi, new_inuse;
+
+ /* was donating but might need to take back some */
+ if (waitqueue_active(&iowg->waitq)) {
+ new_hwi = hw_active;
+ } else {
+ new_hwi = max(hw_inuse,
+ usage * SURPLUS_SCALE_PCT / 100 +
+ SURPLUS_SCALE_ABS);
+ }
+
+ new_inuse = div64_u64((u64)iowg->inuse * new_hwi,
+ hw_inuse);
+ new_inuse = clamp_t(u32, new_inuse, 1, iowg->active);
+
+ if (new_inuse > iowg->inuse) {
+ TRACE_IOWG_PATH(inuse_takeback, iowg, &now,
+ iowg->inuse, new_inuse,
+ hw_inuse, new_hwi);
+ __propagate_active_weight(iowg, iowg->weight,
+ new_inuse);
+ }
+ } else {
+			/* genuinely out of vtime */
+ nr_shortages++;
+ }
+ }
+
+ if (!nr_shortages || !nr_surpluses)
+ goto skip_surplus_transfers;
+
+ /* there are both shortages and surpluses, transfer surpluses */
+ list_for_each_entry(iowg, &iow->active_iowgs, active_list) {
+ u32 usage, hw_active, hw_inuse, new_hwi, new_inuse;
+ int nr_valid = 0;
+
+ if (!iowg->has_surplus)
+ continue;
+
+ /* base the decision on max historical usage */
+ for (i = 0, usage = 0; i < NR_USAGE_SLOTS; i++) {
+ if (iowg->usages[i]) {
+ usage = max(usage, iowg->usages[i]);
+ nr_valid++;
+ }
+ }
+ if (nr_valid < MIN_VALID_USAGES)
+ continue;
+
+ current_hweight(iowg, &hw_active, &hw_inuse);
+ new_hwi = surplus_adjusted_hweight_inuse(usage, hw_inuse);
+ if (!new_hwi)
+ continue;
+
+ new_inuse = DIV64_U64_ROUND_UP((u64)iowg->inuse * new_hwi,
+ hw_inuse);
+ if (new_inuse < iowg->inuse) {
+ TRACE_IOWG_PATH(inuse_giveaway, iowg, &now,
+ iowg->inuse, new_inuse,
+ hw_inuse, new_hwi);
+ __propagate_active_weight(iowg, iowg->weight, new_inuse);
+ }
+ }
+skip_surplus_transfers:
+ commit_active_weights(iow);
+
+ /*
+ * If q is getting clogged or we're missing too much, we're issuing
+ * too much IO and should lower vtime rate. If we're not missing
+ * and experiencing shortages but not surpluses, we're too stingy
+ * and should increase vtime rate.
+ */
+ if (rq_wait_pct > RQ_WAIT_BUSY_PCT ||
+ missed_ppm[READ] > ppm_rthr ||
+ missed_ppm[WRITE] > ppm_wthr) {
+ iow->busy_level = max(iow->busy_level, 0);
+ iow->busy_level++;
+ } else if (nr_lagging) {
+ iow->busy_level = max(iow->busy_level, 0);
+ } else if (nr_shortages && !nr_surpluses &&
+ rq_wait_pct <= RQ_WAIT_BUSY_PCT * UNBUSY_THR_PCT / 100 &&
+ missed_ppm[READ] <= ppm_rthr * UNBUSY_THR_PCT / 100 &&
+ missed_ppm[WRITE] <= ppm_wthr * UNBUSY_THR_PCT / 100) {
+ iow->busy_level = min(iow->busy_level, 0);
+ iow->busy_level--;
+ } else {
+ iow->busy_level = 0;
+ }
+
+ iow->busy_level = clamp(iow->busy_level, -1000, 1000);
+
+ if (iow->busy_level) {
+ u64 vrate = atomic64_read(&iow->vtime_rate);
+ u64 vrate_min = iow->vrate_min, vrate_max = iow->vrate_max;
+
+ /* rq_wait signal is always reliable, ignore user vrate_min */
+ if (rq_wait_pct > RQ_WAIT_BUSY_PCT)
+ vrate_min = VRATE_MIN;
+
+ /*
+ * If vrate is out of bounds, apply clamp gradually as the
+ * bounds can change abruptly. Otherwise, apply busy_level
+ * based adjustment.
+ */
+ if (vrate < vrate_min) {
+ vrate = div64_u64(vrate * (100 + VRATE_CLAMP_ADJ_PCT),
+ 100);
+ vrate = min(vrate, vrate_min);
+ } else if (vrate > vrate_max) {
+ vrate = div64_u64(vrate * (100 - VRATE_CLAMP_ADJ_PCT),
+ 100);
+ vrate = max(vrate, vrate_max);
+ } else {
+ int idx = min_t(int, abs(iow->busy_level),
+ ARRAY_SIZE(vrate_adj_pct) - 1);
+ u32 adj_pct = vrate_adj_pct[idx];
+
+ if (iow->busy_level > 0)
+ adj_pct = 100 - adj_pct;
+ else
+ adj_pct = 100 + adj_pct;
+
+ vrate = clamp(DIV64_U64_ROUND_UP(vrate * adj_pct, 100),
+ vrate_min, vrate_max);
+ }
+
+		trace_ioweight_iow_vrate_adj(iow, vrate, missed_ppm, rq_wait_pct,
+					     nr_lagging, nr_shortages,
+					     nr_surpluses);
+
+ atomic64_set(&iow->vtime_rate, vrate);
+ iow->inuse_margin_vtime = DIV64_U64_ROUND_UP(
+ iow->period_us * vrate * INUSE_MARGIN_PCT, 100);
+ }
+
+ iow_refresh_params(iow, false);
+
+ /*
+ * This period is done. Move onto the next one. If nothing's
+ * going on with the device, stop the timer.
+ */
+ atomic64_inc(&iow->cur_period);
+
+ if (iow->running != IOW_STOP) {
+ if (!list_empty(&iow->active_iowgs)) {
+ iow_start_period(iow, &now);
+ } else {
+ iow->busy_level = 0;
+ iow->running = IOW_IDLE;
+ }
+ }
+
+ spin_unlock_irq(&iow->lock);
+}
+
+static void calc_vtime_cost_builtin(struct bio *bio, struct iow_gq *iowg,
+ bool is_merge, u64 *costp)
+{
+ struct iow *iow = iowg->iow;
+ u64 coef_seqio, coef_randio, coef_page;
+ u64 pages = max_t(u64, bio_sectors(bio) >> IOW_SECT_TO_PAGE_SHIFT, 1);
+ u64 seek_pages = 0;
+ u64 cost = 0;
+
+ switch (bio_op(bio)) {
+ case REQ_OP_READ:
+ coef_seqio = iow->params.lcoefs[LCOEF_RSEQIO];
+ coef_randio = iow->params.lcoefs[LCOEF_RRANDIO];
+ coef_page = iow->params.lcoefs[LCOEF_RPAGE];
+ break;
+ case REQ_OP_WRITE:
+ coef_seqio = iow->params.lcoefs[LCOEF_WSEQIO];
+ coef_randio = iow->params.lcoefs[LCOEF_WRANDIO];
+ coef_page = iow->params.lcoefs[LCOEF_WPAGE];
+ break;
+ default:
+ goto out;
+ }
+
+ if (iowg->cursor) {
+ seek_pages = abs(bio->bi_iter.bi_sector - iowg->cursor);
+ seek_pages >>= IOW_SECT_TO_PAGE_SHIFT;
+ }
+
+ if (!is_merge) {
+ if (seek_pages > LCOEF_RANDIO_PAGES) {
+ cost += coef_randio;
+ } else {
+ cost += coef_seqio;
+ }
+ }
+ cost += pages * coef_page;
+out:
+ *costp = cost;
+}
+
+static u64 calc_vtime_cost(struct bio *bio, struct iow_gq *iowg, bool is_merge)
+{
+ u64 cost;
+
+ calc_vtime_cost_builtin(bio, iowg, is_merge, &cost);
+ return cost;
+}
+
+static void iow_rqos_throttle(struct rq_qos *rqos, struct bio *bio)
+{
+ struct blkcg_gq *blkg = bio->bi_blkg;
+ struct iow *iow = rqos_to_iow(rqos);
+ struct iow_gq *iowg = blkg_to_iowg(blkg);
+ struct iow_now now;
+ struct iowg_wait wait;
+ u32 hw_active, hw_inuse;
+ u64 abs_cost, cost, vtime;
+
+ /* bypass IOs if disabled or for root cgroup */
+ if (!iow->enabled || !iowg->level)
+ return;
+
+ /* always activate so that even 0 cost IOs get protected to some level */
+ if (!iowg_activate(iowg, &now))
+ return;
+
+ /* calculate the absolute vtime cost */
+ abs_cost = calc_vtime_cost(bio, iowg, false);
+ if (!abs_cost)
+ return;
+
+ iowg->cursor = bio_end_sector(bio);
+
+ vtime = atomic64_read(&iowg->vtime);
+ current_hweight(iowg, &hw_active, &hw_inuse);
+
+ if (hw_inuse < hw_active &&
+ time_after_eq64(vtime + iow->inuse_margin_vtime, now.vnow)) {
+ TRACE_IOWG_PATH(inuse_reset, iowg, &now,
+ iowg->inuse, iowg->weight, hw_inuse, hw_active);
+ spin_lock_irq(&iow->lock);
+ propagate_active_weight(iowg, iowg->weight, iowg->weight);
+ spin_unlock_irq(&iow->lock);
+ current_hweight(iowg, &hw_active, &hw_inuse);
+ }
+
+ cost = abs_cost_to_cost(abs_cost, hw_inuse);
+
+ /*
+ * If no one's waiting and within budget, issue right away. The
+ * tests are racy but the races aren't systemic - we only miss once
+ * in a while which is fine.
+ */
+ if (!waitqueue_active(&iowg->waitq) &&
+ time_before_eq64(vtime + cost, now.vnow)) {
+ iowg_commit_bio(iowg, bio, cost);
+ return;
+ }
+
+ if (bio_issue_as_root_blkg(bio) || fatal_signal_pending(current)) {
+ iowg_commit_bio(iowg, bio, cost);
+ iowg_kick_delay(iowg, &now, cost);
+ return;
+ }
+
+ /*
+ * Append self to the waitq and schedule the wakeup timer if we're
+ * the first waiter. The timer duration is calculated based on the
+ * current vrate. vtime and hweight changes can make it too short
+ * or too long. Each wait entry records the absolute cost it's
+ * waiting for to allow re-evaluation using a custom wait entry.
+ *
+ * If too short, the timer simply reschedules itself. If too long,
+ * the period timer will notice and trigger wakeups.
+ *
+ * All waiters are on iowg->waitq and the wait states are
+ * synchronized using waitq.lock.
+ */
+ spin_lock_irq(&iowg->waitq.lock);
+
+ /*
+ * We activated above but w/o any synchronization. Deactivation is
+ * synchronized with waitq.lock and we won't get deactivated as
+ * long as we're waiting, so we're good if we're activated here.
+ * In the unlikely case that we are deactivated, just issue the IO.
+ */
+ if (unlikely(list_empty(&iowg->active_list))) {
+ spin_unlock_irq(&iowg->waitq.lock);
+ iowg_commit_bio(iowg, bio, cost);
+ return;
+ }
+
+ init_waitqueue_func_entry(&wait.wait, iowg_wake_fn);
+ wait.wait.private = current;
+ wait.bio = bio;
+ wait.abs_cost = abs_cost;
+ wait.committed = false; /* will be set true by waker */
+
+ __add_wait_queue_entry_tail(&iowg->waitq, &wait.wait);
+ iowg_kick_waitq(iowg, &now);
+
+ spin_unlock_irq(&iowg->waitq.lock);
+
+ while (true) {
+ set_current_state(TASK_UNINTERRUPTIBLE);
+ if (wait.committed)
+ break;
+ io_schedule();
+ }
+
+ /* waker already committed us, proceed */
+ finish_wait(&iowg->waitq, &wait.wait);
+}
+
+static void iow_rqos_merge(struct rq_qos *rqos, struct request *rq,
+ struct bio *bio)
+{
+ struct iow_gq *iowg = blkg_to_iowg(bio->bi_blkg);
+ sector_t bio_end = bio_end_sector(bio);
+ u32 hw_active, hw_inuse;
+ u64 abs_cost, cost;
+
+ /* add iff the existing request has cost assigned */
+ if (!rq->bio || !rq->bio->bi_ioweight_cost)
+ return;
+
+ abs_cost = calc_vtime_cost(bio, iowg, true);
+ if (!abs_cost)
+ return;
+
+ /* update cursor if backmerging into the request at the cursor */
+ if (blk_rq_pos(rq) < bio_end &&
+ blk_rq_pos(rq) + blk_rq_sectors(rq) == iowg->cursor)
+ iowg->cursor = bio_end;
+
+ current_hweight(iowg, &hw_active, &hw_inuse);
+	cost = abs_cost_to_cost(abs_cost, hw_inuse);
+ bio->bi_ioweight_cost = cost;
+
+ atomic64_add(cost, &iowg->vtime);
+}
+
+static void iow_rqos_done_bio(struct rq_qos *rqos, struct bio *bio)
+{
+ struct iow_gq *iowg = blkg_to_iowg(bio->bi_blkg);
+
+ if (iowg && bio->bi_ioweight_cost)
+ atomic64_add(bio->bi_ioweight_cost, &iowg->done_vtime);
+}
+
+static void iow_rqos_done(struct rq_qos *rqos, struct request *rq)
+{
+ struct iow *iow = rqos_to_iow(rqos);
+ u64 on_q_ns, rq_wait_ns;
+ int pidx, rw;
+
+ if (!iow->enabled || !rq->pre_start_time_ns || !rq->start_time_ns)
+ return;
+
+ switch (req_op(rq) & REQ_OP_MASK) {
+ case REQ_OP_READ:
+ pidx = QOS_RLAT;
+ rw = READ;
+ break;
+ case REQ_OP_WRITE:
+ pidx = QOS_WLAT;
+ rw = WRITE;
+ break;
+ default:
+ return;
+ }
+
+ on_q_ns = ktime_get_ns() - rq->pre_start_time_ns;
+ rq_wait_ns = rq->start_time_ns - rq->pre_start_time_ns;
+
+ if (on_q_ns <= iow->params.qos[pidx] * NSEC_PER_USEC)
+ this_cpu_inc(iow->pcpu_stat->missed[rw].nr_met);
+ else
+ this_cpu_inc(iow->pcpu_stat->missed[rw].nr_missed);
+
+ this_cpu_add(iow->pcpu_stat->rq_wait_ns, rq_wait_ns);
+}
+
+static void iow_rqos_queue_depth_changed(struct rq_qos *rqos)
+{
+ struct iow *iow = rqos_to_iow(rqos);
+
+ spin_lock_irq(&iow->lock);
+ iow_refresh_params(iow, false);
+ spin_unlock_irq(&iow->lock);
+}
+
+static void iow_rqos_exit(struct rq_qos *rqos)
+{
+ struct iow *iow = rqos_to_iow(rqos);
+
+ blkcg_deactivate_policy(rqos->q, &blkcg_policy_iow);
+
+ spin_lock_irq(&iow->lock);
+ iow->running = IOW_STOP;
+ spin_unlock_irq(&iow->lock);
+
+ del_timer_sync(&iow->timer);
+ free_percpu(iow->pcpu_stat);
+ kfree(iow);
+}
+
+static struct rq_qos_ops iow_rqos_ops = {
+ .throttle = iow_rqos_throttle,
+ .merge = iow_rqos_merge,
+ .done_bio = iow_rqos_done_bio,
+ .done = iow_rqos_done,
+ .queue_depth_changed = iow_rqos_queue_depth_changed,
+ .exit = iow_rqos_exit,
+};
+
+static int blk_ioweight_init(struct request_queue *q)
+{
+ struct iow *iow;
+ struct rq_qos *rqos;
+ int ret;
+
+ iow = kzalloc(sizeof(*iow), GFP_KERNEL);
+ if (!iow)
+ return -ENOMEM;
+
+ iow->pcpu_stat = alloc_percpu(struct iow_pcpu_stat);
+ if (!iow->pcpu_stat) {
+ kfree(iow);
+ return -ENOMEM;
+ }
+
+ rqos = &iow->rqos;
+ rqos->id = RQ_QOS_WEIGHT;
+ rqos->ops = &iow_rqos_ops;
+ rqos->q = q;
+
+ spin_lock_init(&iow->lock);
+ timer_setup(&iow->timer, iow_timer_fn, 0);
+ INIT_LIST_HEAD(&iow->active_iowgs);
+
+ iow->running = IOW_IDLE;
+ atomic64_set(&iow->vtime_rate, VTIME_PER_USEC);
+ seqcount_init(&iow->period_seqcount);
+ iow->period_at = ktime_to_us(ktime_get());
+ atomic64_set(&iow->cur_period, 0);
+ atomic_set(&iow->hweight_gen, 0);
+
+ spin_lock_irq(&iow->lock);
+ iow->autop_idx = AUTOP_INVALID;
+ iow_refresh_params(iow, true);
+ spin_unlock_irq(&iow->lock);
+
+ rq_qos_add(q, rqos);
+ ret = blkcg_activate_policy(q, &blkcg_policy_iow);
+ if (ret) {
+ rq_qos_del(q, rqos);
+ kfree(iow);
+ return ret;
+ }
+ return 0;
+}
+
+static struct blkcg_policy_data *iow_cpd_alloc(gfp_t gfp)
+{
+ struct iow_cgrp *iowc;
+
+	iowc = kzalloc(sizeof(struct iow_cgrp), gfp);
+	if (!iowc)
+		return NULL;
+
+	iowc->dfl_weight = CGROUP_WEIGHT_DFL;
+
+ return &iowc->cpd;
+}
+
+static void iow_cpd_free(struct blkcg_policy_data *cpd)
+{
+ kfree(container_of(cpd, struct iow_cgrp, cpd));
+}
+
+static struct blkg_policy_data *iow_pd_alloc(gfp_t gfp, struct request_queue *q,
+ struct blkcg *blkcg)
+{
+ int levels = blkcg->css.cgroup->level + 1;
+ struct iow_gq *iowg;
+
+ iowg = kzalloc_node(sizeof(*iowg) + levels * sizeof(iowg->ancestors[0]),
+ gfp, q->node);
+ if (!iowg)
+ return NULL;
+
+ return &iowg->pd;
+}
+
+static void iow_pd_init(struct blkg_policy_data *pd)
+{
+ struct iow_gq *iowg = pd_to_iowg(pd);
+ struct blkcg_gq *blkg = pd_to_blkg(&iowg->pd);
+ struct iow *iow = q_to_iow(blkg->q);
+ struct iow_now now;
+ struct blkcg_gq *tblkg;
+ unsigned long flags;
+
+ iow_now(iow, &now);
+
+ iowg->iow = iow;
+ atomic64_set(&iowg->vtime, now.vnow);
+ atomic64_set(&iowg->done_vtime, now.vnow);
+ atomic64_set(&iowg->active_period, atomic64_read(&iow->cur_period));
+ INIT_LIST_HEAD(&iowg->active_list);
+
+ init_waitqueue_head(&iowg->waitq);
+ hrtimer_init(&iowg->waitq_timer, CLOCK_MONOTONIC, HRTIMER_MODE_ABS);
+ iowg->waitq_timer.function = iowg_waitq_timer_fn;
+ hrtimer_init(&iowg->delay_timer, CLOCK_MONOTONIC, HRTIMER_MODE_ABS);
+ iowg->delay_timer.function = iowg_delay_timer_fn;
+
+ iowg->level = blkg->blkcg->css.cgroup->level;
+
+ for (tblkg = blkg; tblkg; tblkg = tblkg->parent) {
+ struct iow_gq *tiowg = blkg_to_iowg(tblkg);
+ iowg->ancestors[tiowg->level] = tiowg;
+ }
+
+ spin_lock_irqsave(&iow->lock, flags);
+ weight_updated(iowg);
+ spin_unlock_irqrestore(&iow->lock, flags);
+}
+
+static void iow_pd_free(struct blkg_policy_data *pd)
+{
+	struct iow_gq *iowg = pd_to_iowg(pd);
+	struct iow *iow = iowg->iow;
+	unsigned long flags;
+
+	if (iow) {
+		hrtimer_cancel(&iowg->waitq_timer);
+		hrtimer_cancel(&iowg->delay_timer);
+
+		/* iow->lock is taken with irqs disabled elsewhere */
+		spin_lock_irqsave(&iow->lock, flags);
+		if (!list_empty(&iowg->active_list)) {
+			propagate_active_weight(iowg, 0, 0);
+			list_del_init(&iowg->active_list);
+		}
+		spin_unlock_irqrestore(&iow->lock, flags);
+	}
+ }
+ kfree(iowg);
+}
+
+static u64 iow_weight_prfill(struct seq_file *sf, struct blkg_policy_data *pd,
+ int off)
+{
+ const char *dname = blkg_dev_name(pd->blkg);
+ struct iow_gq *iowg = pd_to_iowg(pd);
+
+ if (dname && iowg->cfg_weight)
+ seq_printf(sf, "%s %u\n", dname, iowg->cfg_weight);
+ return 0;
+}
+
+static int iow_weight_show(struct seq_file *sf, void *v)
+{
+ struct blkcg *blkcg = css_to_blkcg(seq_css(sf));
+ struct iow_cgrp *iowc = blkcg_to_iowc(blkcg);
+
+ seq_printf(sf, "default %u\n", iowc->dfl_weight);
+ blkcg_print_blkgs(sf, blkcg, iow_weight_prfill,
+ &blkcg_policy_iow, seq_cft(sf)->private, false);
+ return 0;
+}
+
+static ssize_t iow_weight_write(struct kernfs_open_file *of, char *buf,
+ size_t nbytes, loff_t off)
+{
+ struct blkcg *blkcg = css_to_blkcg(of_css(of));
+ struct iow_cgrp *iowc = blkcg_to_iowc(blkcg);
+ struct blkg_conf_ctx ctx;
+ struct iow_gq *iowg;
+ u32 v;
+ int ret;
+
+ if (!strchr(buf, ':')) {
+ struct blkcg_gq *blkg;
+
+ if (!sscanf(buf, "default %u", &v) && !sscanf(buf, "%u", &v))
+ return -EINVAL;
+
+ if (v < CGROUP_WEIGHT_MIN || v > CGROUP_WEIGHT_MAX)
+ return -EINVAL;
+
+ spin_lock(&blkcg->lock);
+ iowc->dfl_weight = v;
+ hlist_for_each_entry(blkg, &blkcg->blkg_list, blkcg_node) {
+ struct iow_gq *iowg = blkg_to_iowg(blkg);
+
+ if (iowg) {
+ spin_lock_irq(&iowg->iow->lock);
+ weight_updated(iowg);
+ spin_unlock_irq(&iowg->iow->lock);
+ }
+ }
+ spin_unlock(&blkcg->lock);
+
+ return nbytes;
+ }
+
+ ret = blkg_conf_prep(blkcg, &blkcg_policy_iow, buf, &ctx);
+ if (ret)
+ return ret;
+
+ iowg = blkg_to_iowg(ctx.blkg);
+
+ if (!strncmp(ctx.body, "default", 7)) {
+ v = 0;
+ } else {
+ if (!sscanf(ctx.body, "%u", &v))
+ goto einval;
+ if (v < CGROUP_WEIGHT_MIN || v > CGROUP_WEIGHT_MAX)
+ goto einval;
+ }
+
+ spin_lock_irq(&iowg->iow->lock);
+ iowg->cfg_weight = v;
+ weight_updated(iowg);
+ spin_unlock_irq(&iowg->iow->lock);
+
+ blkg_conf_finish(&ctx);
+ return nbytes;
+
+einval:
+ blkg_conf_finish(&ctx);
+ return -EINVAL;
+}
+
+static u64 iow_qos_prfill(struct seq_file *sf, struct blkg_policy_data *pd,
+ int off)
+{
+ const char *dname = blkg_dev_name(pd->blkg);
+ struct iow *iow = pd_to_iowg(pd)->iow;
+
+ if (!dname)
+ return 0;
+
+ seq_printf(sf, "%s enable=%d ctrl=%s rpct=%u.%02u rlat=%u wpct=%u.%02u wlat=%u min=%u.%02u max=%u.%02u\n",
+ dname, iow->enabled, iow->user_qos_params ? "user" : "auto",
+ iow->params.qos[QOS_RPPM] / 10000,
+ iow->params.qos[QOS_RPPM] % 10000 / 100,
+ iow->params.qos[QOS_RLAT],
+ iow->params.qos[QOS_WPPM] / 10000,
+ iow->params.qos[QOS_WPPM] % 10000 / 100,
+ iow->params.qos[QOS_WLAT],
+ iow->params.qos[QOS_MIN] / 10000,
+ iow->params.qos[QOS_MIN] % 10000 / 100,
+ iow->params.qos[QOS_MAX] / 10000,
+ iow->params.qos[QOS_MAX] % 10000 / 100);
+ return 0;
+}
+
+static int iow_qos_show(struct seq_file *sf, void *v)
+{
+ struct blkcg *blkcg = css_to_blkcg(seq_css(sf));
+
+ blkcg_print_blkgs(sf, blkcg, iow_qos_prfill,
+ &blkcg_policy_iow, seq_cft(sf)->private, false);
+ return 0;
+}
+
+static const match_table_t qos_ctrl_tokens = {
+ { QOS_ENABLE, "enable=%u" },
+ { QOS_CTRL, "ctrl=%s" },
+ { NR_QOS_CTRL_PARAMS, NULL },
+};
+
+static const match_table_t qos_tokens = {
+ { QOS_RPPM, "rpct=%s" },
+ { QOS_RLAT, "rlat=%u" },
+ { QOS_WPPM, "wpct=%s" },
+ { QOS_WLAT, "wlat=%u" },
+ { QOS_MIN, "min=%s" },
+ { QOS_MAX, "max=%s" },
+ { NR_QOS_PARAMS, NULL },
+};
+
+static ssize_t iow_qos_write(struct kernfs_open_file *of, char *input,
+ size_t nbytes, loff_t off)
+{
+ struct gendisk *disk;
+ struct iow *iow;
+ u32 qos[NR_QOS_PARAMS];
+ bool enable, user;
+ char *p;
+ int ret;
+
+ disk = blkcg_conf_get_disk(&input);
+ if (IS_ERR(disk))
+ return PTR_ERR(disk);
+
+ iow = q_to_iow(disk->queue);
+ if (!iow) {
+ ret = blk_ioweight_init(disk->queue);
+ if (ret)
+ goto err;
+ iow = q_to_iow(disk->queue);
+ }
+
+ spin_lock_irq(&iow->lock);
+ memcpy(qos, iow->params.qos, sizeof(qos));
+ enable = iow->enabled;
+ user = iow->user_qos_params;
+ spin_unlock_irq(&iow->lock);
+
+ while ((p = strsep(&input, " \t\n"))) {
+ substring_t args[MAX_OPT_ARGS];
+ char buf[32];
+ int tok;
+ s64 v;
+
+ if (!*p)
+ continue;
+
+ switch (match_token(p, qos_ctrl_tokens, args)) {
+ case QOS_ENABLE:
+ match_u64(&args[0], &v);
+ enable = v;
+ continue;
+ case QOS_CTRL:
+ match_strlcpy(buf, &args[0], sizeof(buf));
+ if (!strcmp(buf, "auto"))
+ user = false;
+ else if (!strcmp(buf, "user"))
+ user = true;
+ else
+ goto einval;
+ continue;
+ }
+
+ tok = match_token(p, qos_tokens, args);
+ switch (tok) {
+ case QOS_RPPM:
+ case QOS_WPPM:
+ if (match_strlcpy(buf, &args[0], sizeof(buf)) >=
+ sizeof(buf))
+ goto einval;
+ if (cgroup_parse_float(buf, 4, &v))
+ goto einval;
+ qos[tok] = clamp_t(s64, v, 0, MILLION);
+ break;
+ case QOS_RLAT:
+ case QOS_WLAT:
+ if (match_u64(&args[0], &v))
+ goto einval;
+ qos[tok] = v;
+ break;
+ case QOS_MIN:
+ case QOS_MAX:
+ if (match_strlcpy(buf, &args[0], sizeof(buf)) >=
+ sizeof(buf))
+ goto einval;
+ if (cgroup_parse_float(buf, 4, &v))
+ goto einval;
+ qos[tok] = clamp_t(s64, v, VRATE_MIN_PPM, VRATE_MAX_PPM);
+ break;
+ default:
+ goto einval;
+ }
+ user = true;
+ }
+
+ if (qos[QOS_MIN] > qos[QOS_MAX])
+ goto einval;
+
+ spin_lock_irq(&iow->lock);
+
+ if (enable) {
+ blk_queue_flag_set(QUEUE_FLAG_REC_PRESTART, iow->rqos.q);
+ iow->enabled = true;
+ } else {
+ blk_queue_flag_clear(QUEUE_FLAG_REC_PRESTART, iow->rqos.q);
+ iow->enabled = false;
+ }
+
+ if (user) {
+ memcpy(iow->params.qos, qos, sizeof(qos));
+ iow->user_qos_params = true;
+ } else {
+ iow->user_qos_params = false;
+ }
+
+ iow_refresh_params(iow, true);
+ spin_unlock_irq(&iow->lock);
+
+ put_disk_and_module(disk);
+ return nbytes;
+einval:
+ ret = -EINVAL;
+err:
+ put_disk_and_module(disk);
+ return ret;
+}
+
+static u64 iow_cost_model_prfill(struct seq_file *sf,
+ struct blkg_policy_data *pd, int off)
+{
+ const char *dname = blkg_dev_name(pd->blkg);
+ struct iow *iow = pd_to_iowg(pd)->iow;
+ u64 *u = iow->params.i_lcoefs;
+
+ if (!dname)
+ return 0;
+
+ seq_printf(sf, "%s ctrl=%s model=linear "
+ "rbps=%llu rseqiops=%llu rrandiops=%llu "
+ "wbps=%llu wseqiops=%llu wrandiops=%llu\n",
+ dname, iow->user_cost_model ? "user" : "auto",
+ u[I_LCOEF_RBPS],
+ u[I_LCOEF_RSEQIOPS], u[I_LCOEF_RRANDIOPS],
+ u[I_LCOEF_WBPS],
+ u[I_LCOEF_WSEQIOPS], u[I_LCOEF_WRANDIOPS]);
+ return 0;
+}
+
+static int iow_cost_model_show(struct seq_file *sf, void *v)
+{
+ struct blkcg *blkcg = css_to_blkcg(seq_css(sf));
+
+ blkcg_print_blkgs(sf, blkcg, iow_cost_model_prfill,
+ &blkcg_policy_iow, seq_cft(sf)->private, false);
+ return 0;
+}
+
+static const match_table_t cost_ctrl_tokens = {
+ { COST_CTRL, "ctrl=%s" },
+ { COST_MODEL, "model=%s" },
+ { NR_COST_CTRL_PARAMS, NULL },
+};
+
+static const match_table_t i_lcoef_tokens = {
+ { I_LCOEF_RBPS, "rbps=%u" },
+ { I_LCOEF_RSEQIOPS, "rseqiops=%u" },
+ { I_LCOEF_RRANDIOPS, "rrandiops=%u" },
+ { I_LCOEF_WBPS, "wbps=%u" },
+ { I_LCOEF_WSEQIOPS, "wseqiops=%u" },
+ { I_LCOEF_WRANDIOPS, "wrandiops=%u" },
+ { NR_I_LCOEFS, NULL },
+};
+
+ssize_t iow_cost_model_write(struct kernfs_open_file *of, char *input,
+ size_t nbytes, loff_t off)
+{
+ struct gendisk *disk;
+ struct iow *iow;
+ u64 u[NR_I_LCOEFS];
+ bool user;
+ char *p;
+ int ret;
+
+ disk = blkcg_conf_get_disk(&input);
+ if (IS_ERR(disk))
+ return PTR_ERR(disk);
+
+ iow = q_to_iow(disk->queue);
+ if (!iow) {
+ ret = blk_ioweight_init(disk->queue);
+ if (ret)
+ goto err;
+ iow = q_to_iow(disk->queue);
+ }
+
+ spin_lock_irq(&iow->lock);
+ memcpy(u, iow->params.i_lcoefs, sizeof(u));
+ user = iow->user_cost_model;
+ spin_unlock_irq(&iow->lock);
+
+ while ((p = strsep(&input, " \t\n"))) {
+ substring_t args[MAX_OPT_ARGS];
+ char buf[32];
+ int tok;
+ u64 v;
+
+ if (!*p)
+ continue;
+
+ switch (match_token(p, cost_ctrl_tokens, args)) {
+ case COST_CTRL:
+ match_strlcpy(buf, &args[0], sizeof(buf));
+ if (!strcmp(buf, "auto"))
+ user = false;
+ else if (!strcmp(buf, "user"))
+ user = true;
+ else
+ goto einval;
+ continue;
+ case COST_MODEL:
+ match_strlcpy(buf, &args[0], sizeof(buf));
+ if (strcmp(buf, "linear"))
+ goto einval;
+ continue;
+ }
+
+ tok = match_token(p, i_lcoef_tokens, args);
+ if (tok == NR_I_LCOEFS)
+ goto einval;
+ if (match_u64(&args[0], &v))
+ goto einval;
+ u[tok] = v;
+ user = true;
+ }
+
+ spin_lock_irq(&iow->lock);
+ if (user) {
+ memcpy(iow->params.i_lcoefs, u, sizeof(u));
+ iow->user_cost_model = true;
+ } else {
+ iow->user_cost_model = false;
+ }
+ iow_refresh_params(iow, true);
+ spin_unlock_irq(&iow->lock);
+
+ put_disk_and_module(disk);
+ return nbytes;
+
+einval:
+ ret = -EINVAL;
+err:
+ put_disk_and_module(disk);
+ return ret;
+}
+
+static struct cftype iow_files[] = {
+ {
+ .name = "weight",
+ .flags = CFTYPE_NOT_ON_ROOT,
+ .seq_show = iow_weight_show,
+ .write = iow_weight_write,
+ },
+ {
+ .name = "weight.qos",
+ .flags = CFTYPE_ONLY_ON_ROOT,
+ .seq_show = iow_qos_show,
+ .write = iow_qos_write,
+ },
+ {
+ .name = "weight.cost_model",
+ .flags = CFTYPE_ONLY_ON_ROOT,
+ .seq_show = iow_cost_model_show,
+ .write = iow_cost_model_write,
+ },
+ {}
+};
+
+static struct blkcg_policy blkcg_policy_iow = {
+ .dfl_cftypes = iow_files,
+ .cpd_alloc_fn = iow_cpd_alloc,
+ .cpd_free_fn = iow_cpd_free,
+ .pd_alloc_fn = iow_pd_alloc,
+ .pd_init_fn = iow_pd_init,
+ .pd_free_fn = iow_pd_free,
+};
+
+static int __init iow_init(void)
+{
+ return blkcg_policy_register(&blkcg_policy_iow);
+}
+
+static void __exit iow_exit(void)
+{
+ return blkcg_policy_unregister(&blkcg_policy_iow);
+}
+
+module_init(iow_init);
+module_exit(iow_exit);
diff --git a/block/blk-rq-qos.h b/block/blk-rq-qos.h
index 5f8b75826a98..1db79a2b48ff 100644
--- a/block/blk-rq-qos.h
+++ b/block/blk-rq-qos.h
@@ -15,6 +15,7 @@ struct blk_mq_debugfs_attr;
enum rq_qos_id {
RQ_QOS_WBT,
RQ_QOS_LATENCY,
+ RQ_QOS_WEIGHT,
};

struct rq_wait {
@@ -84,6 +85,8 @@ static inline const char *rq_qos_id_to_name(enum rq_qos_id id)
return "wbt";
case RQ_QOS_LATENCY:
return "latency";
+ case RQ_QOS_WEIGHT:
+ return "weight";
}
return "unknown";
}
diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
index 678932cc42c5..17db42b104bb 100644
--- a/include/linux/blk_types.h
+++ b/include/linux/blk_types.h
@@ -174,6 +174,9 @@ struct bio {
*/
struct blkcg_gq *bi_blkg;
struct bio_issue bi_issue;
+#ifdef CONFIG_BLK_CGROUP_IOWEIGHT
+ u64 bi_ioweight_cost;
+#endif
#endif
union {
#if defined(CONFIG_BLK_DEV_INTEGRITY)
diff --git a/include/trace/events/ioweight.h b/include/trace/events/ioweight.h
new file mode 100644
index 000000000000..8e5104f46408
--- /dev/null
+++ b/include/trace/events/ioweight.h
@@ -0,0 +1,174 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#undef TRACE_SYSTEM
+#define TRACE_SYSTEM ioweight
+
+#if !defined(_TRACE_BLK_IOWEIGHT_H) || defined(TRACE_HEADER_MULTI_READ)
+#define _TRACE_BLK_IOWEIGHT_H
+
+#include <linux/tracepoint.h>
+
+TRACE_EVENT(ioweight_iowg_activate,
+
+ TP_PROTO(struct iow_gq *iowg, const char *path, struct iow_now *now,
+ u64 last_period, u64 cur_period, u64 vtime),
+
+ TP_ARGS(iowg, path, now, last_period, cur_period, vtime),
+
+ TP_STRUCT__entry (
+ __string(devname, iow_name(iowg->iow))
+ __string(cgroup, path)
+ __field(u64, now)
+ __field(u64, vnow)
+ __field(u64, vrate)
+ __field(u64, last_period)
+ __field(u64, cur_period)
+ __field(u64, last_vtime)
+ __field(u64, vtime)
+ __field(u32, weight)
+ __field(u32, inuse)
+ __field(u64, hweight_active)
+ __field(u64, hweight_inuse)
+ ),
+
+ TP_fast_assign(
+ __assign_str(devname, iow_name(iowg->iow));
+ __assign_str(cgroup, path);
+ __entry->now = now->now;
+ __entry->vnow = now->vnow;
+ __entry->vrate = now->vrate;
+ __entry->last_period = last_period;
+ __entry->cur_period = cur_period;
+ __entry->last_vtime = iowg->last_vtime;
+ __entry->vtime = vtime;
+ __entry->weight = iowg->weight;
+ __entry->inuse = iowg->inuse;
+ __entry->hweight_active = iowg->hweight_active;
+ __entry->hweight_inuse = iowg->hweight_inuse;
+ ),
+
+ TP_printk("[%s:%s] now=%llu:%llu vrate=%llu "
+ "period=%llu->%llu vtime=%llu->%llu "
+ "weight=%u/%u hweight=%llu/%llu",
+ __get_str(devname), __get_str(cgroup),
+ __entry->now, __entry->vnow, __entry->vrate,
+ __entry->last_period, __entry->cur_period,
+ __entry->last_vtime, __entry->vtime,
+ __entry->inuse, __entry->weight,
+ __entry->hweight_inuse, __entry->hweight_active
+ )
+);
+
+DECLARE_EVENT_CLASS(iowg_inuse_update,
+
+ TP_PROTO(struct iow_gq *iowg, const char *path, struct iow_now *now,
+ u32 old_inuse, u32 new_inuse,
+ u64 old_hw_inuse, u64 new_hw_inuse),
+
+ TP_ARGS(iowg, path, now, old_inuse, new_inuse,
+ old_hw_inuse, new_hw_inuse),
+
+ TP_STRUCT__entry (
+ __string(devname, iow_name(iowg->iow))
+ __string(cgroup, path)
+ __field(u64, now)
+ __field(u32, old_inuse)
+ __field(u32, new_inuse)
+ __field(u64, old_hweight_inuse)
+ __field(u64, new_hweight_inuse)
+ ),
+
+ TP_fast_assign(
+ __assign_str(devname, iow_name(iowg->iow));
+ __assign_str(cgroup, path);
+ __entry->now = now->now;
+ __entry->old_inuse = old_inuse;
+ __entry->new_inuse = new_inuse;
+ __entry->old_hweight_inuse = old_hw_inuse;
+ __entry->new_hweight_inuse = new_hw_inuse;
+ ),
+
+ TP_printk("[%s:%s] now=%llu inuse=%u->%u hw_inuse=%llu->%llu",
+ __get_str(devname), __get_str(cgroup), __entry->now,
+ __entry->old_inuse, __entry->new_inuse,
+ __entry->old_hweight_inuse, __entry->new_hweight_inuse
+ )
+);
+
+DEFINE_EVENT(iowg_inuse_update, ioweight_inuse_takeback,
+
+ TP_PROTO(struct iow_gq *iowg, const char *path, struct iow_now *now,
+ u32 old_inuse, u32 new_inuse,
+ u64 old_hw_inuse, u64 new_hw_inuse),
+
+ TP_ARGS(iowg, path, now, old_inuse, new_inuse,
+ old_hw_inuse, new_hw_inuse)
+);
+
+DEFINE_EVENT(iowg_inuse_update, ioweight_inuse_giveaway,
+
+ TP_PROTO(struct iow_gq *iowg, const char *path, struct iow_now *now,
+ u32 old_inuse, u32 new_inuse,
+ u64 old_hw_inuse, u64 new_hw_inuse),
+
+ TP_ARGS(iowg, path, now, old_inuse, new_inuse,
+ old_hw_inuse, new_hw_inuse)
+);
+
+DEFINE_EVENT(iowg_inuse_update, ioweight_inuse_reset,
+
+ TP_PROTO(struct iow_gq *iowg, const char *path, struct iow_now *now,
+ u32 old_inuse, u32 new_inuse,
+ u64 old_hw_inuse, u64 new_hw_inuse),
+
+ TP_ARGS(iowg, path, now, old_inuse, new_inuse,
+ old_hw_inuse, new_hw_inuse)
+);
+
+TRACE_EVENT(ioweight_iow_vrate_adj,
+
+ TP_PROTO(struct iow *iow, u64 new_vrate, u32 (*missed_ppm)[2],
+ u32 rq_wait_pct, int nr_lagging, int nr_shortages,
+ int nr_surpluses),
+
+ TP_ARGS(iow, new_vrate, missed_ppm, rq_wait_pct, nr_lagging, nr_shortages,
+ nr_surpluses),
+
+ TP_STRUCT__entry (
+ __string(devname, iow_name(iow))
+ __field(u64, old_vrate)
+ __field(u64, new_vrate)
+ __field(int, busy_level)
+ __field(u32, read_missed_ppm)
+ __field(u32, write_missed_ppm)
+ __field(u32, rq_wait_pct)
+ __field(int, nr_lagging)
+ __field(int, nr_shortages)
+ __field(int, nr_surpluses)
+ ),
+
+ TP_fast_assign(
+ __assign_str(devname, iow_name(iow));
+ __entry->old_vrate = atomic64_read(&iow->vtime_rate);
+ __entry->new_vrate = new_vrate;
+ __entry->busy_level = iow->busy_level;
+ __entry->read_missed_ppm = (*missed_ppm)[READ];
+ __entry->write_missed_ppm = (*missed_ppm)[WRITE];
+ __entry->rq_wait_pct = rq_wait_pct;
+ __entry->nr_lagging = nr_lagging;
+ __entry->nr_shortages = nr_shortages;
+ __entry->nr_surpluses = nr_surpluses;
+ ),
+
+ TP_printk("[%s] vrate=%llu->%llu busy=%d missed_ppm=%u:%u rq_wait_pct=%u lagging=%d shortages=%d surpluses=%d",
+ __get_str(devname), __entry->old_vrate, __entry->new_vrate,
+ __entry->busy_level,
+ __entry->read_missed_ppm, __entry->write_missed_ppm,
+ __entry->rq_wait_pct, __entry->nr_lagging, __entry->nr_shortages,
+ __entry->nr_surpluses
+ )
+);
+
+#endif /* _TRACE_BLK_IOWEIGHT_H */
+
+/* This part must be outside protection */
+#include <trace/define_trace.h>
--
2.17.1

2019-06-14 01:57:49

by Tejun Heo

Subject: [PATCH 10/10] blkcg: implement BPF_PROG_TYPE_IO_COST

Currently, blkcg implements one builtin IO cost model - linear. To
allow customization and experimentation, allow a bpf program to
override the IO cost model.

Signed-off-by: Tejun Heo <[email protected]>
---
block/Kconfig | 3 +
block/blk-ioweight.c | 148 +++++++++++++++++-
block/blk.h | 8 +
block/ioctl.c | 4 +
include/linux/bpf_types.h | 3 +
include/uapi/linux/bpf.h | 11 ++
include/uapi/linux/fs.h | 2 +
tools/bpf/bpftool/feature.c | 3 +
tools/bpf/bpftool/main.h | 1 +
tools/include/uapi/linux/bpf.h | 11 ++
tools/include/uapi/linux/fs.h | 2 +
tools/lib/bpf/libbpf.c | 2 +
tools/lib/bpf/libbpf_probes.c | 1 +
tools/testing/selftests/bpf/Makefile | 2 +-
tools/testing/selftests/bpf/iocost_ctrl.c | 43 +++++
.../selftests/bpf/progs/iocost_linear_prog.c | 52 ++++++
16 files changed, 287 insertions(+), 9 deletions(-)
create mode 100644 tools/testing/selftests/bpf/iocost_ctrl.c
create mode 100644 tools/testing/selftests/bpf/progs/iocost_linear_prog.c

diff --git a/block/Kconfig b/block/Kconfig
index 15b3de28a264..2882fdd573ca 100644
--- a/block/Kconfig
+++ b/block/Kconfig
@@ -204,4 +204,7 @@ config BLK_MQ_RDMA
config BLK_PM
def_bool BLOCK && PM

+config BLK_BPF_IO_COST
+ def_bool BLK_CGROUP_IOWEIGHT && BPF_SYSCALL
+
source "block/Kconfig.iosched"
diff --git a/block/blk-ioweight.c b/block/blk-ioweight.c
index 3d9fc1a631be..de4fc57bb77c 100644
--- a/block/blk-ioweight.c
+++ b/block/blk-ioweight.c
@@ -43,6 +43,10 @@
* parameters can be configured from userspace via
* /sys/block/DEV/queue/io_cost_model.
*
+ * For experimentation and refinement, the IO cost model can also be replaced
+ * by an IO_COST bpf program. Take a look at progs/iocost_linear_prog.c and
+ * iocost_ctrl.c under tools/testing/selftests/bpf for an example of use.
+ *
* 2. Control Strategy
*
* The device virtual time (vtime) is used as the primary control metric.
@@ -176,6 +180,7 @@
#include <linux/parser.h>
#include <linux/sched/signal.h>
#include <linux/blk-cgroup.h>
+#include <linux/filter.h>
#include "blk-rq-qos.h"
#include "blk-stat.h"
#include "blk-wbt.h"
@@ -387,6 +392,10 @@ struct iow {
bool enabled;

struct iow_params params;
+#ifdef CONFIG_BPF_SYSCALL
+ /* if non-NULL, bpf cost model is being used */
+ struct bpf_prog __rcu *cost_prog;
+#endif
u32 period_us;
u32 margin_us;
u64 vrate_min;
@@ -1565,6 +1574,45 @@ static void iow_timer_fn(struct timer_list *timer)
spin_unlock_irq(&iow->lock);
}

+#ifdef CONFIG_BLK_BPF_IO_COST
+static bool calc_vtime_cost_bpf(struct bio *bio, struct iow_gq *iowg,
+ bool is_merge, u64 *costp)
+{
+ struct iow *iow = iowg->iow;
+ struct bpf_prog *prog;
+ bool ret = false;
+
+ if (!iow->cost_prog)
+ return ret;
+
+ rcu_read_lock();
+ prog = rcu_dereference(iow->cost_prog);
+ if (prog) {
+ struct bpf_io_cost ctx = {
+ .cost = 0,
+ .opf = bio->bi_opf,
+ .nr_sectors = bio_sectors(bio),
+ .sector = bio->bi_iter.bi_sector,
+ .last_sector = iowg->cursor,
+ .is_merge = is_merge,
+ };
+
+ BPF_PROG_RUN(prog, &ctx);
+ *costp = ctx.cost;
+ ret = true;
+ }
+ rcu_read_unlock();
+
+ return ret;
+}
+#else
+static bool calc_vtime_cost_bpf(struct bio *bio, struct iow_gq *iowg,
+ bool is_merge, u64 *costp)
+{
+ return false;
+}
+#endif
+
static void calc_vtime_cost_builtin(struct bio *bio, struct iow_gq *iowg,
bool is_merge, u64 *costp)
{
@@ -1610,6 +1658,9 @@ static u64 calc_vtime_cost(struct bio *bio, struct iow_gq *iowg, bool is_merge)
{
u64 cost;

+ if (calc_vtime_cost_bpf(bio, iowg, is_merge, &cost))
+ return cost;
+
calc_vtime_cost_builtin(bio, iowg, is_merge, &cost);
return cost;
}
@@ -2214,14 +2265,17 @@ static u64 iow_cost_model_prfill(struct seq_file *sf,
if (!dname)
return 0;

- seq_printf(sf, "%s ctrl=%s model=linear "
- "rbps=%llu rseqiops=%llu rrandiops=%llu "
- "wbps=%llu wseqiops=%llu wrandiops=%llu\n",
- dname, iow->user_cost_model ? "user" : "auto",
- u[I_LCOEF_RBPS],
- u[I_LCOEF_RSEQIOPS], u[I_LCOEF_RRANDIOPS],
- u[I_LCOEF_WBPS],
- u[I_LCOEF_WSEQIOPS], u[I_LCOEF_WRANDIOPS]);
+ if (iow->cost_prog)
+ seq_printf(sf, "%s ctrl=bpf\n", dname);
+ else
+ seq_printf(sf, "%s ctrl=%s model=linear "
+ "rbps=%llu rseqiops=%llu rrandiops=%llu "
+ "wbps=%llu wseqiops=%llu wrandiops=%llu\n",
+ dname, iow->user_cost_model ? "user" : "auto",
+ u[I_LCOEF_RBPS],
+ u[I_LCOEF_RSEQIOPS], u[I_LCOEF_RRANDIOPS],
+ u[I_LCOEF_WBPS],
+ u[I_LCOEF_WSEQIOPS], u[I_LCOEF_WRANDIOPS]);
return 0;
}

@@ -2363,6 +2417,84 @@ static struct blkcg_policy blkcg_policy_iow = {
.pd_free_fn = iow_pd_free,
};

+#ifdef CONFIG_BLK_BPF_IO_COST
+static bool io_cost_is_valid_access(int off, int size,
+ enum bpf_access_type type,
+ const struct bpf_prog *prog,
+ struct bpf_insn_access_aux *info)
+{
+ if (off < 0 || off >= sizeof(struct bpf_io_cost) || off % size)
+ return false;
+
+ if (off != offsetof(struct bpf_io_cost, cost) && type != BPF_READ)
+ return false;
+
+ switch (off) {
+ case bpf_ctx_range(struct bpf_io_cost, opf):
+ bpf_ctx_record_field_size(info, sizeof(__u32));
+ return bpf_ctx_narrow_access_ok(off, size, sizeof(__u32));
+ case offsetof(struct bpf_io_cost, nr_sectors):
+ return size == sizeof(__u32);
+ case offsetof(struct bpf_io_cost, cost):
+ case offsetof(struct bpf_io_cost, sector):
+ case offsetof(struct bpf_io_cost, last_sector):
+ return size == sizeof(__u64);
+ case offsetof(struct bpf_io_cost, is_merge):
+ return size == sizeof(__u8);
+ }
+
+ return false;
+}
+
+const struct bpf_prog_ops io_cost_prog_ops = {
+};
+
+const struct bpf_verifier_ops io_cost_verifier_ops = {
+ .is_valid_access = io_cost_is_valid_access,
+};
+
+int blk_bpf_io_cost_ioctl(struct block_device *bdev, unsigned cmd,
+ char __user *arg)
+{
+ int prog_fd = (int)(long)arg;
+ struct bpf_prog *prog = NULL;
+ struct request_queue *q;
+ struct iow *iow;
+ int ret = 0;
+
+ q = bdev_get_queue(bdev);
+ if (!q)
+ return -ENXIO;
+ iow = q_to_iow(q);
+
+ if (prog_fd >= 0) {
+ prog = bpf_prog_get_type(prog_fd, BPF_PROG_TYPE_IO_COST);
+ if (IS_ERR(prog))
+ return PTR_ERR(prog);
+
+ spin_lock_irq(&iow->lock);
+ if (!iow->cost_prog) {
+ rcu_assign_pointer(iow->cost_prog, prog);
+ prog = NULL;
+ } else {
+ ret = -EEXIST;
+ }
+ spin_unlock_irq(&iow->lock);
+ } else {
+ spin_lock_irq(&iow->lock);
+ if (iow->cost_prog) {
+ prog = iow->cost_prog;
+ rcu_assign_pointer(iow->cost_prog, NULL);
+ }
+ spin_unlock_irq(&iow->lock);
+ }
+
+ if (prog)
+ bpf_prog_put(prog);
+ return ret;
+}
+#endif /* CONFIG_BLK_BPF_IO_COST */
+
static int __init iow_init(void)
{
return blkcg_policy_register(&blkcg_policy_iow);
diff --git a/block/blk.h b/block/blk.h
index 7814aa207153..98fa2283534f 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -317,6 +317,14 @@ static inline void blk_queue_bounce(struct request_queue *q, struct bio **bio)
}
#endif /* CONFIG_BOUNCE */

+#ifdef CONFIG_BLK_BPF_IO_COST
+int blk_bpf_io_cost_ioctl(struct block_device *bdev, unsigned cmd,
+ char __user *arg);
+#else
+static inline int blk_bpf_io_cost_ioctl(struct block_device *bdev, unsigned cmd,
+ char __user *arg) { return -ENOTTY; }
+#endif
+
#ifdef CONFIG_BLK_CGROUP_IOLATENCY
extern int blk_iolatency_init(struct request_queue *q);
#else
diff --git a/block/ioctl.c b/block/ioctl.c
index 15a0eb80ada9..89d48d7dea0f 100644
--- a/block/ioctl.c
+++ b/block/ioctl.c
@@ -11,6 +11,8 @@
#include <linux/pr.h>
#include <linux/uaccess.h>

+#include "blk.h"
+
static int blkpg_ioctl(struct block_device *bdev, struct blkpg_ioctl_arg __user *arg)
{
struct block_device *bdevp;
@@ -590,6 +592,8 @@ int blkdev_ioctl(struct block_device *bdev, fmode_t mode, unsigned cmd,
case BLKTRACESETUP:
case BLKTRACETEARDOWN:
return blk_trace_ioctl(bdev, cmd, argp);
+ case BLKBPFIOCOST:
+ return blk_bpf_io_cost_ioctl(bdev, cmd, argp);
case IOC_PR_REGISTER:
return blkdev_pr_register(bdev, argp);
case IOC_PR_RESERVE:
diff --git a/include/linux/bpf_types.h b/include/linux/bpf_types.h
index 5a9975678d6f..fb0a91c655c2 100644
--- a/include/linux/bpf_types.h
+++ b/include/linux/bpf_types.h
@@ -37,6 +37,9 @@ BPF_PROG_TYPE(BPF_PROG_TYPE_LIRC_MODE2, lirc_mode2)
#ifdef CONFIG_INET
BPF_PROG_TYPE(BPF_PROG_TYPE_SK_REUSEPORT, sk_reuseport)
#endif
+#ifdef CONFIG_BLK_BPF_IO_COST
+BPF_PROG_TYPE(BPF_PROG_TYPE_IO_COST, io_cost)
+#endif

BPF_MAP_TYPE(BPF_MAP_TYPE_ARRAY, array_map_ops)
BPF_MAP_TYPE(BPF_MAP_TYPE_PERCPU_ARRAY, percpu_array_map_ops)
diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index 63e0cf66f01a..1664ef4ccc79 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -170,6 +170,7 @@ enum bpf_prog_type {
BPF_PROG_TYPE_FLOW_DISSECTOR,
BPF_PROG_TYPE_CGROUP_SYSCTL,
BPF_PROG_TYPE_RAW_TRACEPOINT_WRITABLE,
+ BPF_PROG_TYPE_IO_COST,
};

enum bpf_attach_type {
@@ -3472,6 +3473,16 @@ struct bpf_flow_keys {
};
};

+struct bpf_io_cost {
+ __u64 cost; /* output */
+
+ __u32 opf;
+ __u32 nr_sectors;
+ __u64 sector;
+ __u64 last_sector;
+ __u8 is_merge;
+};
+
struct bpf_func_info {
__u32 insn_off;
__u32 type_id;
diff --git a/include/uapi/linux/fs.h b/include/uapi/linux/fs.h
index 59c71fa8c553..ddf3c80c9407 100644
--- a/include/uapi/linux/fs.h
+++ b/include/uapi/linux/fs.h
@@ -181,6 +181,8 @@ struct fsxattr {
#define BLKSECDISCARD _IO(0x12,125)
#define BLKROTATIONAL _IO(0x12,126)
#define BLKZEROOUT _IO(0x12,127)
+#define BLKBPFIOCOST _IO(0x12, 128)
+
/*
* A jump here: 130-131 are reserved for zoned block devices
* (see uapi/linux/blkzoned.h)
diff --git a/tools/bpf/bpftool/feature.c b/tools/bpf/bpftool/feature.c
index d672d9086fff..beeac8ac48f3 100644
--- a/tools/bpf/bpftool/feature.c
+++ b/tools/bpf/bpftool/feature.c
@@ -383,6 +383,9 @@ static void probe_kernel_image_config(void)
/* bpftilter module with "user mode helper" */
"CONFIG_BPFILTER_UMH",

+ /* Block */
+ "CONFIG_BLK_BPF_IO_COST",
+
/* test_bpf module for BPF tests */
"CONFIG_TEST_BPF",
};
diff --git a/tools/bpf/bpftool/main.h b/tools/bpf/bpftool/main.h
index 3d63feb7f852..298e53f35573 100644
--- a/tools/bpf/bpftool/main.h
+++ b/tools/bpf/bpftool/main.h
@@ -74,6 +74,7 @@ static const char * const prog_type_name[] = {
[BPF_PROG_TYPE_SK_REUSEPORT] = "sk_reuseport",
[BPF_PROG_TYPE_FLOW_DISSECTOR] = "flow_dissector",
[BPF_PROG_TYPE_CGROUP_SYSCTL] = "cgroup_sysctl",
+ [BPF_PROG_TYPE_IO_COST] = "io_cost",
};

extern const char * const map_type_name[];
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index 63e0cf66f01a..1664ef4ccc79 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -170,6 +170,7 @@ enum bpf_prog_type {
BPF_PROG_TYPE_FLOW_DISSECTOR,
BPF_PROG_TYPE_CGROUP_SYSCTL,
BPF_PROG_TYPE_RAW_TRACEPOINT_WRITABLE,
+ BPF_PROG_TYPE_IO_COST,
};

enum bpf_attach_type {
@@ -3472,6 +3473,16 @@ struct bpf_flow_keys {
};
};

+struct bpf_io_cost {
+ __u64 cost; /* output */
+
+ __u32 opf;
+ __u32 nr_sectors;
+ __u64 sector;
+ __u64 last_sector;
+ __u8 is_merge;
+};
+
struct bpf_func_info {
__u32 insn_off;
__u32 type_id;
diff --git a/tools/include/uapi/linux/fs.h b/tools/include/uapi/linux/fs.h
index 59c71fa8c553..ddf3c80c9407 100644
--- a/tools/include/uapi/linux/fs.h
+++ b/tools/include/uapi/linux/fs.h
@@ -181,6 +181,8 @@ struct fsxattr {
#define BLKSECDISCARD _IO(0x12,125)
#define BLKROTATIONAL _IO(0x12,126)
#define BLKZEROOUT _IO(0x12,127)
+#define BLKBPFIOCOST _IO(0x12, 128)
+
/*
* A jump here: 130-131 are reserved for zoned block devices
* (see uapi/linux/blkzoned.h)
diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
index 197b574406b3..6dbee409f3b0 100644
--- a/tools/lib/bpf/libbpf.c
+++ b/tools/lib/bpf/libbpf.c
@@ -2266,6 +2266,7 @@ static bool bpf_prog_type__needs_kver(enum bpf_prog_type type)
case BPF_PROG_TYPE_RAW_TRACEPOINT_WRITABLE:
case BPF_PROG_TYPE_PERF_EVENT:
case BPF_PROG_TYPE_CGROUP_SYSCTL:
+ case BPF_PROG_TYPE_IO_COST:
return false;
case BPF_PROG_TYPE_KPROBE:
default:
@@ -3168,6 +3169,7 @@ static const struct {
BPF_PROG_SEC("lwt_out", BPF_PROG_TYPE_LWT_OUT),
BPF_PROG_SEC("lwt_xmit", BPF_PROG_TYPE_LWT_XMIT),
BPF_PROG_SEC("lwt_seg6local", BPF_PROG_TYPE_LWT_SEG6LOCAL),
+ BPF_PROG_SEC("io_cost", BPF_PROG_TYPE_IO_COST),
BPF_APROG_SEC("cgroup_skb/ingress", BPF_PROG_TYPE_CGROUP_SKB,
BPF_CGROUP_INET_INGRESS),
BPF_APROG_SEC("cgroup_skb/egress", BPF_PROG_TYPE_CGROUP_SKB,
diff --git a/tools/lib/bpf/libbpf_probes.c b/tools/lib/bpf/libbpf_probes.c
index 5e2aa83f637a..024831756151 100644
--- a/tools/lib/bpf/libbpf_probes.c
+++ b/tools/lib/bpf/libbpf_probes.c
@@ -101,6 +101,7 @@ probe_load(enum bpf_prog_type prog_type, const struct bpf_insn *insns,
case BPF_PROG_TYPE_SK_REUSEPORT:
case BPF_PROG_TYPE_FLOW_DISSECTOR:
case BPF_PROG_TYPE_CGROUP_SYSCTL:
+ case BPF_PROG_TYPE_IO_COST:
default:
break;
}
diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile
index 66f2dca1dee1..c28f308c9575 100644
--- a/tools/testing/selftests/bpf/Makefile
+++ b/tools/testing/selftests/bpf/Makefile
@@ -23,7 +23,7 @@ TEST_GEN_PROGS = test_verifier test_tag test_maps test_lru_map test_lpm_map test
test_align test_verifier_log test_dev_cgroup test_tcpbpf_user \
test_sock test_btf test_sockmap test_lirc_mode2_user get_cgroup_id_user \
test_socket_cookie test_cgroup_storage test_select_reuseport test_section_names \
- test_netcnt test_tcpnotify_user test_sock_fields test_sysctl
+ test_netcnt test_tcpnotify_user test_sock_fields test_sysctl iocost_ctrl

BPF_OBJ_FILES = $(patsubst %.c,%.o, $(notdir $(wildcard progs/*.c)))
TEST_GEN_FILES = $(BPF_OBJ_FILES)
diff --git a/tools/testing/selftests/bpf/iocost_ctrl.c b/tools/testing/selftests/bpf/iocost_ctrl.c
new file mode 100644
index 000000000000..d9d3eb70d0ac
--- /dev/null
+++ b/tools/testing/selftests/bpf/iocost_ctrl.c
@@ -0,0 +1,43 @@
+#include <stdio.h>
+#include <errno.h>
+#include <sys/types.h>
+#include <sys/stat.h>
+#include <sys/ioctl.h>
+#include <fcntl.h>
+
+#include <linux/bpf.h>
+#include <bpf/bpf.h>
+#include <bpf/libbpf.h>
+
+#include <linux/fs.h>
+
+int main(int argc, char **argv)
+{
+ struct bpf_object *obj;
+ int dev_fd, prog_fd = -1;
+
+ if (argc < 2) {
+ fprintf(stderr, "Usage: iocost-attach BLKDEV [BPF_PROG]\n");
+ return 1;
+ }
+
+ dev_fd = open(argv[1], O_RDONLY);
+ if (dev_fd < 0) {
+ perror("open(BLKDEV)");
+ return 1;
+ }
+
+ if (argc > 2) {
+ if (bpf_prog_load(argv[2], BPF_PROG_TYPE_IO_COST,
+ &obj, &prog_fd)) {
+ perror("bpf_prog_load(BPF_PROG)");
+ return 1;
+ }
+ }
+
+ if (ioctl(dev_fd, BLKBPFIOCOST, (long)prog_fd)) {
+ perror("ioctl(BLKBPFIOCOST)");
+ return 1;
+ }
+ return 0;
+}
diff --git a/tools/testing/selftests/bpf/progs/iocost_linear_prog.c b/tools/testing/selftests/bpf/progs/iocost_linear_prog.c
new file mode 100644
index 000000000000..4e202c595658
--- /dev/null
+++ b/tools/testing/selftests/bpf/progs/iocost_linear_prog.c
@@ -0,0 +1,52 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include <linux/version.h>
+#include <linux/bpf.h>
+#include "bpf_helpers.h"
+
+#define REQ_OP_READ 0
+#define REQ_OP_WRITE 1
+#define REQ_OP_BITS 8
+#define REQ_OP_MASK ((1 << REQ_OP_BITS) - 1)
+
+#define LCOEF_RSEQIO 14663889
+#define LCOEF_RRANDIO 248752010
+#define LCOEF_RPAGE 28151808
+#define LCOEF_WSEQIO 32671670
+#define LCOEF_WRANDIO 63150006
+#define LCOEF_WPAGE 7323648
+
+#define RAND_IO_CUTOFF 10
+
+SEC("io_cost")
+int func(struct bpf_io_cost *ctx)
+{
+ int op;
+ __u64 seqio, randio, page;
+ __s64 delta;
+
+ switch (ctx->opf & REQ_OP_MASK) {
+ case REQ_OP_READ:
+ seqio = LCOEF_RSEQIO;
+ randio = LCOEF_RRANDIO;
+ page = LCOEF_RPAGE;
+ break;
+ case REQ_OP_WRITE:
+ seqio = LCOEF_WSEQIO;
+ randio = LCOEF_WRANDIO;
+ page = LCOEF_WPAGE;
+ break;
+ default:
+ return 0;
+ }
+
+ delta = ctx->sector - ctx->last_sector;
+ if (delta >= -RAND_IO_CUTOFF && delta <= RAND_IO_CUTOFF)
+ ctx->cost += seqio;
+ else
+ ctx->cost += randio;
+ if (!ctx->is_merge)
+ ctx->cost += page * (ctx->nr_sectors >> 3);
+
+ return 0;
+}
--
2.17.1

2019-06-14 01:59:20

by Tejun Heo

Subject: [PATCH 07/10] blk-mq: add optional request->pre_start_time_ns

There are currently two start time timestamps - start_time_ns and
io_start_time_ns. The former marks the request allocation and the
latter the issue-to-device time. The planned io.weight controller
needs to measure the total time bios take to execute after they leave
rq_qos, including the time spent waiting for a request to become
available, which can easily dominate on saturated devices.

This patch adds request->pre_start_time_ns which records when the
request allocation attempt started. As it isn't used for the usual
stats, make it optional behind QUEUE_FLAG_REC_PRESTART.

Signed-off-by: Tejun Heo <[email protected]>
---
block/blk-mq.c | 11 +++++++++--
include/linux/blkdev.h | 7 ++++++-
2 files changed, 15 insertions(+), 3 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index ce0f5f4ede70..25ce27434c63 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -291,7 +291,7 @@ static inline bool blk_mq_need_time_stamp(struct request *rq)
}

static struct request *blk_mq_rq_ctx_init(struct blk_mq_alloc_data *data,
- unsigned int tag, unsigned int op)
+ unsigned int tag, unsigned int op, u64 pre_start_time_ns)
{
struct blk_mq_tags *tags = blk_mq_tags_from_data(data);
struct request *rq = tags->static_rqs[tag];
@@ -325,6 +325,7 @@ static struct request *blk_mq_rq_ctx_init(struct blk_mq_alloc_data *data,
RB_CLEAR_NODE(&rq->rb_node);
rq->rq_disk = NULL;
rq->part = NULL;
+ rq->pre_start_time_ns = pre_start_time_ns;
if (blk_mq_need_time_stamp(rq))
rq->start_time_ns = ktime_get_ns();
else
@@ -356,8 +357,14 @@ static struct request *blk_mq_get_request(struct request_queue *q,
struct request *rq;
unsigned int tag;
bool put_ctx_on_error = false;
+ u64 pre_start_time_ns = 0;

blk_queue_enter_live(q);
+
+ /* pre_start_time includes depth and tag waits */
+ if (blk_queue_rec_prestart(q))
+ pre_start_time_ns = ktime_get_ns();
+
data->q = q;
if (likely(!data->ctx)) {
data->ctx = blk_mq_get_ctx(q);
@@ -395,7 +402,7 @@ static struct request *blk_mq_get_request(struct request_queue *q,
return NULL;
}

- rq = blk_mq_rq_ctx_init(data, tag, data->cmd_flags);
+ rq = blk_mq_rq_ctx_init(data, tag, data->cmd_flags, pre_start_time_ns);
if (!op_is_flush(data->cmd_flags)) {
rq->elv.icq = NULL;
if (e && e->type->ops.prepare_request) {
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 592669bcc536..ff72eb940d4c 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -194,7 +194,9 @@ struct request {

struct gendisk *rq_disk;
struct hd_struct *part;
- /* Time that I/O was submitted to the kernel. */
+ /* Time that the first bio started allocating this request. */
+ u64 pre_start_time_ns;
+ /* Time that this request was allocated for this IO. */
u64 start_time_ns;
/* Time that I/O was submitted to the device. */
u64 io_start_time_ns;
@@ -606,6 +608,7 @@ struct request_queue {
#define QUEUE_FLAG_SCSI_PASSTHROUGH 23 /* queue supports SCSI commands */
#define QUEUE_FLAG_QUIESCED 24 /* queue has been quiesced */
#define QUEUE_FLAG_PCI_P2PDMA 25 /* device supports PCI p2p requests */
+#define QUEUE_FLAG_REC_PRESTART 26 /* record pre_start_time_ns */

#define QUEUE_FLAG_MQ_DEFAULT ((1 << QUEUE_FLAG_IO_STAT) | \
(1 << QUEUE_FLAG_SAME_COMP))
@@ -632,6 +635,8 @@ bool blk_queue_flag_test_and_set(unsigned int flag, struct request_queue *q);
test_bit(QUEUE_FLAG_SCSI_PASSTHROUGH, &(q)->queue_flags)
#define blk_queue_pci_p2pdma(q) \
test_bit(QUEUE_FLAG_PCI_P2PDMA, &(q)->queue_flags)
+#define blk_queue_rec_prestart(q) \
+ test_bit(QUEUE_FLAG_REC_PRESTART, &(q)->queue_flags)

#define blk_noretry_request(rq) \
((rq)->cmd_flags & (REQ_FAILFAST_DEV|REQ_FAILFAST_TRANSPORT| \
--
2.17.1

2019-06-14 01:59:28

by Tejun Heo

Subject: [PATCH 04/10] block/rq_qos: add rq_qos_merge()

Add a merge hook for rq_qos. This will be used by io.weight.

Signed-off-by: Tejun Heo <[email protected]>
---
block/blk-core.c | 4 ++++
block/blk-rq-qos.c | 9 +++++++++
block/blk-rq-qos.h | 9 +++++++++
3 files changed, 22 insertions(+)

diff --git a/block/blk-core.c b/block/blk-core.c
index aad32071fa67..1dcf679c4b44 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -559,6 +559,7 @@ bool bio_attempt_back_merge(struct request_queue *q, struct request *req,
return false;

trace_block_bio_backmerge(q, req, bio);
+ rq_qos_merge(q, req, bio);

if ((req->cmd_flags & REQ_FAILFAST_MASK) != ff)
blk_rq_set_mixed_merge(req);
@@ -580,6 +581,7 @@ bool bio_attempt_front_merge(struct request_queue *q, struct request *req,
return false;

trace_block_bio_frontmerge(q, req, bio);
+ rq_qos_merge(q, req, bio);

if ((req->cmd_flags & REQ_FAILFAST_MASK) != ff)
blk_rq_set_mixed_merge(req);
@@ -605,6 +607,8 @@ bool bio_attempt_discard_merge(struct request_queue *q, struct request *req,
blk_rq_get_max_sectors(req, blk_rq_pos(req)))
goto no_merge;

+ rq_qos_merge(q, req, bio);
+
req->biotail->bi_next = bio;
req->biotail = bio;
req->__data_len += bio->bi_iter.bi_size;
diff --git a/block/blk-rq-qos.c b/block/blk-rq-qos.c
index 659ccb8b693f..7debcaf1ee53 100644
--- a/block/blk-rq-qos.c
+++ b/block/blk-rq-qos.c
@@ -83,6 +83,15 @@ void __rq_qos_track(struct rq_qos *rqos, struct request *rq, struct bio *bio)
} while (rqos);
}

+void __rq_qos_merge(struct rq_qos *rqos, struct request *rq, struct bio *bio)
+{
+ do {
+ if (rqos->ops->merge)
+ rqos->ops->merge(rqos, rq, bio);
+ rqos = rqos->next;
+ } while (rqos);
+}
+
void __rq_qos_done_bio(struct rq_qos *rqos, struct bio *bio)
{
do {
diff --git a/block/blk-rq-qos.h b/block/blk-rq-qos.h
index 2300e038b9fa..8e426a8505b6 100644
--- a/block/blk-rq-qos.h
+++ b/block/blk-rq-qos.h
@@ -35,6 +35,7 @@ struct rq_qos {
struct rq_qos_ops {
void (*throttle)(struct rq_qos *, struct bio *);
void (*track)(struct rq_qos *, struct request *, struct bio *);
+ void (*merge)(struct rq_qos *, struct request *, struct bio *);
void (*issue)(struct rq_qos *, struct request *);
void (*requeue)(struct rq_qos *, struct request *);
void (*done)(struct rq_qos *, struct request *);
@@ -135,6 +136,7 @@ void __rq_qos_issue(struct rq_qos *rqos, struct request *rq);
void __rq_qos_requeue(struct rq_qos *rqos, struct request *rq);
void __rq_qos_throttle(struct rq_qos *rqos, struct bio *bio);
void __rq_qos_track(struct rq_qos *rqos, struct request *rq, struct bio *bio);
+void __rq_qos_merge(struct rq_qos *rqos, struct request *rq, struct bio *bio);
void __rq_qos_done_bio(struct rq_qos *rqos, struct bio *bio);

static inline void rq_qos_cleanup(struct request_queue *q, struct bio *bio)
@@ -185,6 +187,13 @@ static inline void rq_qos_track(struct request_queue *q, struct request *rq,
__rq_qos_track(q->rq_qos, rq, bio);
}

+static inline void rq_qos_merge(struct request_queue *q, struct request *rq,
+ struct bio *bio)
+{
+ if (q->rq_qos)
+ __rq_qos_merge(q->rq_qos, rq, bio);
+}
+
void rq_qos_exit(struct request_queue *);

#endif
--
2.17.1

2019-06-14 01:59:28

by Tejun Heo

[permalink] [raw]
Subject: [PATCH 03/10] blkcg: separate blkcg_conf_get_disk() out of blkg_conf_prep()

Separate out blkcg_conf_get_disk() so that it can be used by blkcg
policy interface file input parsers before the policy is actually
enabled. This doesn't introduce any functional changes.

Signed-off-by: Tejun Heo <[email protected]>
---
block/blk-cgroup.c | 62 ++++++++++++++++++++++++++------------
include/linux/blk-cgroup.h | 1 +
2 files changed, 44 insertions(+), 19 deletions(-)

diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
index 60ad9b96e6eb..b66ee908db7c 100644
--- a/block/blk-cgroup.c
+++ b/block/blk-cgroup.c
@@ -809,6 +809,44 @@ static struct blkcg_gq *blkg_lookup_check(struct blkcg *blkcg,
return __blkg_lookup(blkcg, q, true /* update_hint */);
}

+/**
+ * blkcg_conf_get_disk - parse the device node prefix and get the matching gendisk
+ * @inputp: input string pointer
+ *
+ * Parse the device node prefix part, MAJ:MIN, of per-blkg config update
+ * from @input and get and return the matching gendisk. *@inputp is
+ * updated to point past the device node prefix. Returns an ERR_PTR()
+ * value on error.
+ *
+ * Use this function iff blkg_conf_prep() can't be used for some reason.
+ */
+struct gendisk *blkcg_conf_get_disk(char **inputp)
+{
+ char *input = *inputp;
+ unsigned int major, minor;
+ struct gendisk *disk;
+ int key_len, part;
+
+ if (sscanf(input, "%u:%u%n", &major, &minor, &key_len) != 2)
+ return ERR_PTR(-EINVAL);
+
+ input += key_len;
+ if (!isspace(*input))
+ return ERR_PTR(-EINVAL);
+ input = skip_spaces(input);
+
+ disk = get_gendisk(MKDEV(major, minor), &part);
+ if (!disk)
+ return ERR_PTR(-ENODEV);
+ if (part) {
+ put_disk_and_module(disk);
+ return ERR_PTR(-ENODEV);
+ }
+
+ *inputp = input;
+ return disk;
+}
+
/**
* blkg_conf_prep - parse and prepare for per-blkg config update
* @blkcg: target block cgroup
@@ -828,25 +866,11 @@ int blkg_conf_prep(struct blkcg *blkcg, const struct blkcg_policy *pol,
struct gendisk *disk;
struct request_queue *q;
struct blkcg_gq *blkg;
- unsigned int major, minor;
- int key_len, part, ret;
- char *body;
-
- if (sscanf(input, "%u:%u%n", &major, &minor, &key_len) != 2)
- return -EINVAL;
-
- body = input + key_len;
- if (!isspace(*body))
- return -EINVAL;
- body = skip_spaces(body);
+ int ret;

- disk = get_gendisk(MKDEV(major, minor), &part);
- if (!disk)
- return -ENODEV;
- if (part) {
- ret = -ENODEV;
- goto fail;
- }
+ disk = blkcg_conf_get_disk(&input);
+ if (IS_ERR(disk))
+ return PTR_ERR(disk);

q = disk->queue;

@@ -912,7 +936,7 @@ int blkg_conf_prep(struct blkcg *blkcg, const struct blkcg_policy *pol,
success:
ctx->disk = disk;
ctx->blkg = blkg;
- ctx->body = body;
+ ctx->body = input;
return 0;

fail_unlock:
diff --git a/include/linux/blk-cgroup.h b/include/linux/blk-cgroup.h
index 1ed27977f88f..674c482ec689 100644
--- a/include/linux/blk-cgroup.h
+++ b/include/linux/blk-cgroup.h
@@ -231,6 +231,7 @@ struct blkg_conf_ctx {
char *body;
};

+struct gendisk *blkcg_conf_get_disk(char **inputp);
int blkg_conf_prep(struct blkcg *blkcg, const struct blkcg_policy *pol,
char *input, struct blkg_conf_ctx *ctx);
void blkg_conf_finish(struct blkg_conf_ctx *ctx);
--
2.17.1

2019-06-14 11:32:39

by Quentin Monnet

[permalink] [raw]
Subject: Re: [PATCH 10/10] blkcg: implement BPF_PROG_TYPE_IO_COST

2019-06-13 18:56 UTC-0700 ~ Tejun Heo <[email protected]>
> Currently, blkcg implements one builtin IO cost model - linear. To
> allow customization and experimentation, allow a bpf program to
> override the IO cost model.
>
> Signed-off-by: Tejun Heo <[email protected]>
> ---

[...]

> diff --git a/tools/bpf/bpftool/feature.c b/tools/bpf/bpftool/feature.c
> index d672d9086fff..beeac8ac48f3 100644
> --- a/tools/bpf/bpftool/feature.c
> +++ b/tools/bpf/bpftool/feature.c
> @@ -383,6 +383,9 @@ static void probe_kernel_image_config(void)
> /* bpfilter module with "user mode helper" */
> "CONFIG_BPFILTER_UMH",
>
> + /* Block */
> + "CONFIG_BLK_IO_COST",
> +
> /* test_bpf module for BPF tests */
> "CONFIG_TEST_BPF",
> };
> diff --git a/tools/bpf/bpftool/main.h b/tools/bpf/bpftool/main.h
> index 3d63feb7f852..298e53f35573 100644
> --- a/tools/bpf/bpftool/main.h
> +++ b/tools/bpf/bpftool/main.h
> @@ -74,6 +74,7 @@ static const char * const prog_type_name[] = {
> [BPF_PROG_TYPE_SK_REUSEPORT] = "sk_reuseport",
> [BPF_PROG_TYPE_FLOW_DISSECTOR] = "flow_dissector",
> [BPF_PROG_TYPE_CGROUP_SYSCTL] = "cgroup_sysctl",
> + [BPF_PROG_TYPE_IO_COST] = "io_cost",
> };
>
> extern const char * const map_type_name[];

Hi Tejun,

Please make sure to update the documentation and bash
completion when adding the new type to bpftool. You
probably want something like the diff below.

Thanks,
Quentin


diff --git a/tools/bpf/bpftool/Documentation/bpftool-prog.rst b/tools/bpf/bpftool/Documentation/bpftool-prog.rst
index 228a5c863cc7..0ceae71c07a8 100644
--- a/tools/bpf/bpftool/Documentation/bpftool-prog.rst
+++ b/tools/bpf/bpftool/Documentation/bpftool-prog.rst
@@ -40,7 +40,7 @@ PROG COMMANDS
| **lwt_seg6local** | **sockops** | **sk_skb** | **sk_msg** | **lirc_mode2** |
| **cgroup/bind4** | **cgroup/bind6** | **cgroup/post_bind4** | **cgroup/post_bind6** |
| **cgroup/connect4** | **cgroup/connect6** | **cgroup/sendmsg4** | **cgroup/sendmsg6** |
-| **cgroup/sysctl**
+| **cgroup/sysctl** | **io_cost**
| }
| *ATTACH_TYPE* := {
| **msg_verdict** | **stream_verdict** | **stream_parser** | **flow_dissector**
diff --git a/tools/bpf/bpftool/bash-completion/bpftool b/tools/bpf/bpftool/bash-completion/bpftool
index 2725e27dfa42..057590611e63 100644
--- a/tools/bpf/bpftool/bash-completion/bpftool
+++ b/tools/bpf/bpftool/bash-completion/bpftool
@@ -378,7 +378,7 @@ _bpftool()
cgroup/connect4 cgroup/connect6 \
cgroup/sendmsg4 cgroup/sendmsg6 \
cgroup/post_bind4 cgroup/post_bind6 \
- cgroup/sysctl" -- \
+ cgroup/sysctl io_cost" -- \
"$cur" ) )
return 0
;;
diff --git a/tools/bpf/bpftool/prog.c b/tools/bpf/bpftool/prog.c
index 1f209c80d906..6ba1d567bf17 100644
--- a/tools/bpf/bpftool/prog.c
+++ b/tools/bpf/bpftool/prog.c
@@ -1070,7 +1070,7 @@ static int do_help(int argc, char **argv)
" sk_reuseport | flow_dissector | cgroup/sysctl |\n"
" cgroup/bind4 | cgroup/bind6 | cgroup/post_bind4 |\n"
" cgroup/post_bind6 | cgroup/connect4 | cgroup/connect6 |\n"
- " cgroup/sendmsg4 | cgroup/sendmsg6 }\n"
+ " cgroup/sendmsg4 | cgroup/sendmsg6 | io_cost }\n"
" ATTACH_TYPE := { msg_verdict | stream_verdict | stream_parser |\n"
" flow_dissector }\n"
" " HELP_SPEC_OPTIONS "\n"

2019-06-14 12:18:28

by Toke Høiland-Jørgensen

[permalink] [raw]
Subject: Re: [PATCH 08/10] blkcg: implement blk-ioweight

Tejun Heo <[email protected]> writes:

> This patchset implements an IO cost model based work-conserving
> proportional controller.
>
> While io.latency provides the capability to comprehensively prioritize
> and protect IOs depending on the cgroups, its protection is binary -
> the lowest latency target cgroup which is suffering is protected at
> the cost of all others. In many use cases including stacking multiple
> workload containers in a single system, it's necessary to distribute
> IO capacity with better granularity.
>
> One challenge of controlling IO resources is the lack of trivially
> observable cost metric. The most common metrics - bandwidth and iops
> - can be off by orders of magnitude depending on the device type and
> IO pattern. However, the cost isn't a complete mystery. Given
> several key attributes, we can make fairly reliable predictions on how
> expensive a given stream of IOs would be, at least compared to other
> IO patterns.
>
> The function which determines the cost of a given IO is the IO cost
> model for the device. This controller distributes IO capacity based
> on the costs estimated by such model. The more accurate the cost
> model the better but the controller adapts based on IO completion
> latency and as long as the relative costs across different IO
> patterns are consistent and sensible, it'll adapt to the actual
> performance of the device.
>
> Currently, the only implemented cost model is a simple linear one with
> a few sets of default parameters for different classes of device.
> This covers most common devices reasonably well. All the
> infrastructure to tune and add different cost models is already in
> place and a later patch will also allow using bpf progs for cost
> models.
>
> Please see the top comment in blk-ioweight.c and documentation for
> more details.

Reading through the description here and in the comment, and with the
caveat that I am familiar with network packet scheduling but not with
the IO layer, I think your approach sounds quite reasonable; and I'm
happy to see improvements in this area!

One question: How are equal-weight cgroups scheduled relative to each
other? Or requests from different processes within a single cgroup for
that matter? FIFO? Round-robin? Something else?

Thanks,

-Toke

2019-06-14 14:53:07

by Tejun Heo

[permalink] [raw]
Subject: Re: [PATCH 10/10] blkcg: implement BPF_PROG_TYPE_IO_COST

Hello, Quentin.

On Fri, Jun 14, 2019 at 12:32:09PM +0100, Quentin Monnet wrote:
> Please make sure to update the documentation and bash
> completion when adding the new type to bpftool. You
> probably want something like the diff below.

Thank you so much. Will incorporate them. Just in case, while it's
noted in the head message, I lost the RFC marker while prepping this
patch. It isn't yet clear whether we'd really need custom cost
functions and this patch is included more as a proof of concept. If
it turns out that this is beneficial enough, the following questions need to be
answered.

* Is block ioctl the right mechanism to attach these programs?

* Are there more parameters that need to be exposed to the programs?

* It'd be great to have efficient access to per-blockdev and
per-blockdev-cgroup-pair storages available to these programs so
that they can keep track of history. What'd be the best way of
doing that considering the fact that these programs will be called
for each IO and the overhead can add up quickly?

Thanks.

--
tejun

2019-06-14 15:10:16

by Tejun Heo

[permalink] [raw]
Subject: Re: [PATCH 08/10] blkcg: implement blk-ioweight

Hello, Toke.

On Fri, Jun 14, 2019 at 02:17:45PM +0200, Toke Høiland-Jørgensen wrote:
> One question: How are equal-weight cgroups scheduled relative to each
> other? Or requests from different processes within a single cgroup for
> that matter? FIFO? Round-robin? Something else?

Once each cgroup has its hierarchical weight and current vtime for
the period, they don't talk to each other. Each is expected to do the
right thing on its own. When the period ends, the timer looks at
how the device is performing, how much each cgroup used and so on, and
then makes the necessary adjustments. So, there's no direct
cross-cgroup synchronization. Each is throttled to its target level
independently.

Within a single cgroup, the IOs are FIFO. When an IO has enough vtime
credit, it just passes through. When it doesn't, it always waits
behind any other IOs which are already waiting.
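
The behavior described above (vtime credit gating, strict FIFO within a
cgroup, periodic replenishment) can be sketched roughly as follows.
This is an illustrative Python model only, not the kernel code; all
names (CgroupGate, budget, replenish) are made up:

```python
from collections import deque

# Illustrative sketch, not the kernel implementation: an IO passes
# through immediately when the cgroup has enough vtime credit and
# nothing is queued ahead of it; otherwise it waits FIFO behind the
# IOs already waiting.
class CgroupGate:
    def __init__(self):
        self.vtime = 0          # cost consumed so far
        self.budget = 0         # vtime credit granted for the period
        self.waiting = deque()  # FIFO queue of throttled (io, cost) pairs

    def submit(self, io, cost):
        # FIFO: never jump ahead of IOs that are already waiting
        if self.waiting or self.vtime + cost > self.budget:
            self.waiting.append((io, cost))
            return False        # throttled
        self.vtime += cost
        return True             # issued immediately

    def replenish(self, extra_budget):
        # At period end the controller grants more credit and the
        # queue drains strictly in order.
        self.budget += extra_budget
        issued = []
        while self.waiting and self.vtime + self.waiting[0][1] <= self.budget:
            io, cost = self.waiting.popleft()
            self.vtime += cost
            issued.append(io)
        return issued
```

With a budget of 10, an IO of cost 6 passes, a second cost-6 IO queues,
and a cheap IO still waits behind it until the next replenishment.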

Thanks.

--
tejun

2019-06-14 16:38:36

by Alexei Starovoitov

[permalink] [raw]
Subject: Re: [PATCH 10/10] blkcg: implement BPF_PROG_TYPE_IO_COST

On 6/14/19 7:52 AM, Tejun Heo wrote:
> Hello, Quentin.
>
> On Fri, Jun 14, 2019 at 12:32:09PM +0100, Quentin Monnet wrote:
>> Please make sure to update the documentation and bash
>> completion when adding the new type to bpftool. You
>> probably want something like the diff below.
>
> Thank you so much. Will incorporate them. Just in case, while it's
> noted in the head message, I lost the RFC marker while prepping this
> patch. It isn't yet clear whether we'd really need custom cost
> functions and this patch is included more as a proof of concept.

The example bpf prog looks flexible enough to allow some degree
of experimentation. The question is what kinds of new algorithms you
envision it supporting, and what other inputs it would need to make a
decision. I think it's ok to start with what it does now and extend
further when the need arises.

> If
> it turns out that this is beneficial enough, the followings need to be
> answered.
>
> * Is block ioctl the right mechanism to attach these programs?

imo ioctl is a bit weird, but since it's only one program per block
device it's probably ok? Unless you see it being cgroup scoped in
the future? Then cgroup-bpf style hooks will be more suitable
and allow a chain of programs.

> * Are there more parameters that need to be exposed to the programs?
>
> * It'd be great to have efficient access to per-blockdev and
> per-blockdev-cgroup-pair storages available to these programs so
> that they can keep track of history. What'd be the best way of
> doing that considering the fact that these programs will be called
> for each IO and the overhead can add up quickly?

Martin's socket local storage solved that issue for sockets.
Something very similar can work for per-blockdev-per-cgroup.

2019-06-14 17:09:50

by Tejun Heo

[permalink] [raw]
Subject: Re: [PATCH 10/10] blkcg: implement BPF_PROG_TYPE_IO_COST

Hello, Alexei.

On Fri, Jun 14, 2019 at 04:35:35PM +0000, Alexei Starovoitov wrote:
> the example bpf prog looks flexible enough to allow some degree
> of experiments. The question is what kind of new algorithms you envision
> it will do? what other inputs it would need to make a decision?
> I think it's ok to start with what it does now and extend further
> when need arises.

I'm not sure right now. The linear model worked a lot better than I
originally expected and looks like it can cover most of the current
use cases. It could easily be that we just haven't seen enough
different cases yet.

At one point, a quadratic model was on the table in case the linear
model wasn't good enough. Also, one area which may need improvement
is taking the r/w mixture into consideration. Some SSDs' performance
nose-dives when r/w commands are mixed in certain proportions. Right
now, we just deal with that by adjusting the global performance ratio
(vrate) but I can imagine a model which considers the cgroup's issue
history over the past X seconds and bumps the overall cost according
to the r/w mixture.
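
The builtin linear model, plus the hypothetical r/w-mixture bump
mentioned above, can be sketched like this. The base costs and
coefficients here are invented for illustration and are not the
controller's actual default parameters:

```python
# Illustrative sketch of a linear IO cost model: a base cost depending
# on whether the IO is random or sequential, plus a size-proportional
# term. The r/w-mixture bump is the hypothetical extension discussed
# above. All parameter values are made up, not the kernel defaults.
def io_cost(is_random, size_bytes, recent_write_frac=0.0):
    base = 100 if is_random else 10       # seek-dominated base cost
    per_page = 2                          # cost per 4k of transfer
    cost = base + per_page * (size_bytes // 4096)
    # Hypothetical: penalize mixed r/w streams, since some SSDs degrade
    # sharply when reads and writes are interleaved. mix is 0 for a pure
    # read or write history and peaks at 0.5 for a 50/50 mix.
    mix = min(recent_write_frac, 1.0 - recent_write_frac)
    return cost * (1.0 + mix)
```

Under these made-up parameters, a 4k random IO costs far more than a 4k
sequential one, and a 50/50 r/w history inflates every cost by 50%.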

> > * Is block ioctl the right mechanism to attach these programs?
>
> imo ioctl is a bit weird, but since its only one program per block
> device it's probably ok? Unless you see it being cgroup scoped in
> the future? Then cgroup-bpf style hooks will be more suitable
> and allow a chain of programs.

As this is a device property, I think there should only be one program
per block device.

> > * Are there more parameters that need to be exposed to the programs?
> >
> > * It'd be great to have efficient access to per-blockdev and
> > per-blockdev-cgroup-pair storages available to these programs so
> > that they can keep track of history. What'd be the best way of
> > doing that considering the fact that these programs will be called
> > for each IO and the overhead can add up quickly?
>
> Martin's socket local storage solved that issue for sockets.
> Something very similar can work for per-blockdev-per-cgroup.

Cool, that sounds great in case we need to develop this further. Andy
had this self-learning model which didn't need any external input and
could tune itself solely based on device saturation state. If the
prog can remember state cheaply, it'd be pretty cool to experiment
with things like that in bpf.

Thanks.

--
tejun

2019-06-14 17:57:46

by Tejun Heo

[permalink] [raw]
Subject: Re: [PATCHSET block/for-next] IO cost model based work-conserving proportional controller

On Thu, Jun 13, 2019 at 06:56:10PM -0700, Tejun Heo wrote:
...
> The patchset is also available in the following git branch.
>
> git://git.kernel.org/pub/scm/linux/kernel/git/tj/cgroup.git review-iow

Updated patchset available in the following branch. Just build fixes
and cosmetic changes for now.

git://git.kernel.org/pub/scm/linux/kernel/git/tj/cgroup.git review-iow-v2

Thanks.

--
tejun

2019-06-14 20:52:50

by Toke Høiland-Jørgensen

[permalink] [raw]
Subject: Re: [PATCH 08/10] blkcg: implement blk-ioweight

Tejun Heo <[email protected]> writes:

> Hello, Toke.
>
> On Fri, Jun 14, 2019 at 02:17:45PM +0200, Toke Høiland-Jørgensen wrote:
>> One question: How are equal-weight cgroups scheduled relative to each
>> other? Or requests from different processes within a single cgroup for
>> that matter? FIFO? Round-robin? Something else?
>
> Once each cgroup got their hierarchical weight and current vtime for
> the period, they don't talk to each other. Each is expected to do the
> right thing on their own. When the period ends, the timer looks at
> how the device is performing, how much each used and so on and then
> makes the necessary adjustments. So, there's no direct cross-cgroup
> synchronization. Each is throttled to their target level
> independently.

Right, makes sense.

> Within a single cgroup, the IOs are FIFO. When an IO has enough vtime
> credit, it just passes through. When it doesn't, it always waits
> behind any other IOs which are already waiting.

OK. Is there any fundamental reason why requests from individual
processes could not be interleaved? Or does it just not give the same
benefits in an IO request context as it does for network packets?

Thanks for the explanations! :)

-Toke

2019-06-15 15:58:53

by Tejun Heo

[permalink] [raw]
Subject: Re: [PATCH 08/10] blkcg: implement blk-ioweight

Hello,

On Fri, Jun 14, 2019 at 10:50:34PM +0200, Toke Høiland-Jørgensen wrote:
> > Within a single cgroup, the IOs are FIFO. When an IO has enough vtime
> > credit, it just passes through. When it doesn't, it always waits
> > behind any other IOs which are already waiting.
>
> OK. Is there any fundamental reason why requests from individual
> processes could not be interleaved? Or does it just not give the same
> benefits in an IO request context as it does for network packets?

I don't think there's any fundamental reason we can't. Currently, it
just isn't doing anything it doesn't have to do while preserving the
existing ordering. One difference from networking could be that
there's more sharing - buffered writes are attributed to the whole
domain (either system or cgroup) rather than individual tasks, so the
ownership of IOs gets a bit mushy beyond resource domain level.

Thanks.

--
tejun

2019-08-20 10:49:23

by Paolo Valente

[permalink] [raw]
Subject: Re: [PATCHSET block/for-next] IO cost model based work-conserving proportional controller



> On 14 Jun 2019, at 19:56, Tejun Heo <[email protected]> wrote:
>
> On Thu, Jun 13, 2019 at 06:56:10PM -0700, Tejun Heo wrote:
> ...
>> The patchset is also available in the following git branch.
>>
>> git://git.kernel.org/pub/scm/linux/kernel/git/tj/cgroup.git review-iow
>
> Updated patchset available in the following branch. Just build fixes
> and cosmetic changes for now.
>
> git://git.kernel.org/pub/scm/linux/kernel/git/tj/cgroup.git review-iow-v2
>

Hi Tejun,
I'm running the kernel from your tree above, on Ubuntu 18.04.

After unmounting the v1 blkio controller that gets mounted at startup,
I mounted the cgroup v2 root as follows:

$ mount -t cgroup2 none /cgroup

Then I have:
$ ls /cgroup
cgroup.controllers cgroup.max.descendants cgroup.stat cgroup.threads io.weight.cost_model system.slice
cgroup.max.depth cgroup.procs cgroup.subtree_control init.scope io.weight.qos user.slice

But the following command gives no output:
$ cat /cgroup/io.weight.qos

And, above all,
$ echo 1 > /cgroup/io.weight.qos
bash: echo: write error: Invalid argument

No complain in the kernel log.

What am I doing wrong? How can I make the controller work?

Thanks,
Paolo

> Thanks.
>
> --
> tejun

2019-08-20 15:05:58

by Paolo Valente

[permalink] [raw]
Subject: Re: [PATCHSET block/for-next] IO cost model based work-conserving proportional controller



> On 20 Aug 2019, at 12:48, Paolo Valente <[email protected]> wrote:
>
>
>
>> On 14 Jun 2019, at 19:56, Tejun Heo <[email protected]> wrote:
>>
>> On Thu, Jun 13, 2019 at 06:56:10PM -0700, Tejun Heo wrote:
>> ...
>>> The patchset is also available in the following git branch.
>>>
>>> git://git.kernel.org/pub/scm/linux/kernel/git/tj/cgroup.git review-iow
>>
>> Updated patchset available in the following branch. Just build fixes
>> and cosmetic changes for now.
>>
>> git://git.kernel.org/pub/scm/linux/kernel/git/tj/cgroup.git review-iow-v2
>>
>
> Hi Tejun,
> I'm running the kernel in your tree above, in an Ubuntu 18.04.
>
> After unmounting the v1 blkio controller that gets mounted at startup
> I have created v2 root as follows
>
> $ mount -t cgroup2 none /cgroup
>
> Then I have:
> $ ls /cgroup
> cgroup.controllers cgroup.max.descendants cgroup.stat cgroup.threads io.weight.cost_model system.slice
> cgroup.max.depth cgroup.procs cgroup.subtree_control init.scope io.weight.qos user.slice
>
> But the following command gives no output:
> $ cat /cgroup/io.weight.qos
>
> And, above all,
> $ echo 1 > /cgroup/io.weight.qos
> bash: echo: write error: Invalid argument
>
> No complain in the kernel log.
>
> What am I doing wrong? How can I make the controller work?
>

I figured it out, sorry for my usual silly questions (for some reason,
I thought the controller could be enabled globally by just writing a 1).

The problem now is that the controller doesn't seem to work. I've
emulated 16 clients doing I/O on a SATA SSD. One client, the target,
does random reads, while the remaining 15 clients, the interferers, do
sequential reads.

Each client is encapsulated in a separate group, but whatever weight
is assigned to the target group, the latter gets the same, extremely
low bandwidth. I have tried with even the maximum weight ratio, i.e.,
1000 for the target and only 1 for each interferer. Here are the
results, compared with BFQ (bandwidth in MB/s):

                     io.weight   BFQ
target's bandwidth        0.2    3.7

I ran this test with the script S/bandwidth-latency/bandwidth-latency.sh
of the S benchmark suite [1], invoked as follows:
sudo ./bandwidth-latency.sh -t randread -s none -b weight -n 15 -w 1000 -W 1

The above command simply creates groups, assigns weights as follows

echo 1 > /cgroup/InterfererGroup0/io.weight
echo 1 > /cgroup/InterfererGroup1/io.weight
...
echo 1 > /cgroup/InterfererGroup14/io.weight
echo 1000 > /cgroup/interfered/io.weight

and makes one fio instance generate I/O for each group. The bandwidth
reported above is that reported by the fio instance emulating the
target client.

Am I missing something?

Thanks,
Paolo

[1] https://github.com/Algodev-github/S


> Thanks,
> Paolo
>
>> Thanks.
>>
>> --
>> tejun
>

2019-08-20 15:20:42

by Tejun Heo

[permalink] [raw]
Subject: Re: [PATCHSET block/for-next] IO cost model based work-conserving proportional controller

Hello, Paolo.

On Tue, Aug 20, 2019 at 05:04:25PM +0200, Paolo Valente wrote:
> and makes one fio instance generate I/O for each group. The bandwidth
> reported above is that reported by the fio instance emulating the
> target client.
>
> Am I missing something?

If you didn't configure QoS targets, the controller is using device
qdepth saturation as the sole guidance in determining whether the
device needs throttling. Please try configuring the target latencies.
The bandwidth you see for single stream of rand ios should have direct
correlation with how the latency targets are configured. The head
letter for the patchset has some examples.

Thanks.

--
tejun

2019-08-22 11:49:02

by Paolo Valente

[permalink] [raw]
Subject: Re: [PATCHSET block/for-next] IO cost model based work-conserving proportional controller



> On 20 Aug 2019, at 17:19, Tejun Heo <[email protected]> wrote:
>
> Hello, Paolo.
>
> On Tue, Aug 20, 2019 at 05:04:25PM +0200, Paolo Valente wrote:
>> and makes one fio instance generate I/O for each group. The bandwidth
>> reported above is that reported by the fio instance emulating the
>> target client.
>>
>> Am I missing something?
>
> If you didn't configure QoS targets, the controller is using device
> qdepth saturation as the sole guidance in determining whether the
> device needs throttling. Please try configuring the target latencies.
> The bandwidth you see for single stream of rand ios should have direct
> correlation with how the latency targets are configured. The head
> letter for the patchset has some examples.
>

Ok, I tried with the parameters reported for a SATA SSD:

rpct=95.00 rlat=10000 wpct=95.00 wlat=20000 min=50.00 max=400.00

and with a simpler configuration [1]: one target doing random reads
and only four interferers doing sequential reads, with all the
processes (groups) having the same weight.

But there seemed to be little or no control over I/O, because the target
got only 1.84 MB/s, against 1.15 MB/s without any control.

So I tried with rlat=1000 and rlat=100.

Control did improve, with the same results for both values of rlat. The
problem is that these results still seem rather bad, both in terms of
throughput guaranteed to the target and in terms of total throughput.
Here are results compared with BFQ (throughputs measured in MB/s):

io.weight BFQ
target's throughput 3.415 6.224
total throughput 159.14 321.375

Am I doing something else wrong?

Thanks,
Paolo

[1] sudo ./bandwidth-latency.sh -t randread -s none -b weight -n 4

> Thanks.
>
> --
> tejun

2019-08-31 06:55:12

by Tejun Heo

[permalink] [raw]
Subject: Re: [PATCHSET block/for-next] IO cost model based work-conserving proportional controller

Hello, Paolo.

On Thu, Aug 22, 2019 at 10:58:22AM +0200, Paolo Valente wrote:
> Ok, I tried with the parameters reported for a SATA SSD:
>
> rpct=95.00 rlat=10000 wpct=95.00 wlat=20000 min=50.00 max=400.00

Sorry, I should have explained it with a lot more details.

There are two things - the cost model and qos params. The default SSD
cost model parameters are derived by averaging a number of mainstream
SSD parameters. As a ballpark, this can be good enough because while
the overall performance varied quite a bit from one SSD to another,
the relative cost of different types of IOs wasn't drastically
different.

However, this means that the performance baseline can easily be way
off from 100% depending on the specific device in use. In the above,
you're specifying min/max which limits how far the controller is
allowed to adjust the overall cost estimation. 50% and 400% are
numbers which may make sense if the cost model parameter is expected
to fall somewhere around 100% - ie. if the parameters are for that
specific device.

In your script, you're using default model params but limiting vrate
range. It's likely that your device is significantly slower than what
the default parameters are expecting. However, because min vrate is
limited to 50%, it doesn't throttle below 50% of the estimated cost,
so if the device is significantly slower than that, nothing gets
controlled.
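
The effect of the vrate clamp described above can be sketched
numerically. This is an illustrative model under assumptions (the
helper name and the 100/vrate scaling are mine, not the kernel's exact
arithmetic):

```python
# Illustrative only: vrate scales how fast device virtual time runs, so
# a lower vrate makes each IO effectively more expensive relative to the
# wall-clock budget. With min=50%, the controller can never treat the
# device as more than 2x slower than the cost model expects, so a much
# slower device ends up effectively unthrottled.
def apply_vrate(model_cost, vrate_pct, vmin_pct=50.0, vmax_pct=400.0):
    vrate = min(max(vrate_pct, vmin_pct), vmax_pct)  # clamp to [min, max]
    return model_cost * 100.0 / vrate                # effective cost
```

A device needing vrate=25% still only gets the clamped 50%, so its IOs
are costed at 2x instead of the 4x that would actually throttle it.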

> and with a simpler configuration [1]: one target doing random reads

And without QoS latency targets, the controller is purely going by
queue depth depletion which works fine for many usual workloads such
as larger reads and writes but isn't likely to serve low-concurrency
latency-sensitive IOs well.

> and only four interferers doing sequential reads, with all the
> processes (groups) having the same weight.
>
> But there seemed to be little or no control on I/O, because the target
> got only 1.84 MB/s, against 1.15 MB/s without any control.
>
> So I tried with rlat=1000 and rlat=100.

And this won't do anything, as all rlat/wlat does is regulate how the
overall vrate should be adjusted, and it's being clamped to a minimum of 50%.

> Control did improve, with same results for both values of rlat. The
> problem is that these results still seem rather bad, both in terms of
> throughput guaranteed to the target and in terms of total throughput.
> Here are results compared with BFQ (throughputs measured in MB/s):
>
> io.weight BFQ
> target's throughput 3.415 6.224
> total throughput 159.14 321.375

So, what should have been configured is something like

$ echo '8:0 enable=1 rpct=95 rlat=10000 wpct=95 wlat=20000' > /sys/fs/cgroup/io.cost.qos

which just says "target 10ms p(95) read latency and 20ms p(95) write
latency" without putting any restrictions on vrate range.

With that, I got the following on Micron_1100_MTFDDAV256TBN which is a
pretty old 256GB SATA drive.

Aggregated throughput:
min max avg std_dev conf99%
266.73 275.71 271.38 4.05144 45.7635
Interfered total throughput:
min max avg std_dev
9.608 13.008 10.941 0.664938

During the run, iocost-monitor.py looked like the following.

sda RUN per=40ms cur_per=2074.351:v1008.844 busy= +0 vrate= 59.85% params=ssd_dfl(CQ)
active weight hweight% inflt% del_ms usages%
InterfererGroup0 * 100/ 100 22.94/ 20.00 0.00 0*000 023:023:023
InterfererGroup1 * 100/ 100 22.94/ 20.00 0.00 0*000 023:023:023
InterfererGroup2 * 100/ 100 22.94/ 20.00 0.00 0*000 025:023:021
InterfererGroup3 * 100/ 100 22.94/ 20.00 0.00 0*000 023:023:023
interfered * 36/ 100 8.26/ 20.00 0.42 0*000 003:004:004

Note that interfered is reported to only use 3-4% of the disk capacity
while configured to consume 20%. This is because, with a
single-concurrency 4k randread job, its ability to consume IO capacity is
limited by the completion latency.
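
As a back-of-envelope check of this latency-bound behavior: a
queue-depth-1 stream completes one IO per round trip, so its bandwidth
is bounded by io_size / completion_latency regardless of its configured
weight. The per-IO latency below is an assumed figure, chosen only to
land near the measured numbers:

```python
# A single-concurrency (queue depth 1) stream can never exceed
# io_size / completion_latency, whatever its weight says.
def qd1_bandwidth_mb_s(io_size_bytes, latency_s):
    return io_size_bytes / latency_s / 1e6
```

At an assumed ~375us per 4k completion this works out to roughly
10.9 MB/s, consistent with the interfered throughput in the table above.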

10ms is a pretty generous (ie. more work-conserving) target for SSDs.
Let's say we're willing to tighten it to trade off total work for
tighter latency.

$ echo '8:0 enable=1 rpct=95 rlat=2500 wpct=95 wlat=5000' > /sys/fs/cgroup/io.cost.qos

Aggregated throughput:
min max avg std_dev conf99%
147.06 172.18 154.608 11.783 133.096
Interfered total throughput:
min max avg std_dev
17.992 19.32 18.698 0.313105

and the monitoring output

sda RUN per=10ms cur_per=2927.152:v1556.138 busy= -2 vrate= 34.74% params=ssd_dfl(CQ)
active weight hweight% inflt% del_ms usages%
InterfererGroup0 * 100/ 100 20.00/ 20.00 386.11 0*000 070:020:020
InterfererGroup1 * 100/ 100 20.00/ 20.00 386.11 0*000 070:020:020
InterfererGroup2 * 100/ 100 20.00/ 20.00 386.11 0*000 070:020:020
InterfererGroup3 * 100/ 100 20.00/ 20.00 0.00 0*000 020:020:020
interfered * 100/ 100 20.00/ 20.00 1.21 0*000 010:014:017

The following happened.

* The vrate is now hovering way lower. The device is now doing less
total work to achieve tighter completion latencies.

* The overall throughput dropped but interfered's utilization is now
significantly higher along with its bandwidth from lower completion
latencies.

For reference:

[Disabled]

Aggregated throughput:
min max avg std_dev conf99%
493.98 511.37 502.808 9.52773 107.621
Interfered total throughput:
min max avg std_dev
0.056 0.304 0.107 0.0691052

[Enabled, no QoS config]

Aggregated throughput:
min max avg std_dev conf99%
429.07 449.59 437.597 8.64952 97.7015
Interfered total throughput:
min max avg std_dev
0.456 3.12 1.08 0.774318

Thanks.

--
tejun

2019-08-31 07:14:45

by Paolo Valente

Subject: Re: [PATCHSET block/for-next] IO cost model based work-conserving proportional controller

Hi Tejun,
thank you very much for this extra information, I'll try the
configuration you suggest. In this respect, is this still the branch
to use

https://kernel.googlesource.com/pub/scm/linux/kernel/git/tj/cgroup/+/refs/heads/review-iocost-v2

also after the issue spotted two days ago [1]?

Thanks,
Paolo

[1] https://lkml.org/lkml/2019/8/29/910


2019-08-31 11:22:23

by Tejun Heo

Subject: Re: [PATCHSET block/for-next] IO cost model based work-conserving proportional controller

Hello,

On Sat, Aug 31, 2019 at 09:10:26AM +0200, Paolo Valente wrote:
> Hi Tejun,
> thank you very much for this extra information, I'll try the
> configuration you suggest. In this respect, is this still the branch
> to use
>
> https://kernel.googlesource.com/pub/scm/linux/kernel/git/tj/cgroup/+/refs/heads/review-iocost-v2
>
> also after the issue spotted two days ago [1]?

block/for-next is the branch which has all the updates.

git://git.kernel.org/pub/scm/linux/kernel/git/axboe/linux-block.git for-next

Thanks.

--
tejun

2019-09-02 15:47:04

by Paolo Valente

Subject: Re: [PATCHSET block/for-next] IO cost model based work-conserving proportional controller



> Il giorno 31 ago 2019, alle ore 08:53, Tejun Heo <[email protected]> ha scritto:
>
> Hello, Paolo.
>

Hi Tejun,

> On Thu, Aug 22, 2019 at 10:58:22AM +0200, Paolo Valente wrote:
>> Ok, I tried with the parameters reported for a SATA SSD:
>>
>> rpct=95.00 rlat=10000 wpct=95.00 wlat=20000 min=50.00 max=400.00
>
> Sorry, I should have explained it with a lot more details.
>
> There are two things - the cost model and qos params. The default SSD
> cost model parameters are derived by averaging a number of mainstream
> SSD parameters. As a ballpark, this can be good enough because while
> the overall performance varied quite a bit from one ssd to another,
> the relative cost of different types of IOs wasn't drastically
> different.
>
> However, this means that the performance baseline can easily be way
> off from 100% depending on the specific device in use. In the above,
> you're specifying min/max which limits how far the controller is
> allowed to adjust the overall cost estimation. 50% and 400% are
> numbers which may make sense if the cost model parameter is expected
> to fall somewhere around 100% - ie. if the parameters are for that
> specific device.
>
> In your script, you're using default model params but limiting vrate
> range. It's likely that your device is significantly slower than what
> the default parameters are expecting. However, because min vrate is
> limited to 50%, it doesn't throttle below 50% of the estimated cost,
> so if the device is significantly slower than that, nothing gets
> controlled.
>

Thanks for these extra explanations. It is a little bit difficult for
me to understand what the min/max tweaks do exactly, but you did give
me the general idea.

>> and with a simpler configuration [1]: one target doing random reads
>
> And without QoS latency targets, the controller is purely going by
> queue depth depletion which works fine for many usual workloads such
> as larger reads and writes but isn't likely to serve low-concurrency
> latency-sensitive IOs well.
>
>> and only four interferers doing sequential reads, with all the
>> processes (groups) having the same weight.
>>
>> But there seemed to be little or no control on I/O, because the target
>> got only 1.84 MB/s, against 1.15 MB/s without any control.
>>
>> So I tried with rlat=1000 and rlat=100.
>
> And this won't do anything as all rlat/wlat does is regulating how the
> overall vrate should be adjusted and it's being min'd at 50%.
>
>> Control did improve, with same results for both values of rlat. The
>> problem is that these results still seem rather bad, both in terms of
>> throughput guaranteed to the target and in terms of total throughput.
>> Here are results compared with BFQ (throughputs measured in MB/s):
>>
>> io.weight BFQ
>> target's throughput 3.415 6.224
>> total throughput 159.14 321.375
>
> So, what should have been configured is something like
>
> $ echo '8:0 enable=1 rpct=95 rlat=10000 wpct=95 wlat=20000' > /sys/fs/cgroup/io.cost.qos
>

Unfortunately, io.cost does not seem to control I/O with this
configuration, as it gives the interfered the same bw as no I/O
control (i.e., none as I/O scheduler and no I/O controller or policy
active):

none io.weight BFQ
target's throughput 0.8 0.7 4
total throughput 506 506 344

The test case is still the rand reader against 7 seq readers.

> which just says "target 10ms p(95) read latency and 20ms p(95) write
> latency" without putting any restrictions on vrate range.
>
> With that, I got the following on Micron_1100_MTFDDAV256TBN which is a
> pretty old 256GB SATA drive.
>
> Aggregated throughput:
> min max avg std_dev conf99%
> 266.73 275.71 271.38 4.05144 45.7635
> Interfered total throughput:
> min max avg std_dev
> 9.608 13.008 10.941 0.664938
>
> During the run, iocost-monitor.py looked like the following.
>
> sda RUN per=40ms cur_per=2074.351:v1008.844 busy= +0 vrate= 59.85% params=ssd_dfl(CQ)
> active weight hweight% inflt% del_ms usages%
> InterfererGroup0 * 100/ 100 22.94/ 20.00 0.00 0*000 023:023:023
> InterfererGroup1 * 100/ 100 22.94/ 20.00 0.00 0*000 023:023:023
> InterfererGroup2 * 100/ 100 22.94/ 20.00 0.00 0*000 025:023:021
> InterfererGroup3 * 100/ 100 22.94/ 20.00 0.00 0*000 023:023:023
> interfered * 36/ 100 8.26/ 20.00 0.42 0*000 003:004:004
>
> Note that interfered is reported to only use 3-4% of the disk capacity
> while configured to consume 20%. This is because with single
> concurrency 4k randread job, its ability to consume IO capacity is
> limited by the completion latency.
>
> 10ms is pretty generous (ie. more work-conserving) target for SSDs.
> Let's say we're willing to tighten it to trade off total work for
> tighter latency.
>
> $ echo '8:0 enable=1 rpct=95 rlat=2500 wpct=95 wlat=5000' > /sys/fs/cgroup/io.cost.qos
>

Now io.weight does control I/O, but throughputs fluctuate a lot
between runs and during each run. After extending the duration of
each run to 20 seconds, an average run for io.weight and BFQ gives the
following throughputs (same throughputs as above for none):

none io.weight BFQ
target's throughput 0.8 2.3 3.6
total throughput 506 321 360

For completeness I tried also with rlat=1000. But throughputs dropped
dramatically:

none io.weight BFQ
target's throughput 0.8 0.2 3.6
total throughput 506 17 360

Are these results in line with your expectations? If they are, then
I'd like to extend benchmarks to more mixes of workloads. Or should I
try some other QoS configuration first?

Thanks,
Paolo


2019-09-02 15:58:05

by Tejun Heo

Subject: Re: [PATCHSET block/for-next] IO cost model based work-conserving proportional controller

On Mon, Sep 02, 2019 at 05:45:50PM +0200, Paolo Valente wrote:
> Thanks for these extra explanations. It is a little bit difficult for
> me to understand what the min/max tweaks do exactly, but you did give
> me the general idea.

It just limits how high and low the IO issue rate, measured in
cost, can go. ie. if max is at 200%, the controller won't issue more
than twice what the cost model says 100% is.

> Are these results in line with your expectations? If they are, then
> I'd like to extend benchmarks to more mixes of workloads. Or should I
> try some other QoS configuration first?

They aren't. Can you please include the content of io.cost.qos and
io.cost.model before each run? Note that partial writes to a subset of
parameters don't clear the other parameters.

Thanks.

--
tejun

2019-09-02 19:45:10

by Paolo Valente

Subject: Re: [PATCHSET block/for-next] IO cost model based work-conserving proportional controller



> Il giorno 2 set 2019, alle ore 17:56, Tejun Heo <[email protected]> ha scritto:
>
> On Mon, Sep 02, 2019 at 05:45:50PM +0200, Paolo Valente wrote:
>> Thanks for these extra explanations. It is a little bit difficult for
>> me to understand what the min/max tweaks do exactly, but you did give
>> me the general idea.
>
> It just limits how far high and low the IO issue rate, measured in
> cost, can go. ie. if max is at 200%, the controller won't issue more
> than twice of what the cost model says 100% is.
>
>> Are these results in line with your expectations? If they are, then
>> I'd like to extend benchmarks to more mixes of workloads. Or should I
>> try some other QoS configuration first?
>
> They aren't. Can you please include the content of io.cost.qos and
> io.cost.model before each run? Note that partial writes to subset of
> parameters don't clear other parameters.
>

Yep. I've added the printing of the two parameters in the script, and
I'm pasting the whole output, in case you could get also some other
useful piece of information from it.

$ sudo ./bandwidth-latency.sh -t randread -s none -b weight -n 7 -d 20
Switching to none for sda
echo "8:0 enable=1 rpct=95 rlat=2500 wpct=95 wlat=5000" > /cgroup/io.cost.qos
/cgroup/io.cost.qos 8:0 enable=1 ctrl=user rpct=95.00 rlat=2500 wpct=95.00 wlat=5000 min=1.00 max=10000.00
/cgroup/io.cost.model 8:0 ctrl=auto model=linear rbps=488636629 rseqiops=8932 rrandiops=8518 wbps=427891549 wseqiops=28755 wrandiops=21940
Not changing weight/limits for interferer group 0
Not changing weight/limits for interferer group 1
Not changing weight/limits for interferer group 2
Not changing weight/limits for interferer group 3
Not changing weight/limits for interferer group 4
Not changing weight/limits for interferer group 5
Not changing weight/limits for interferer group 6
Not changing weight/limits for interfered
Starting Interferer group 0
start_fio_jobs InterfererGroup0 0 default read MAX linear 1 1 0 0 4k /home/paolo/local-S/bandwidth-latency/../workfiles/largefile0
Starting Interferer group 1
start_fio_jobs InterfererGroup1 0 default read MAX linear 1 1 0 0 4k /home/paolo/local-S/bandwidth-latency/../workfiles/largefile1
Starting Interferer group 2
start_fio_jobs InterfererGroup2 0 default read MAX linear 1 1 0 0 4k /home/paolo/local-S/bandwidth-latency/../workfiles/largefile2
Starting Interferer group 3
start_fio_jobs InterfererGroup3 0 default read MAX linear 1 1 0 0 4k /home/paolo/local-S/bandwidth-latency/../workfiles/largefile3
Starting Interferer group 4
start_fio_jobs InterfererGroup4 0 default read MAX linear 1 1 0 0 4k /home/paolo/local-S/bandwidth-latency/../workfiles/largefile4
Starting Interferer group 5
start_fio_jobs InterfererGroup5 0 default read MAX linear 1 1 0 0 4k /home/paolo/local-S/bandwidth-latency/../workfiles/largefile5
Starting Interferer group 6
start_fio_jobs InterfererGroup6 0 default read MAX linear 1 1 0 0 4k /home/paolo/local-S/bandwidth-latency/../workfiles/largefile6
Linux 5.3.0-rc6+ (paolo-ThinkPad-W520) 02/09/2019 _x86_64_ (8 CPU)

02/09/2019 21:39:11
Device tps MB_read/s MB_wrtn/s MB_read MB_wrtn
sda 66.53 5.22 0.10 1385 27

start_fio_jobs interfered 20 default randread MAX poisson 1 1 0 0 4k /home/paolo/local-S/bandwidth-latency/../workfiles/largefile_interfered0
02/09/2019 21:39:14
Device tps MB_read/s MB_wrtn/s MB_read MB_wrtn
sda 154.67 20.63 0.05 61 0

02/09/2019 21:39:17
Device tps MB_read/s MB_wrtn/s MB_read MB_wrtn
sda 453.00 64.27 0.00 192 0

02/09/2019 21:39:20
Device tps MB_read/s MB_wrtn/s MB_read MB_wrtn
sda 675.33 95.99 0.00 287 0

02/09/2019 21:39:23
Device tps MB_read/s MB_wrtn/s MB_read MB_wrtn
sda 1907.67 348.61 0.00 1045 0

02/09/2019 21:39:26
Device tps MB_read/s MB_wrtn/s MB_read MB_wrtn
sda 2414.67 462.98 0.00 1388 0

02/09/2019 21:39:29
Device tps MB_read/s MB_wrtn/s MB_read MB_wrtn
sda 2429.67 438.71 0.00 1316 0

02/09/2019 21:39:32
Device tps MB_read/s MB_wrtn/s MB_read MB_wrtn
sda 2437.00 475.79 0.00 1427 0

02/09/2019 21:39:35
Device tps MB_read/s MB_wrtn/s MB_read MB_wrtn
sda 2162.33 346.97 0.00 1040 0

Results for one rand reader against 7 seq readers (I/O depth 1), weight-none with weights: (default, default)
Aggregated throughput:
min max avg std_dev conf99%
64.27 475.79 319.046 171.233 1011.97
Read throughput:
min max avg std_dev conf99%
64.27 475.79 319.046 171.233 1011.97
Write throughput:
min max avg std_dev conf99%
0 0 0 0 0
Interfered total throughput:
min max avg std_dev
1.032 4.455 2.266 0.742696
Interfered per-request total latency:
min max avg std_dev
0.11 12.005 1.7545 0.878281

Thanks,
Paolo



2019-09-05 21:26:40

by Tejun Heo

Subject: Re: [PATCHSET block/for-next] IO cost model based work-conserving proportional controller

Hello, Paolo.

So, I'm currently verifying iocost in the FB fleet. Around three
thousand machines running v5.2 (+ some backports) with btrfs on a
handful of different models of consumer grade SSDs. I haven't seen
complete loss of control as you're reporting. Given that you're
reporting the same thing on io.latency, which is deployed on multiple
orders of magnitude more machines at this point, it's likely that
there's something common affecting your test setup. Can you please
describe your test configuration and, if you aren't already, try
testing on btrfs?

Thanks.

--
tejun

2019-09-06 11:12:08

by Paolo Valente

Subject: Re: [PATCHSET block/for-next] IO cost model based work-conserving proportional controller



> Il giorno 5 set 2019, alle ore 18:55, Tejun Heo <[email protected]> ha scritto:
>
> Hello, Paolo.
>
> So, I'm currently verifying iocost in the FB fleet. Around three
> thousand machines running v5.2 (+ some backports) with btrfs on a
> handful of different models of consumer grade SSDs. I haven't seen
> complete loss of control as you're reporting. Given that you're
> reporting the same thing on io.latency, which is deployed on multiple
> orders of magnitude more machines at this point, it's likely that
> there's something common affecting your test setup.

Yep, I had that doubt too, so I extended my tests to one more PC and
two more drives: a fast SAMSUNG NVMe SSD 970 PRO and an HITACHI
HTS72755 HDD, using the QoS configurations suggested in your last
email. As for the filesystem, I'm interested in ext4, because it is
the most widely used file system, and, with some workloads, it makes
it hard to control I/O while keeping throughput high. I'll provide hw
and sw details in my reply to your next question. I'm willing to run
tests with btrfs too, at a later time.

Something is wrong with io.cost also with the other PC and the other
drives. In the next table, each pair of numbers contains the target's
throughput and the total throughput:

none io.cost bfq
SAMSUNG SSD 11.373 3295.517 6.468 3273.892 10.802 1862.288
HITACHI HDD 0.026 11.531 0.042 30.299 0.067 76.642

With the SAMSUNG SSD, io.cost gives the target less throughput than
none (and bfq is behaving badly too, but that is my problem). On the
HDD, io.cost gives the target a little more than half the
throughput guaranteed by bfq, and reaches less than half the total
throughput reached by bfq.

I do agree that three thousand is an overwhelming number of machines,
and I'll probably never have that many resources for my tests. Still,
it seems rather unlikely that two different PCs, and three different
drives, all suffer from a common anomaly that causes trouble only for
io.cost and io.latency.

I try never to overlook the possibility that I myself am the
problematic link in the chain. But I'm executing this test with the
public script I mentioned in my previous emails, and all steps seem
correct.

> Can you please
> describe your test configuration and if you aren't already try testing
> on btrfs?
>

PC 1: Thinkpad W520, Ubuntu 18.04 (no configuration change w.r.t.
defaults), PLEXTOR SATA PX-256M5S SSD, HITACHI HTS72755 HDD, ext4.

PC 2: Thinkpad X1 Extreme, Ubuntu 19.04 (no configuration change
w.r.t. defaults), SAMSUNG NVMe SSD 970 PRO, ext4.

If you need more details, just ask.

Thanks,
Paolo




2019-09-07 01:06:25

by Tejun Heo

Subject: Re: [PATCHSET block/for-next] IO cost model based work-conserving proportional controller

Hello, Paolo.

On Fri, Sep 06, 2019 at 11:07:17AM +0200, Paolo Valente wrote:
> email. As for the filesystem, I'm interested in ext4, because it is
> the most widely used file system, and, with some workloads, it makes

Ext4 can't do writeback control as it currently stands. It creates
hard ordering across data writes from different cgroups. No matter
what mechanism you use for IO control, it is broken. I'm sure it's
fixable but does need some work.

That said, read-only tests like you're doing should work fine on ext4
too but the last time I tested io control on ext4 is more than a year
ago so something might have changed in the meantime.

Just to rule out that this is what you're hitting, can you please run
your test on btrfs with the following patchset applied?

http://lkml.kernel.org/r/[email protected]

And as I wrote in the previous reply, I did run your benchmark on one
of the test machines and it did work fine.

Thanks.

--
tejun

2020-02-19 18:34:33

by Paolo Valente

Subject: Re: [PATCHSET block/for-next] IO cost model based work-conserving proportional controller

Hi Tejun
sorry for the long delay, but, before replying, I preferred to analyze
io.cost deeply.

> Il giorno 6 set 2019, alle ore 16:58, Tejun Heo <[email protected]> ha scritto:
>
> Hello, Paolo.
>
> On Fri, Sep 06, 2019 at 11:07:17AM +0200, Paolo Valente wrote:
>> email. As for the filesystem, I'm interested in ext4, because it is
>> the most widely used file system, and, with some workloads, it makes
>
> Ext4 can't do writeback control as it currently stands. It creates
> hard ordering across data writes from different cgroups. No matter
> what mechanism you use for IO control, it is broken. I'm sure it's
> fixable but does need some work.
>

Yep. However, with read+write mixes, bfq controls I/O while io.cost
fails.

> That said, read-only tests like you're doing should work fine on ext4
> too but the last time I tested io control on ext4 is more than a year
> ago so something might have changed in the meantime.
>
> Just to rule out this isn't what you're hitting. Can you please run
> your test on btrfs with the following patchset applied?
>
> http://lkml.kernel.org/r/[email protected]
>

I've run tests with btrfs too, things get better, but the same issues
show up with other workloads. This is one of the reasons why I
decided to analyze the problem more deeply (see below).

> And as I wrote in the previous reply, I did run your benchmark on one
> of the test machines and it did work fine.
>

To address this issue we repeated the same tests on a lot of different
drives and machines. Here is a list:
- PLEXTOR SATA PX-256M5S SSD, mounted on a Thinkpad W520
- HITACHI HTS72755 HDD, mounted on a Thinkpad W520
- WDC WD10JPVX-22JC3T0 HDD, mounted on an Acer V3-572G-75CA
- TOSHIBA MQ04ABF1 HDD, mounted on a Dell G5 5590
- Samsung SSD 860 (500GB), mounted on ThinkPad X1 Extreme

Same outcome.

So, as I wrote above, I decided to analyze io.cost in depth, and to
try to understand why it fails with some workloads. I've been writing
my findings in an article.

I'm pasting the LaTeX source of the (relatively long) section of this
article devoted to explaining the failures of io.cost with some
workloads. If this text is not enough, I'm willing to share the full
article privately.


In this section we provide an explanation for each of the two failures
of \iocost shown in the previous figures for some workloads: the
failure to guarantee a fair bandwidth distribution and the failure to
reach a high throughput. Then, in view of these explanations, we point
out why \bfq does not suffer from these problems. Let us start by
stating the root cause of both failures.

Drives have very complex transfer functions, because of multiple
channels, in-channel pipelines, striping, locality-dependent
parallelism, \emph{readahead}, I/O-request reordering, garbage
collection, wear, and so on. In particular, these features make the
parameters of transfer functions non-linear, and variable with time
and workloads. They also make these parameters hard to know or to
compute precisely. Yet virtually all parameters of a transfer function
play a non-negligible role in the actual behavior of a drive.

This important issue affects \iocost, because \iocost controls I/O by
using exactly two time-varying, hard-to-know-precisely parameters
(of the transfer function of a drive). Incidentally, \iolatency
controls I/O with a throttling logic somewhat similar to that of
\iocost, but based on much poorer knowledge of the transfer function
of the drive.

The parameters used by \iocost are I/O costs and device-rate
saturation. I/O costs affect the effectiveness of \iocost in both
distributing bandwidth fairly and reaching a high throughput. We
analyze the way I/O costs are involved in the
fair-bandwidth-distribution failure first. Then we consider device
saturation, which is involved only in the failure in reaching a high
throughput.

\iocost currently uses a linear-cost model, where each I/O is
classified as sequential or random, and as a read or a write. Each
class of I/O is assigned a base cost and a cost coefficient. The cost
of an I/O request is then computed as the sum of the base cost for its
class of I/O, and of a variable cost, equal to the cost coefficient
for its class of I/O multiplied by the size of the I/O. Using these
estimated I/O costs, \iocost estimates the service received by each
group, and tries to let each active group receive an amount of
estimated service proportional to its weight. \iocost attains this
goal by throttling groups that would receive more than their target
service if not suspended for a while.
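A minimal sketch of such a linear model, and of the weight-proportional target it feeds, could look as follows (all base costs and coefficients below are made up purely for illustration; the real parameters come from io.cost.model):

```python
# Illustrative parameters only; the actual controller derives its costs from
# the rbps/rseqiops/rrandiops/... values exposed in io.cost.model.
BASE_COST = {("read", "seq"): 10, ("read", "rand"): 60,
             ("write", "seq"): 20, ("write", "rand"): 90}
COST_PER_KB = {("read", "seq"): 1, ("read", "rand"): 1,
               ("write", "seq"): 2, ("write", "rand"): 2}

def io_cost(direction, pattern, size_bytes):
    """Linear model: per-class base cost plus a size-proportional term."""
    key = (direction, pattern)
    return BASE_COST[key] + COST_PER_KB[key] * size_bytes / 1024.0

def target_share(weights, group):
    """Each active group is entitled to estimated service in proportion to
    its weight; a group running ahead of this share gets throttled."""
    return weights[group] / sum(weights.values())
```

Note that the unfairness described next arises precisely when the fixed BASE_COST/COST_PER_KB tables above diverge from the costs the drive actually incurs.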

Both the base cost and the cost coefficient for an I/O request depend
only on the class of I/O of the request, and are independent of any
other parameter. In contrast, because of the opposing effects of
interference by other groups on one side, and of parallelism,
pipelining, and any other sort of internal drive optimization on the
other, both the actual base cost of the same I/O request, and the very
law by which the total cost of the request grows with its size, may
vary greatly with the workload mix and over time. They may vary even
as a function of how \iocost itself modifies the I/O pattern by
throttling groups. Finally, I/O workloads---and therefore I/O
costs---may vary with the filesystem too, given the same sequence of
userspace I/O operations.

The resulting deviations between estimated and actual I/O costs may
lead to deviations between the estimated and the actual amounts of
service received by groups, and therefore to bandwidth distributions
that, for the same set of group weights, may deviate highly from each
other, and from fair distributions. Before showing this problem at
work in one of the benchmarks, we need to introduce one more bit of
information on \iocost.

\iocost does take into account issues stemming from an inaccurate
model, but only in terms of their consequences on (total)
throughput. In particular, to prevent throughput from dropping because
too much drive time is being granted to a low-throughput group,
\iocost dynamically adjusts group weights internally, so as to make
each group donate time to other groups, if this donation increases
total throughput without penalizing the donor.
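This internal donation can be pictured with a toy update rule. The
proportional shrinking and the floor below are placeholders of ours,
not \iocost's actual adjustment logic:

```python
# Toy illustration of internal weight donation: a group using only a
# fraction of its share temporarily "donates" the unused part of its
# weight, so that other groups can absorb the spare drive time.  The
# floor and the proportional rule are invented for illustration.
def internal_weight(cfg_weight, usage_fraction):
    floor = max(1, cfg_weight // 10)         # never donate everything
    used = int(cfg_weight * usage_fraction)  # part of the weight in use
    return max(floor, used)
```

The donation is reversible: as the group's usage grows back toward its
full share, the internal weight returns to the configured one.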

Yet, the above deviation between estimated and actual amounts of
service may make it much more difficult, or just impossible, for this
feedback-loop to converge to weight adjustments that are stable and
reach a high throughput.

This last problem may be exacerbated by two more issues. First,
\iocost evaluates the service surplus or lag of a group by comparing
the---possibly wrongly---estimated service received by the group with
a threshold computed heuristically. In particular, this threshold is
not computed as a function of the dynamically varying parameters of
the transfer function of the drive. Second, while weights are
correctly changed in a direction that tends to bring target quantities
back into the heuristically accepted ranges, the changes are applied
with a timing and an intensity that do not take into account how, and
with what delay, these changes modify I/O costs and the target
quantities themselves.

Depending on the actual transfer function of a drive, the combination
of these imprecise-estimation and heuristic-update issues may make it
hard for \iocost to control per-group I/O bandwidths in a stable and
effective way. A real-life example may make it easier to understand
the problem. After this example, we will finally apply the above facts
to one of the scenarios in which \iocost fails to distribute
bandwidths fairly.

Consider a building where little or no care has been put into
implementing a stable and easy-to-control water-heating
system. Enforcing a fair I/O bandwidth distribution, while at the same
time using most of the speed of the drive, is as difficult as getting
the shower temperature right in such a building. Knob rotations
stimulate, non-linearly, a non-linear system that reacts with
time-varying delays. Until we become familiar with the system, we know
its parameters so little that we have almost no control over the
temperature of the water. In addition, even after we manage to get the
temperature we desire, changes in the rest of the system (e.g., one
more shower opened) may change the parameters so much as to make us
burn ourselves with no action on our side!

The authors of \iocost and \iolatency did manage to get the right
temperature for their \emph{showers}, because, most certainly, they
patiently and skillfully tuned parameters, and modified algorithms
where and as needed. But the same tweaks may not work on different
systems. If a given I/O-cost model and feedback-loop logic do not
comply with some parameters of the transfer function of a drive, then
it may be hard or impossible to find a QoS and I/O-cost configuration
that works.

We can now dive into the details of a failure case. We instrumented
\iocost so as to trace the value of some of its internal
parameters~\cite{io.cost-tracing} over time. Group weights are one of
the traced parameters. Figure~\ref{fig:group-weights} shows the values
of the weights of the target and of one of the interferers (all
interferers exhibit the same weight fluctuation) during the benchmark
whose results are shown in the third subplot in
Figure~\ref{fig:SSD-rand-interferers}. In this subplot, a target doing
sequential reads eats almost all the bandwidth, at the expense of
interferers doing random reads. As for weights, \iocost detects,
cyclically, that the interferers get a service surplus, and therefore
it cyclically lowers their weights, progressively but very
quickly. This then makes the lag of the interferers' estimated service
exceed the threshold, which triggers a weight reset. At this point,
the loop restarts.

The negligible total bandwidth obtained by the interferers clearly
shows that \iocost is throttling the interferers too much, because of
their I/O cost, and is also lowering their weights too much. The
periodic weight reset does not offset the problem.

\begin{figure}
\includegraphics[width=\textwidth]{plots/weights-seq_rd_vs_rand_rd.png}
\caption{Per-group weights during the benchmark.}
\label{fig:group-weights}
\end{figure}

The other failure of \iocost concerns reaching a high throughput. To
describe this failure we need to add one last bit of information on
\iocost internals. \iocost dispatches I/O to the drive at an overall
rate proportional to a quantity named \emph{virtual rate}
(\vrate). \iocost dynamically adjusts the \vrate, so as to try to keep
the drive always close to saturation, but not overloaded. To this end,
\iocost computes, heuristically, the \emph{busy level} of the drive,
as a function of, first, the number of groups in service surplus and
the number of groups lagging behind their target service, and, second,
of I/O-request latencies. So, all the inaccuracy issues pointed out so
far may affect the computation of the busy level and thus of the
\vrate, plus the following extra issue.

Some I/O flows may suffer from a high per-request latency even when
the device is actually far from saturation, or enjoy a low per-request
latency even when the device is very close to saturation. This may
happen because of the nature of the flows, because of interference, or
for both reasons. So, depending on the I/O pattern, the same
per-request latency may have a different meaning in terms of actual
device saturation. In this respect, \iocost itself modifies the I/O
pattern by changing the \vrate. Yet, to evaluate saturation, \iocost
compares request latencies with a heuristic, fixed threshold, and
compares the number of requests above that threshold with a further
heuristic, fixed threshold. Unfortunately, these fixed thresholds do
not and cannot take the above facts into account (the thresholds can
be modified by the user, but this does not change the essence of the
problem).
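The busy-level feedback and the two fixed latency thresholds can be
condensed into a toy model. The threshold values, step sizes, and
function names below are invented placeholders consistent with the
description above, not \iocost's actual parameters or code:

```python
# Toy model of the busy-level/vrate feedback described above; all
# thresholds and step sizes are invented for illustration, not
# iocost's actual QoS defaults.
LAT_THRESH_US   = 5000   # fixed per-request latency threshold
MISS_PCT_THRESH = 10     # fixed threshold on the share of late requests

def too_many_late(latencies_us):
    # First fixed threshold: is each request late?  Second fixed
    # threshold: are too many requests late overall?
    late = sum(1 for lat in latencies_us if lat > LAT_THRESH_US)
    return 100 * late / len(latencies_us) > MISS_PCT_THRESH

def next_vrate(vrate, nr_surplus, nr_lagging, latencies_us,
               vmin=0.1, vmax=2.0):
    # Surpluses and late completions raise the busy level; lagging
    # groups lower it.  The vrate is then nudged in the opposite
    # direction, clamped to a fixed range.
    busy = nr_surplus - nr_lagging
    if too_many_late(latencies_us):
        busy += 1
    if busy > 0:                      # saturation suspected: slow down
        return max(vmin, vrate * 0.95)
    if busy < 0:                      # spare capacity: speed up
        return min(vmax, vrate * 1.05)
    return vrate
```

Every inaccuracy discussed so far feeds into `busy`, so a wrong cost
estimate or an ambiguous latency sample directly becomes a wrong
\vrate step.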

The combination of all these issues may lead \iocost to lower or raise
the \vrate wrongly, and to establish a \vrate fluctuation that neither
ends nor converges, at least on average, to a good I/O
throughput. This is exactly what happens during the throughput failure
reported in the third subplot in Figure~\ref{fig:SSD-seq-interferers}
(both target and interferers doing sequential
reads). Figure~\ref{fig:vrate} shows the curves for the busy level,
the number of groups detected as lagging and, finally, the \vrate (all
traced with our tracing patch~\cite{io.cost-tracing}). The \vrate
starts with a relatively high, although fluctuating, value. Yet,
around time 10, \iocost detects a sudden rise of the busy level, which
triggers a sudden drop of the \vrate. The \vrate remains stably low
until time $\sim$23, when \iocost detects a low busy level and raises
the \vrate. But this rise causes a new rise of the busy level, which
this time goes on for a while, causing \iocost to lower the \vrate
much more. Finally, from about time 23, the number of lagging groups
starts to grow, which convinces \iocost to begin increasing the \vrate
(slowly) again. All these detections of device saturation are
evidently false positives, and result only in \iocost underutilizing
the speed of the drive. The weight-adjusting mechanism fails as well
at boosting throughput. In particular, the weights of all groups
remain constantly equal to 100 (not shown).

\begin{figure}
\includegraphics[width=\textwidth]{plots/vrate-seq_rd_vs_seq_rd.png}
\caption{Busy level, number of groups lagging and \vrate during the
benchmark.}
\label{fig:vrate}
\end{figure}

As a last cross-test, we traced \iocost also for the throughput
failure reported in the last subplot in
Figure~\ref{fig:SSD-seq-interferers} (target doing sequential reads
and interferers doing sequential writes). Results are reported in
Figure~\ref{fig:vrate-writes}, and show the same \vrate estimation
issues as in the failure with only reads.

\begin{figure}
\includegraphics[width=\textwidth]{plots/vrate-seq_rd_vs_seq_wr.png}
\caption{Busy level, number of groups lagging and \vrate during the
benchmark with sequential writers as interferers.}
\label{fig:vrate-writes}
\end{figure}

The remaining question is then: why does \bfq succeed? \bfq succeeds
because it \textbf{does not} use any transfer-function parameter to
provide its main service guarantee. \bfq's main actuators are simply
the fixed weights set by the user; given the total number of sectors
transferred in a given time interval, \bfq just provides each process
or group with a fraction of those sectors proportional to the weight
of that process or group. There are feedback-loop mechanisms in \bfq
too, but they intervene only to boost throughput. This is evidently an
easier task than the combined task of boosting throughput and, at the
same time, guaranteeing bandwidth and latency. Moreover, even if
throughput boosting fails for some workload, service guarantees are
nevertheless preserved.
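The weight-proportional guarantee just described can be stated in a
few lines. This is a simplified statement of the property, not of
\bfq's actual scheduling code:

```python
# Simplified statement of the guarantee: whatever total number of
# sectors the drive happens to transfer in an interval, each active
# process/group is entitled to a weight-proportional fraction of them.
# No drive parameter appears anywhere in the rule.
def fair_shares(total_sectors, weights):
    total_weight = sum(weights.values())
    return {name: total_sectors * w / total_weight
            for name, w in weights.items()}
```

Because the rule only divides whatever total service the drive
delivers, it holds regardless of how fast or erratic the drive is.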

Thanks,
Paolo

