2015-08-03 22:41:36

by Tejun Heo

Subject: [PATCHSET sched,cgroup] sched: Implement interface for cgroup unified hierarchy

Hello,

This patchset implements the cpu controller's interface for the unified
hierarchy. While the cpu controller didn't have the structural issues
that memcg and blkcg had, there still are minor issues, such as cpuacct
being separate and the use of different time units, and its interface
can be made consistent with other controllers so that cgroup as a whole
presents uniform ways to achieve similar things with different
resources.

This patchset contains the following three patches.

0001-cgroup-define-controller-file-conventions.patch
0002-sched-Misc-preps-for-cgroup-unified-hierarchy-interf.patch
0003-sched-Implement-interface-for-cgroup-unified-hierarc.patch

The "Controller file conventions" section in
Documentation/cgroups/unified-hierarchy.txt which is added by the
first patch codifies the syntax and semantics for controller knobs and
the next two patches implement the new interface for the cpu
controller.

The first patch is needed by blkcg too, so once the changes get acked
I'll set up a branch containing the patch so that it can be pulled
into both the sched and blkcg trees.

This patchset is on top of v4.2-rc1 and also available in the
following git branch.

git://git.kernel.org/pub/scm/linux/kernel/git/tj/cgroup.git review-sched-unified-intf

diffstat follows, thanks.

Documentation/cgroups/unified-hierarchy.txt | 128 +++++++++++++++++++-
include/linux/cgroup.h | 9 +
kernel/sched/core.c | 173 +++++++++++++++++++++++++++-
kernel/sched/cpuacct.c | 57 ++++++---
kernel/sched/cpuacct.h | 5
5 files changed, 342 insertions(+), 30 deletions(-)

--
tejun


2015-08-03 22:42:21

by Tejun Heo

Subject: [PATCH 1/3] cgroup: define controller file conventions

Traditionally, each cgroup controller implemented whatever interface
it wanted, leading to interfaces which are widely inconsistent.
Examining the requirements of the controllers readily yields that
there are only a few control schemes shared among all of them.

Two major controllers already had to implement a new interface for the
unified hierarchy due to significant structural changes. Let's take
the chance to establish common conventions throughout all controllers.

This patch defines CGROUP_WEIGHT_MIN/DFL/MAX to be used on all
weight-based control knobs and documents the conventions that
controllers should follow on the unified hierarchy. Except for the
io.weight knob, all existing unified hierarchy knobs are already
compliant. A follow-up patch will update io.weight.

Signed-off-by: Tejun Heo <[email protected]>
Acked-by: Johannes Weiner <[email protected]>
Cc: Li Zefan <[email protected]>
---
Documentation/cgroups/unified-hierarchy.txt | 75 ++++++++++++++++++++++++++---
include/linux/cgroup.h | 9 ++++
2 files changed, 76 insertions(+), 8 deletions(-)

diff --git a/Documentation/cgroups/unified-hierarchy.txt b/Documentation/cgroups/unified-hierarchy.txt
index 86847a7..fc372b8 100644
--- a/Documentation/cgroups/unified-hierarchy.txt
+++ b/Documentation/cgroups/unified-hierarchy.txt
@@ -23,10 +23,13 @@ CONTENTS
5. Other Changes
5-1. [Un]populated Notification
5-2. Other Core Changes
- 5-3. Per-Controller Changes
- 5-3-1. blkio
- 5-3-2. cpuset
- 5-3-3. memory
+ 5-3. Controller file conventions
+ 5-3-1. Format
+ 5-3-2. Control knobs
+ 5-4. Per-Controller Changes
+ 5-4-1. blkio
+ 5-4-2. cpuset
+ 5-4-3. memory
6. Planned Changes
6-1. CAP for resource control

@@ -372,14 +375,70 @@ supported and the interface files "release_agent" and
- The "cgroup.clone_children" file is removed.


-5-3. Per-Controller Changes
+5-3. Controller file conventions

-5-3-1. blkio
+5-3-1. Format
+
+In general, all controller files should be in one of the following
+formats whenever possible.
+
+- Values only files
+
+ VAL0 VAL1...\n
+
+- Flat keyed files
+
+ KEY0 VAL0\n
+ KEY1 VAL1\n
+ ...
+
+- Nested keyed files
+
+ KEY0 SUB_KEY0=VAL00 SUB_KEY1=VAL01...
+ KEY1 SUB_KEY0=VAL10 SUB_KEY1=VAL11...
+ ...
+
+For a writeable file, the format for writing should generally match
+reading; however, controllers may allow omitting later fields or
+implement restricted shortcuts for most common use cases.
+
+For both flat and nested keyed files, only the values for a single key
+can be written at a time. For nested keyed files, the sub key pairs
+may be specified in any order and not all pairs have to be specified.
+
+
+5-3-2. Control knobs
+
+- Settings for a single feature should generally be implemented in a
+ single file.
+
+- In general, the root cgroup should be exempt from resource control
+ and thus shouldn't have resource control knobs.
+
+- If a controller implements ratio based resource distribution, the
+ control knob should be named "weight" and have the range [1, 10000]
+ and 100 should be the default value. The values are chosen to allow
+ enough and symmetric bias in both directions while keeping it
+ intuitive (the default is 100%).
+
+- If a controller implements an absolute resource limit, the control
+ knob should be named "max". The special token "max" should be used
+ to represent no limit for both reading and writing.
+
+- If a setting has configurable default value and specific overrides,
+ the default settings should be keyed with "default" and appear as
+ the first entry in the file. Specific entries can use "default" as
+ its value to indicate inheritance of the default value.
+
+
+5-4. Per-Controller Changes
+
+5-4-1. blkio

- blk-throttle becomes properly hierarchical.


-5-3-2. cpuset
+5-4-2. cpuset

- Tasks are kept in empty cpusets after hotplug and take on the masks
of the nearest non-empty ancestor, instead of being moved to it.
@@ -388,7 +447,7 @@ supported and the interface files "release_agent" and
masks of the nearest non-empty ancestor.


-5-3-3. memory
+5-4-3. memory

- use_hierarchy is on by default and the cgroup file for the flag is
not created.
diff --git a/include/linux/cgroup.h b/include/linux/cgroup.h
index a593e29..c6bf9d3 100644
--- a/include/linux/cgroup.h
+++ b/include/linux/cgroup.h
@@ -22,6 +22,15 @@

#ifdef CONFIG_CGROUPS

+/*
+ * All weight knobs on the default hierarhcy should use the following min,
+ * default and max values. The default value is the logarithmic center of
+ * MIN and MAX and allows 100x to be expressed in both directions.
+ */
+#define CGROUP_WEIGHT_MIN 1
+#define CGROUP_WEIGHT_DFL 100
+#define CGROUP_WEIGHT_MAX 10000
+
/* a css_task_iter should be treated as an opaque object */
struct css_task_iter {
struct cgroup_subsys *ss;
--
2.4.3
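
As a quick illustration of the flat keyed format codified above, here
is a minimal userspace sketch that reads such a file and picks out a
single key. The path and key name are examples only (they anticipate
the "cpu.stat" file added later in this series and assume a cgroup2
mount at /sys/fs/cgroup with a group named "test"); nothing here is
mandated by the patch.

/*
 * Sketch: read a flat keyed cgroup file ("KEY VAL\n" per line) and
 * return the value for a given key.  Illustrative only.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

static int read_flat_key(const char *path, const char *key, uint64_t *val)
{
	FILE *f = fopen(path, "r");
	char line[256];

	if (!f)
		return -1;
	while (fgets(line, sizeof(line), f)) {
		char k[64];
		unsigned long long v;

		if (sscanf(line, "%63s %llu", k, &v) == 2 && !strcmp(k, key)) {
			*val = v;
			fclose(f);
			return 0;
		}
	}
	fclose(f);
	return -1;
}

int main(void)
{
	uint64_t usec;

	/* hypothetical cgroup "test" under a cgroup2 mount */
	if (!read_flat_key("/sys/fs/cgroup/test/cpu.stat", "usage_usec", &usec))
		printf("usage_usec %llu\n", (unsigned long long)usec);
	return 0;
}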

2015-08-03 22:41:57

by Tejun Heo

Subject: [PATCH 2/3] sched: Misc preps for cgroup unified hierarchy interface

Make the following changes in preparation for the cpu controller
interface implementation for the unified hierarchy. This patch
doesn't cause any functional differences.

* s/cpu_stats_show()/cpu_cfs_stats_show()/

* s/cpu_files/cpu_legacy_files/

* Separate out cpuacct_stats_read() from cpuacct_stats_show(). While
at it, remove the now-pointless cpuacct_stat_desc[] array.

Signed-off-by: Tejun Heo <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Li Zefan <[email protected]>
Cc: Johannes Weiner <[email protected]>
---
kernel/sched/core.c | 8 ++++----
kernel/sched/cpuacct.c | 33 +++++++++++++++------------------
2 files changed, 19 insertions(+), 22 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 78b4bad10..6137037 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -8359,7 +8359,7 @@ static int __cfs_schedulable(struct task_group *tg, u64 period, u64 quota)
return ret;
}

-static int cpu_stats_show(struct seq_file *sf, void *v)
+static int cpu_cfs_stats_show(struct seq_file *sf, void *v)
{
struct task_group *tg = css_tg(seq_css(sf));
struct cfs_bandwidth *cfs_b = &tg->cfs_bandwidth;
@@ -8399,7 +8399,7 @@ static u64 cpu_rt_period_read_uint(struct cgroup_subsys_state *css,
}
#endif /* CONFIG_RT_GROUP_SCHED */

-static struct cftype cpu_files[] = {
+static struct cftype cpu_legacy_files[] = {
#ifdef CONFIG_FAIR_GROUP_SCHED
{
.name = "shares",
@@ -8420,7 +8420,7 @@ static struct cftype cpu_files[] = {
},
{
.name = "stat",
- .seq_show = cpu_stats_show,
+ .seq_show = cpu_cfs_stats_show,
},
#endif
#ifdef CONFIG_RT_GROUP_SCHED
@@ -8447,7 +8447,7 @@ struct cgroup_subsys cpu_cgrp_subsys = {
.can_attach = cpu_cgroup_can_attach,
.attach = cpu_cgroup_attach,
.exit = cpu_cgroup_exit,
- .legacy_cftypes = cpu_files,
+ .legacy_cftypes = cpu_legacy_files,
.early_init = 1,
};

diff --git a/kernel/sched/cpuacct.c b/kernel/sched/cpuacct.c
index dd7cbb5..42b2dd5 100644
--- a/kernel/sched/cpuacct.c
+++ b/kernel/sched/cpuacct.c
@@ -177,36 +177,33 @@ static int cpuacct_percpu_seq_show(struct seq_file *m, void *V)
return 0;
}

-static const char * const cpuacct_stat_desc[] = {
- [CPUACCT_STAT_USER] = "user",
- [CPUACCT_STAT_SYSTEM] = "system",
-};
-
-static int cpuacct_stats_show(struct seq_file *sf, void *v)
+static void cpuacct_stats_read(struct cpuacct *ca, u64 *userp, u64 *sysp)
{
- struct cpuacct *ca = css_ca(seq_css(sf));
int cpu;
- s64 val = 0;

+ *userp = 0;
for_each_online_cpu(cpu) {
struct kernel_cpustat *kcpustat = per_cpu_ptr(ca->cpustat, cpu);
- val += kcpustat->cpustat[CPUTIME_USER];
- val += kcpustat->cpustat[CPUTIME_NICE];
+ *userp += kcpustat->cpustat[CPUTIME_USER];
+ *userp += kcpustat->cpustat[CPUTIME_NICE];
}
- val = cputime64_to_clock_t(val);
- seq_printf(sf, "%s %lld\n", cpuacct_stat_desc[CPUACCT_STAT_USER], val);

- val = 0;
+ *sysp = 0;
for_each_online_cpu(cpu) {
struct kernel_cpustat *kcpustat = per_cpu_ptr(ca->cpustat, cpu);
- val += kcpustat->cpustat[CPUTIME_SYSTEM];
- val += kcpustat->cpustat[CPUTIME_IRQ];
- val += kcpustat->cpustat[CPUTIME_SOFTIRQ];
+ *sysp += kcpustat->cpustat[CPUTIME_SYSTEM];
+ *sysp += kcpustat->cpustat[CPUTIME_IRQ];
+ *sysp += kcpustat->cpustat[CPUTIME_SOFTIRQ];
}
+}

- val = cputime64_to_clock_t(val);
- seq_printf(sf, "%s %lld\n", cpuacct_stat_desc[CPUACCT_STAT_SYSTEM], val);
+static int cpuacct_stats_show(struct seq_file *sf, void *v)
+{
+ cputime64_t user, sys;

+ cpuacct_stats_read(css_ca(seq_css(sf)), &user, &sys);
+ seq_printf(sf, "user %lld\n", cputime64_to_clock_t(user));
+ seq_printf(sf, "system %lld\n", cputime64_to_clock_t(sys));
return 0;
}

--
2.4.3
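
For reference, the legacy cpuacct.stat file touched above reports
"user" and "system" in clock ticks (USER_HZ, typically 100 per
second), which is where the "centiseconds" complaint in the next patch
comes from. A minimal sketch of consuming it from userspace, assuming
an illustrative legacy mount at /sys/fs/cgroup/cpuacct and a group
named "test":

#include <stdio.h>
#include <unistd.h>

int main(void)
{
	long hz = sysconf(_SC_CLK_TCK);		/* usually 100 */
	unsigned long long user = 0, sys = 0;
	FILE *f = fopen("/sys/fs/cgroup/cpuacct/test/cpuacct.stat", "r");

	if (!f)
		return 1;
	/* flat keyed: "user <ticks>\nsystem <ticks>\n" */
	if (fscanf(f, "user %llu system %llu", &user, &sys) == 2)
		printf("user %.2fs system %.2fs\n",
		       (double)user / hz, (double)sys / hz);
	fclose(f);
	return 0;
}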

2015-08-03 22:41:39

by Tejun Heo

Subject: [PATCH 3/3] sched: Implement interface for cgroup unified hierarchy

While the cpu controller doesn't have any functional problems, there
are a couple of interface issues which can be addressed in the v2
interface.

* cpuacct being a separate controller. This separation is artificial
and rather pointless as demonstrated by most use cases co-mounting
the two controllers. It also forces certain information to be
accounted twice.

* Use of different time units. Writable control knobs use
microseconds, some stat fields use nanoseconds while other cpuacct
stat fields use centiseconds.

* Control knobs which can't be used in the root cgroup still show up
in the root.

* Control knob names and semantics aren't consistent with other
controllers.

This patch implements the cpu controller's interface on the unified
hierarchy, which adheres to the controller file conventions described
in Documentation/cgroups/unified-hierarchy.txt. Overall, the
following changes are made.

* cpuacct is implicitly enabled and disabled by cpu and its information
is reported through "cpu.stat" which now uses microseconds for all
time durations. All time duration fields now have "_usec" appended
to them for clarity. While this doesn't solve the double accounting
immediately, once the majority of users switch to v2, cpu can directly
account and report the relevant stats and cpuacct can be disabled on
the unified hierarchy.

Note that cpuacct.usage_percpu is currently not included in
"cpu.stat". If this information is actually called for, it can be
added later.

* "cpu.shares" is replaced with "cpu.weight" and operates on the
standard scale defined by CGROUP_WEIGHT_MIN/DFL/MAX (1, 100, 10000).
The weight is scaled to scheduler weight so that 100 maps to 1024
and the ratio relationship is preserved - if weight is W and its
scaled value is S, W / 100 == S / 1024. While the mapped range is a
bit smaller than the original scheduler weight range, the dead zones
on both sides are relatively small and cover a wider range than the
nice value mappings. This file doesn't make sense in the root
cgroup and isn't created on root.

* "cpu.cfs_quota_us" and "cpu.cfs_period_us" are replaced by "cpu.max"
which contains both quota and period.

* "cpu.rt_runtime_us" and "cpu.rt_period_us" are replaced by
"cpu.rt.max" which contains both runtime and period.

Signed-off-by: Tejun Heo <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Li Zefan <[email protected]>
Cc: Johannes Weiner <[email protected]>
---
Documentation/cgroups/unified-hierarchy.txt | 53 +++++++++
kernel/sched/core.c | 165 ++++++++++++++++++++++++++++
kernel/sched/cpuacct.c | 24 ++++
kernel/sched/cpuacct.h | 5 +
4 files changed, 247 insertions(+)

diff --git a/Documentation/cgroups/unified-hierarchy.txt b/Documentation/cgroups/unified-hierarchy.txt
index fc372b8..24c3e89 100644
--- a/Documentation/cgroups/unified-hierarchy.txt
+++ b/Documentation/cgroups/unified-hierarchy.txt
@@ -30,6 +30,7 @@ CONTENTS
5-4-1. blkio
5-4-2. cpuset
5-4-3. memory
+ 5-4-4. cpu, cpuacct
6. Planned Changes
6-1. CAP for resource control

@@ -532,6 +533,58 @@ may be specified in any order and not all pairs have to be specified.
memory.low, memory.high, and memory.max will use the string "max" to
indicate and set the highest possible value.

+5-4-4. cpu, cpuacct
+
+- cpuacct is no longer an independent controller. It's implicitly
+ enabled by cpu and its information is reported in cpu.stat.
+
+- All time durations, including all stats, are now in microseconds.
+
+- The interface is updated as follows.
+
+ cpu.stat
+
+ Currently reports the following six stats. All time stats are
+ in microseconds.
+
+ usage_usec
+ user_usec
+ system_usec
+ nr_periods
+ nr_throttled
+ throttled_usec
+
+ cpu.weight
+
+ The weight setting. The weight is between 1 and 10000 and
+ defaults to 100.
+
+ This file is available only on non-root cgroups.
+
+ cpu.max
+
+ The maximum bandwidth setting. It's in the following format.
+
+ $MAX $PERIOD
+
+ which indicates that the group may consume upto $MAX in each
+ $PERIOD duration. "max" for $MAX indicates no limit. If only
+ one number is written, $MAX is updated.
+
+ This file is available only on non-root cgroups.
+
+ cpu.rt.max
+
+ The maximum realtime runtime setting. It's in the following
+ format.
+
+ $MAX $PERIOD
+
+ which indicates that the group may consume upto $MAX in each
+ $PERIOD duration. If only one number is written, $MAX is
+ updated.
+
+
6. Planned Changes

6-1. CAP for resource control
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 6137037..0fb1dd7 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -8438,6 +8438,163 @@ static struct cftype cpu_legacy_files[] = {
{ } /* terminate */
};

+static int cpu_stats_show(struct seq_file *sf, void *v)
+{
+ cpuacct_cpu_stats_show(sf);
+
+#ifdef CONFIG_FAIR_GROUP_SCHED
+ {
+ struct task_group *tg = css_tg(seq_css(sf));
+ struct cfs_bandwidth *cfs_b = &tg->cfs_bandwidth;
+
+ seq_printf(sf, "nr_periods %d\n"
+ "nr_throttled %d\n"
+ "throttled_usec %llu\n",
+ cfs_b->nr_periods, cfs_b->nr_throttled,
+ cfs_b->throttled_time / NSEC_PER_USEC);
+ }
+#endif
+ return 0;
+}
+
+#ifdef CONFIG_FAIR_GROUP_SCHED
+static u64 cpu_weight_read_u64(struct cgroup_subsys_state *css,
+ struct cftype *cft)
+{
+ struct task_group *tg = css_tg(css);
+ u64 weight = scale_load_down(tg->shares);
+
+ return DIV_ROUND_CLOSEST_ULL(weight * CGROUP_WEIGHT_DFL, 1024);
+}
+
+static int cpu_weight_write_u64(struct cgroup_subsys_state *css,
+ struct cftype *cftype, u64 weight)
+{
+ /*
+ * cgroup weight knobs should use the common MIN, DFL and MAX
+ * values which are 1, 100 and 10000 respectively. While it loses
+ * a bit of range on both ends, it maps pretty well onto the shares
+ * value used by scheduler and the round-trip conversions preserve
+ * the original value over the entire range.
+ */
+ if (weight < CGROUP_WEIGHT_MIN || weight > CGROUP_WEIGHT_MAX)
+ return -ERANGE;
+
+ weight = DIV_ROUND_CLOSEST_ULL(weight * 1024, CGROUP_WEIGHT_DFL);
+
+ return sched_group_set_shares(css_tg(css), scale_load(weight));
+}
+#endif
+
+/* caller should put the current value in *@periodp before calling */
+static int __maybe_unused cpu_max_parse(char *buf, u64 *periodp, u64 *quotap)
+{
+ char tok[21]; /* U64_MAX */
+
+ if (!sscanf(buf, "%s %llu", tok, periodp))
+ return -EINVAL;
+
+ *periodp *= NSEC_PER_USEC;
+
+ if (sscanf(tok, "%llu", quotap))
+ *quotap *= NSEC_PER_USEC;
+ else if (!strcmp(tok, "max"))
+ *quotap = RUNTIME_INF;
+ else
+ return -EINVAL;
+
+ return 0;
+}
+
+static void __maybe_unused cpu_max_print(struct seq_file *sf, long period,
+ long quota)
+{
+ if (quota < 0)
+ seq_puts(sf, "max");
+ else
+ seq_printf(sf, "%ld", quota);
+
+ seq_printf(sf, " %ld\n", period);
+}
+
+#ifdef CONFIG_CFS_BANDWIDTH
+static int cpu_max_show(struct seq_file *sf, void *v)
+{
+ struct task_group *tg = css_tg(seq_css(sf));
+
+ cpu_max_print(sf, tg_get_cfs_period(tg), tg_get_cfs_quota(tg));
+ return 0;
+}
+
+static ssize_t cpu_max_write(struct kernfs_open_file *of,
+ char *buf, size_t nbytes, loff_t off)
+{
+ struct task_group *tg = css_tg(of_css(of));
+ u64 period = tg_get_cfs_period(tg);
+ u64 quota;
+ int ret;
+
+ ret = cpu_max_parse(buf, &period, &quota);
+ if (!ret)
+ ret = tg_set_cfs_bandwidth(tg, period, quota);
+ return ret ?: nbytes;
+}
+#endif
+#ifdef CONFIG_RT_GROUP_SCHED
+static int cpu_rt_max_show(struct seq_file *sf, void *v)
+{
+ struct task_group *tg = css_tg(seq_css(sf));
+
+ cpu_max_print(sf, sched_group_rt_period(tg), sched_group_rt_runtime(tg));
+ return 0;
+}
+
+static ssize_t cpu_rt_max_write(struct kernfs_open_file *of,
+ char *buf, size_t nbytes, loff_t off)
+{
+ struct task_group *tg = css_tg(of_css(of));
+ u64 period = sched_group_rt_period(tg);
+ u64 runtime;
+ int ret;
+
+ ret = cpu_max_parse(buf, &period, &runtime);
+ if (!ret)
+ ret = tg_set_rt_bandwidth(tg, period, runtime);
+ return ret ?: nbytes;
+}
+#endif
+
+static struct cftype cpu_files[] = {
+ {
+ .name = "stat",
+ .seq_show = cpu_stats_show,
+ },
+#ifdef CONFIG_FAIR_GROUP_SCHED
+ {
+ .name = "weight",
+ .flags = CFTYPE_NOT_ON_ROOT,
+ .read_u64 = cpu_weight_read_u64,
+ .write_u64 = cpu_weight_write_u64,
+ },
+#endif
+#ifdef CONFIG_CFS_BANDWIDTH
+ {
+ .name = "max",
+ .flags = CFTYPE_NOT_ON_ROOT,
+ .seq_show = cpu_max_show,
+ .write = cpu_max_write,
+ },
+#endif
+#ifdef CONFIG_RT_GROUP_SCHED
+ {
+ .name = "rt.max",
+ .seq_show = cpu_rt_max_show,
+ .write = cpu_rt_max_write,
+ },
+#endif
+ { } /* terminate */
+};
+
struct cgroup_subsys cpu_cgrp_subsys = {
.css_alloc = cpu_cgroup_css_alloc,
.css_free = cpu_cgroup_css_free,
@@ -8448,7 +8605,15 @@ struct cgroup_subsys cpu_cgrp_subsys = {
.attach = cpu_cgroup_attach,
.exit = cpu_cgroup_exit,
.legacy_cftypes = cpu_legacy_files,
+ .dfl_cftypes = cpu_files,
.early_init = 1,
+#ifdef CONFIG_CGROUP_CPUACCT
+ /*
+ * cpuacct is enabled together with cpu on the unified hierarchy
+ * and its stats are reported through "cpu.stat".
+ */
+ .depends_on = 1 << cpuacct_cgrp_id,
+#endif
};

#endif /* CONFIG_CGROUP_SCHED */
diff --git a/kernel/sched/cpuacct.c b/kernel/sched/cpuacct.c
index 42b2dd5..b4d32a6 100644
--- a/kernel/sched/cpuacct.c
+++ b/kernel/sched/cpuacct.c
@@ -224,6 +224,30 @@ static struct cftype files[] = {
{ } /* terminate */
};

+/* used to print cpuacct stats in cpu.stat on the unified hierarchy */
+void cpuacct_cpu_stats_show(struct seq_file *sf)
+{
+ struct cgroup_subsys_state *css;
+ u64 usage, user, sys;
+
+ css = cgroup_get_e_css(seq_css(sf)->cgroup, &cpuacct_cgrp_subsys);
+
+ usage = cpuusage_read(css, seq_cft(sf));
+ cpuacct_stats_read(css_ca(css), &user, &sys);
+
+ user *= TICK_NSEC;
+ sys *= TICK_NSEC;
+ do_div(usage, NSEC_PER_USEC);
+ do_div(user, NSEC_PER_USEC);
+ do_div(sys, NSEC_PER_USEC);
+
+ seq_printf(sf, "usage_usec %llu\n"
+ "user_usec %llu\n"
+ "system_usec %llu\n", usage, user, sys);
+
+ css_put(css);
+}
+
/*
* charge this task's execution time to its accounting group.
*
diff --git a/kernel/sched/cpuacct.h b/kernel/sched/cpuacct.h
index ed60562..44eace9 100644
--- a/kernel/sched/cpuacct.h
+++ b/kernel/sched/cpuacct.h
@@ -2,6 +2,7 @@

extern void cpuacct_charge(struct task_struct *tsk, u64 cputime);
extern void cpuacct_account_field(struct task_struct *p, int index, u64 val);
+extern void cpuacct_cpu_stats_show(struct seq_file *sf);

#else

@@ -14,4 +15,8 @@ cpuacct_account_field(struct task_struct *p, int index, u64 val)
{
}

+static inline void cpuacct_cpu_stats_show(struct seq_file *sf)
+{
+}
+
#endif
--
2.4.3
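
To make the new knobs concrete, here is a minimal userspace sketch
that configures them as described above. It assumes cgroup2 is
mounted at /sys/fs/cgroup and that a cgroup named "test" already
exists; both the mount point and the group name are illustrative.

#include <fcntl.h>
#include <string.h>
#include <unistd.h>

static int write_str(const char *path, const char *buf)
{
	int fd = open(path, O_WRONLY);
	int ret;

	if (fd < 0)
		return -1;
	ret = write(fd, buf, strlen(buf)) < 0 ? -1 : 0;
	close(fd);
	return ret;
}

int main(void)
{
	/* weight 200: twice the default share of 100 */
	write_str("/sys/fs/cgroup/test/cpu.weight", "200");

	/* at most 50ms of CPU time per 100ms period */
	write_str("/sys/fs/cgroup/test/cpu.max", "50000 100000");

	/* writing a single field updates only $MAX; the period is kept */
	write_str("/sys/fs/cgroup/test/cpu.max", "max");

	return 0;
}

The last write relies on the "If only one number is written, $MAX is
updated" rule documented above, so the previously configured period is
kept.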

2015-08-04 08:43:07

by Peter Zijlstra

Subject: Re: [PATCH 1/3] cgroup: define controller file conventions

On Mon, Aug 03, 2015 at 06:41:27PM -0400, Tejun Heo wrote:
> +- If a controller implements an absolute resource limit, the control
> + knob should be named "max". The special token "max" should be used
> + to represent no limit for both reading and writing.

So what do you do with minimal resource guarantees? That's still an
absolute resource limit and 'max' is obviously the wrong name.

2015-08-04 08:49:07

by Peter Zijlstra

Subject: Re: [PATCH 1/3] cgroup: define controller file conventions

On Mon, Aug 03, 2015 at 06:41:27PM -0400, Tejun Heo wrote:
>
> This patch defines CGROUP_WEIGHT_MIN/DFL/MAX to be used on all weight
> based control knobs and documents the conventions that controllers
> should follow on the unified hierarchy. Except for io.weight knob,
> all existing unified hierarchy knobs are already compliant. A
> follow-up patch will update io.weight.

> +- If a controller implements ratio based resource distribution, the
> + control knob should be named "weight" and have the range [1, 10000]
> + and 100 should be the default value. The values are chosen to allow
> + enough and symmetric bias in both directions while keeping it
> + intuitive (the default is 100%).

Aside from 100% being a sane 'default', what it actually is is a unit.
100% == 1.

So I would suggest naming the thing CGROUP_WEIGHT_UNIT := 100,

> +/*
> + * All weight knobs on the default hierarhcy should use the following min,
> + * default and max values. The default value is the logarithmic center of
> + * MIN and MAX and allows 100x to be expressed in both directions.
> + */
> +#define CGROUP_WEIGHT_MIN 1
> +#define CGROUP_WEIGHT_DFL 100
> +#define CGROUP_WEIGHT_MAX 10000

That said, I'm not entirely keen on having to change this.

2015-08-04 09:08:42

by Peter Zijlstra

Subject: Re: [PATCH 3/3] sched: Implement interface for cgroup unified hierarchy

On Mon, Aug 03, 2015 at 06:41:29PM -0400, Tejun Heo wrote:
> While the cpu controller doesn't have any functional problems, there
> are a couple interface issues which can be addressed in the v2
> interface.
>
> * cpuacct being a separate controller. This separation is artificial
> and rather pointless as demonstrated by most use cases co-mounting
> the two controllers. It also forces certain information to be
> accounted twice.
>
> * Use of different time units. Writable control knobs use
> microseconds, some stat fields use nanoseconds while other cpuacct
> stat fields use centiseconds.
>
> * Control knobs which can't be used in the root cgroup still show up
> in the root.
>
> * Control knob names and semantics aren't consistent with other
> controllers.

What about the unified hierarchy stuff cannot deal with per-task
controllers?

_That_ was the biggest problem from what I can remember, and I see no
proposed resolution for that here.

> This patchset implements cpu controller's interface on the unified
> hierarchy which adheres to the controller file conventions described
> in Documentation/cgroups/unified-hierarchy.txt. Overall, the
> following changes are made.
>
> * cpuacct is implictly enabled and disabled by cpu and its information
> is reported through "cpu.stat" which now uses microseconds for all
> time durations. All time duration fields now have "_usec" appended
> to them for clarity. While this doesn't solve the double accounting
> immediately, once majority of users switch to v2, cpu can directly
> account and report the relevant stats and cpuacct can be disabled on
> the unified hierarchy.
>
> Note that cpuacct.usage_percpu is currently not included in
> "cpu.stat". If this information is actually called for, it can be
> added later.

Since you're rev'ing the interface, can't we simply kill the old cpuacct
and implement the missing pieces in cpu directly ?

> * "cpu.shares" is replaced with "cpu.weight" and operates on the
> standard scale defined by CGROUP_WEIGHT_MIN/DFL/MAX (1, 100, 10000).
> The weight is scaled to scheduler weight so that 100 maps to 1024
> and the ratio relationship is preserved - if weight is W and its
> scaled value is S, W / 100 == S / 1024. While the mapped range is a
> bit smaller than the orignal scheduler weight range, the dead zones
> on both sides are relatively small and covers wider range than the
> nice value mappings. This file doesn't make sense in the root
> cgroup and isn't create on root.

Not too thrilled about this, but if people can live with the reduced
resolution then I suppose we can do.

> * "cpu.cfs_quota_us" and "cpu.cfs_period_us" are replaced by "cpu.max"
> which contains both quota and period.

This is indeed a maximum limit, however

> * "cpu.rt_runtime_us" and "cpu.rt_period_us" are replaced by
> "cpu.rt.max" which contains both runtime and period.

the RT thing is conceptually more of a minimum guarantee, than a
maximum, even though the current implementation is both, there are plans
to allow (controlled) relaxation of the maximum part.

Also, if you're going to rev the interface, there's more changes we
should make. I'll have to go dig them out.

2015-08-04 14:51:16

by Tejun Heo

Subject: Re: [PATCH 1/3] cgroup: define controller file conventions

Hello,

On Tue, Aug 04, 2015 at 10:42:57AM +0200, Peter Zijlstra wrote:
> On Mon, Aug 03, 2015 at 06:41:27PM -0400, Tejun Heo wrote:
> > +- If a controller implements an absolute resource limit, the control
> > + knob should be named "max". The special token "max" should be used
> > + to represent no limit for both reading and writing.
>
> So what do you do with minimal resource guarantees? That's still an
> absolute resource limit and 'max' is obviously the wrong name.

The whole spectrum is min, low, high, max, where min and max are the
absolute guarantee and upper limit, and low and high are the
best-effort ones. Will update the doc.

Thanks.

--
tejun

2015-08-04 14:53:59

by Tejun Heo

Subject: Re: [PATCH 1/3] cgroup: define controller file conventions

Hello, Peter.

On Tue, Aug 04, 2015 at 10:48:55AM +0200, Peter Zijlstra wrote:
> > +- If a controller implements ratio based resource distribution, the
> > + control knob should be named "weight" and have the range [1, 10000]
> > + and 100 should be the default value. The values are chosen to allow
> > + enough and symmetric bias in both directions while keeping it
> > + intuitive (the default is 100%).
>
> Aside from 100% being a sane 'default', what it actually is is a unit.
> 100% == 1.
>
> So I would suggest naming the thing CGROUP_WEIGHT_UNIT := 100,

It's a minor point either way but I think people would generally find
default more familiar.

> > +/*
> > + * All weight knobs on the default hierarhcy should use the following min,
> > + * default and max values. The default value is the logarithmic center of
> > + * MIN and MAX and allows 100x to be expressed in both directions.
> > + */
> > +#define CGROUP_WEIGHT_MIN 1
> > +#define CGROUP_WEIGHT_DFL 100
> > +#define CGROUP_WEIGHT_MAX 10000
>
> That said, I'm not entirely keen on having to change this.

Yeah, changing the scale is an icky thing to do, but I think the
benefits of unifying the scales across different controllers outweigh
the downsides here.

Thanks.

--
tejun

2015-08-04 15:10:24

by Tejun Heo

Subject: Re: [PATCH 3/3] sched: Implement interface for cgroup unified hierarchy

Hello, Peter.

On Tue, Aug 04, 2015 at 11:07:11AM +0200, Peter Zijlstra wrote:
> What about the unified hierarchy stuff cannot deal with per-task
> controllers?
>
> _That_ was the biggest problem from what I can remember, and I see no
> proposed resolution for that here.

I've been thinking about it and I'm now convinced that cgroups just is
the wrong interface to require each application to be programming
against. I wrote this in the CAT thread too but cgroups may be an
okay management / administration interface but is a horrible
programming interface to be used by individual applications.

For things which don't require hierarchy, the obvious thing to do is
implementing a usual syscall-like interface, be it a separate syscall,
a prctl command, an ioctl or whatever. For things which require
building a hierarchy of member threads, the right thing to do is
making it a part of the usual process hierarchy - this is *the*
hierarchy that applications are familiar with and have the facilities
to deal with, so we can, for example, add a clone or unshare flag
which puts the calling threads in a new child group and then let that
use the aforementioned syscall-like interface to configure whatever it
wants to configure. In the long term, this is *way* better than
letting individual applications fumble with cgroup hierarchy
delegation and pseudo filesystem access.

If hierarchical weight and/or bandwidth limiting for thread hierarchy
is absolutely necessary, doing this shouldn't be too difficult and I
suspect it wouldn't be all that different from autogroup.

> > * cpuacct is implictly enabled and disabled by cpu and its information
> > is reported through "cpu.stat" which now uses microseconds for all
> > time durations. All time duration fields now have "_usec" appended
> > to them for clarity. While this doesn't solve the double accounting
> > immediately, once majority of users switch to v2, cpu can directly
> > account and report the relevant stats and cpuacct can be disabled on
> > the unified hierarchy.
> >
> > Note that cpuacct.usage_percpu is currently not included in
> > "cpu.stat". If this information is actually called for, it can be
> > added later.
>
> Since you're rev'ing the interface, can't we simply kill the old cpuacct
> and implement the missing pieces in cpu directly ?

Yeah, that's the plan. For the transitional period, however, we'd have
a lot more usages where cpuacct is mounted in a legacy hierarchy, so I
didn't want to incur the overhead of duplicate accounting for those
cases, and the dependency mechanism is already there, making it
trivial.

> > * "cpu.cfs_quota_us" and "cpu.cfs_period_us" are replaced by "cpu.max"
> > which contains both quota and period.
>
> This is indeed a maximum limit, however
>
> > * "cpu.rt_runtime_us" and "cpu.rt_period_us" are replaced by
> > "cpu.rt.max" which contains both runtime and period.
>
> the RT thing is conceptually more of a minimum guarantee, than a
> maximum, even though the current implementation is both, there are plans
> to allow (controlled) relaxation of the maximum part.

Ah, I see. Yeah, then it should be cpu.rt.min. I'll just remove the
file until the relaxation part is determined.

> Also, if you're going to rev the interface, there's more changes we
> should make. I'll have to go dig them out.

Great, please let me know what you have on mind.

Thanks.

--
tejun

2015-08-04 19:31:07

by Tejun Heo

Subject: [PATCH v2 1/3] cgroup: define controller file conventions

From 6abc8ca19df0078de17dc38340db3002ed489ce7 Mon Sep 17 00:00:00 2001
From: Tejun Heo <[email protected]>
Date: Tue, 4 Aug 2015 15:20:55 -0400

Traditionally, each cgroup controller implemented whatever interface
it wanted, leading to interfaces which are widely inconsistent.
Examining the requirements of the controllers readily yields that
there are only a few control schemes shared among all of them.

Two major controllers already had to implement a new interface for the
unified hierarchy due to significant structural changes. Let's take
the chance to establish common conventions throughout all controllers.

This patch defines CGROUP_WEIGHT_MIN/DFL/MAX to be used on all
weight-based control knobs and documents the conventions that
controllers should follow on the unified hierarchy. Except for the
io.weight knob, all existing unified hierarchy knobs are already
compliant. A follow-up patch will update io.weight.

v2: Added descriptions of min, low and high knobs.

Signed-off-by: Tejun Heo <[email protected]>
Acked-by: Johannes Weiner <[email protected]>
Cc: Li Zefan <[email protected]>
Cc: Peter Zijlstra <[email protected]>
---
Hello,

Added low/high descriptions and applied to the following git branch.

git://git.kernel.org/pub/scm/linux/kernel/git/tj/cgroup.git for-4.3-unified-base

The branch currently only contains this patch and will stay stable so
that it can be pulled from. I kept the base weight as DFL for now.
If we decide to change it, I'll apply the change on top.

Thanks.

Documentation/cgroups/unified-hierarchy.txt | 80 ++++++++++++++++++++++++++---
include/linux/cgroup.h | 9 ++++
2 files changed, 81 insertions(+), 8 deletions(-)

diff --git a/Documentation/cgroups/unified-hierarchy.txt b/Documentation/cgroups/unified-hierarchy.txt
index 86847a7..1ee9caf 100644
--- a/Documentation/cgroups/unified-hierarchy.txt
+++ b/Documentation/cgroups/unified-hierarchy.txt
@@ -23,10 +23,13 @@ CONTENTS
5. Other Changes
5-1. [Un]populated Notification
5-2. Other Core Changes
- 5-3. Per-Controller Changes
- 5-3-1. blkio
- 5-3-2. cpuset
- 5-3-3. memory
+ 5-3. Controller File Conventions
+ 5-3-1. Format
+ 5-3-2. Control Knobs
+ 5-4. Per-Controller Changes
+ 5-4-1. blkio
+ 5-4-2. cpuset
+ 5-4-3. memory
6. Planned Changes
6-1. CAP for resource control

@@ -372,14 +375,75 @@ supported and the interface files "release_agent" and
- The "cgroup.clone_children" file is removed.


-5-3. Per-Controller Changes
+5-3. Controller File Conventions

-5-3-1. blkio
+5-3-1. Format
+
+In general, all controller files should be in one of the following
+formats whenever possible.
+
+- Values only files
+
+ VAL0 VAL1...\n
+
+- Flat keyed files
+
+ KEY0 VAL0\n
+ KEY1 VAL1\n
+ ...
+
+- Nested keyed files
+
+ KEY0 SUB_KEY0=VAL00 SUB_KEY1=VAL01...
+ KEY1 SUB_KEY0=VAL10 SUB_KEY1=VAL11...
+ ...
+
+For a writeable file, the format for writing should generally match
+reading; however, controllers may allow omitting later fields or
+implement restricted shortcuts for most common use cases.
+
+For both flat and nested keyed files, only the values for a single key
+can be written at a time. For nested keyed files, the sub key pairs
+may be specified in any order and not all pairs have to be specified.
+
+
+5-3-2. Control Knobs
+
+- Settings for a single feature should generally be implemented in a
+ single file.
+
+- In general, the root cgroup should be exempt from resource control
+ and thus shouldn't have resource control knobs.
+
+- If a controller implements ratio based resource distribution, the
+ control knob should be named "weight" and have the range [1, 10000]
+ and 100 should be the default value. The values are chosen to allow
+ enough and symmetric bias in both directions while keeping it
+ intuitive (the default is 100%).
+
+- If a controller implements an absolute resource guarantee and/or
+ limit, the control knobs should be named "min" and "max"
+ respectively. If a controller implements best effort resource
+ gurantee and/or limit, the control knobs should be named "low" and
+ "high" respectively.
+
+ In the above four control files, the special token "max" should be
+ used to represent upward infinity for both reading and writing.
+
+- If a setting has configurable default value and specific overrides,
+ the default settings should be keyed with "default" and appear as
+ the first entry in the file. Specific entries can use "default" as
+ its value to indicate inheritance of the default value.
+
+
+5-4. Per-Controller Changes
+
+5-4-1. blkio

- blk-throttle becomes properly hierarchical.


-5-3-2. cpuset
+5-4-2. cpuset

- Tasks are kept in empty cpusets after hotplug and take on the masks
of the nearest non-empty ancestor, instead of being moved to it.
@@ -388,7 +452,7 @@ supported and the interface files "release_agent" and
masks of the nearest non-empty ancestor.


-5-3-3. memory
+5-4-3. memory

- use_hierarchy is on by default and the cgroup file for the flag is
not created.
diff --git a/include/linux/cgroup.h b/include/linux/cgroup.h
index a593e29..c6bf9d3 100644
--- a/include/linux/cgroup.h
+++ b/include/linux/cgroup.h
@@ -22,6 +22,15 @@

#ifdef CONFIG_CGROUPS

+/*
+ * All weight knobs on the default hierarhcy should use the following min,
+ * default and max values. The default value is the logarithmic center of
+ * MIN and MAX and allows 100x to be expressed in both directions.
+ */
+#define CGROUP_WEIGHT_MIN 1
+#define CGROUP_WEIGHT_DFL 100
+#define CGROUP_WEIGHT_MAX 10000
+
/* a css_task_iter should be treated as an opaque object */
struct css_task_iter {
struct cgroup_subsys *ss;
--
2.4.3
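
As a small illustration of the "max" token convention added above
("max" represents upward infinity for both reading and writing), here
is a userspace parse/print sketch. Representing "no limit" as
UINT64_MAX internally is this example's own choice, not part of the
convention.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static uint64_t parse_limit(const char *buf)
{
	/* "max" means upward infinity per the convention */
	if (!strncmp(buf, "max", 3))
		return UINT64_MAX;
	return strtoull(buf, NULL, 10);
}

static void print_limit(uint64_t v)
{
	if (v == UINT64_MAX)
		puts("max");
	else
		printf("%" PRIu64 "\n", v);
}

int main(void)
{
	print_limit(parse_limit("max"));	/* prints "max" */
	print_limit(parse_limit("4194304"));	/* prints 4194304 */
	return 0;
}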

2015-08-04 19:32:45

by Tejun Heo

Subject: [PATCH v2 3/3] sched: Implement interface for cgroup unified hierarchy

From f85c07ea11a52068c45cdd5f5528ed7c842c833a Mon Sep 17 00:00:00 2001
From: Tejun Heo <[email protected]>
Date: Tue, 4 Aug 2015 15:24:08 -0400

While the cpu controller doesn't have any functional problems, there
are a couple of interface issues which can be addressed in the v2
interface.

* cpuacct being a separate controller. This separation is artificial
and rather pointless as demonstrated by most use cases co-mounting
the two controllers. It also forces certain information to be
accounted twice.

* Use of different time units. Writable control knobs use
microseconds, some stat fields use nanoseconds while other cpuacct
stat fields use centiseconds.

* Control knobs which can't be used in the root cgroup still show up
in the root.

* Control knob names and semantics aren't consistent with other
controllers.

This patch implements the cpu controller's interface on the unified
hierarchy, which adheres to the controller file conventions described
in Documentation/cgroups/unified-hierarchy.txt. Overall, the
following changes are made.

* cpuacct is implicitly enabled and disabled by cpu and its information
is reported through "cpu.stat" which now uses microseconds for all
time durations. All time duration fields now have "_usec" appended
to them for clarity. While this doesn't solve the double accounting
immediately, once the majority of users switch to v2, cpu can directly
account and report the relevant stats and cpuacct can be disabled on
the unified hierarchy.

Note that cpuacct.usage_percpu is currently not included in
"cpu.stat". If this information is actually called for, it can be
added later.

* "cpu.shares" is replaced with "cpu.weight" and operates on the
standard scale defined by CGROUP_WEIGHT_MIN/DFL/MAX (1, 100, 10000).
The weight is scaled to scheduler weight so that 100 maps to 1024
and the ratio relationship is preserved - if weight is W and its
scaled value is S, W / 100 == S / 1024. While the mapped range is a
bit smaller than the original scheduler weight range, the dead zones
on both sides are relatively small and cover a wider range than the
nice value mappings. This file doesn't make sense in the root
cgroup and isn't created on root.

* "cpu.cfs_quota_us" and "cpu.cfs_period_us" are replaced by "cpu.max"
which contains both quota and period.

* "cpu.rt_runtime_us" and "cpu.rt_period_us" are replaced by
"cpu.rt.max" which contains both runtime and period.

v2: cpu_stats_show() was incorrectly using CONFIG_FAIR_GROUP_SCHED for
CFS bandwidth stats and also using raw division for u64. Use
CONFIG_CFS_BANDWIDTH and do_div() instead.

The semantics of "cpu.rt.max" are not fully decided yet. Dropped
for now.

Signed-off-by: Tejun Heo <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Li Zefan <[email protected]>
Cc: Johannes Weiner <[email protected]>
---
Hello,

Fixed build issues for certain configs and removed cpu.rt.max for now.
The git branch has been updated accordingly.

git://git.kernel.org/pub/scm/linux/kernel/git/tj/cgroup.git review-sched-unified-intf

Thanks.

Documentation/cgroups/unified-hierarchy.txt | 53 +++++++++++
kernel/sched/core.c | 140 ++++++++++++++++++++++++++++
kernel/sched/cpuacct.c | 24 +++++
kernel/sched/cpuacct.h | 5 +
4 files changed, 222 insertions(+)

diff --git a/Documentation/cgroups/unified-hierarchy.txt b/Documentation/cgroups/unified-hierarchy.txt
index 1ee9caf..09b4a4e 100644
--- a/Documentation/cgroups/unified-hierarchy.txt
+++ b/Documentation/cgroups/unified-hierarchy.txt
@@ -30,6 +30,7 @@ CONTENTS
5-4-1. blkio
5-4-2. cpuset
5-4-3. memory
+ 5-4-4. cpu, cpuacct
6. Planned Changes
6-1. CAP for resource control

@@ -537,6 +538,58 @@ may be specified in any order and not all pairs have to be specified.
memory.low, memory.high, and memory.max will use the string "max" to
indicate and set the highest possible value.

+5-4-4. cpu, cpuacct
+
+- cpuacct is no longer an independent controller. It's implicitly
+ enabled by cpu and its information is reported in cpu.stat.
+
+- All time durations, including all stats, are now in microseconds.
+
+- The interface is updated as follows.
+
+ cpu.stat
+
+ Currently reports the following six stats. All time stats are
+ in microseconds.
+
+ usage_usec
+ user_usec
+ system_usec
+ nr_periods
+ nr_throttled
+ throttled_usec
+
+ cpu.weight
+
+ The weight setting. The weight is between 1 and 10000 and
+ defaults to 100.
+
+ This file is available only on non-root cgroups.
+
+ cpu.max
+
+ The maximum bandwidth setting. It's in the following format.
+
+ $MAX $PERIOD
+
+ which indicates that the group may consume upto $MAX in each
+ $PERIOD duration. "max" for $MAX indicates no limit. If only
+ one number is written, $MAX is updated.
+
+ This file is available only on non-root cgroups.
+
+ cpu.rt.max
+
+ The maximum realtime runtime setting. It's in the following
+ format.
+
+ $MAX $PERIOD
+
+ which indicates that the group may consume upto $MAX in each
+ $PERIOD duration. If only one number is written, $MAX is
+ updated.
+
+
6. Planned Changes

6-1. CAP for resource control
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 6137037..1e72cdd 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -8438,6 +8438,138 @@ static struct cftype cpu_legacy_files[] = {
{ } /* terminate */
};

+static int cpu_stats_show(struct seq_file *sf, void *v)
+{
+ cpuacct_cpu_stats_show(sf);
+
+#ifdef CONFIG_CFS_BANDWIDTH
+ {
+ struct task_group *tg = css_tg(seq_css(sf));
+ struct cfs_bandwidth *cfs_b = &tg->cfs_bandwidth;
+ u64 throttled_usec;
+
+ throttled_usec = cfs_b->throttled_time;
+ do_div(throttled_usec, NSEC_PER_USEC);
+
+ seq_printf(sf, "nr_periods %d\n"
+ "nr_throttled %d\n"
+ "throttled_usec %llu\n",
+ cfs_b->nr_periods, cfs_b->nr_throttled,
+ throttled_usec);
+ }
+#endif
+ return 0;
+}
+
+#ifdef CONFIG_FAIR_GROUP_SCHED
+static u64 cpu_weight_read_u64(struct cgroup_subsys_state *css,
+ struct cftype *cft)
+{
+ struct task_group *tg = css_tg(css);
+ u64 weight = scale_load_down(tg->shares);
+
+ return DIV_ROUND_CLOSEST_ULL(weight * CGROUP_WEIGHT_DFL, 1024);
+}
+
+static int cpu_weight_write_u64(struct cgroup_subsys_state *css,
+ struct cftype *cftype, u64 weight)
+{
+ /*
+ * cgroup weight knobs should use the common MIN, DFL and MAX
+ * values which are 1, 100 and 10000 respectively. While it loses
+ * a bit of range on both ends, it maps pretty well onto the shares
+ * value used by scheduler and the round-trip conversions preserve
+ * the original value over the entire range.
+ */
+ if (weight < CGROUP_WEIGHT_MIN || weight > CGROUP_WEIGHT_MAX)
+ return -ERANGE;
+
+ weight = DIV_ROUND_CLOSEST_ULL(weight * 1024, CGROUP_WEIGHT_DFL);
+
+ return sched_group_set_shares(css_tg(css), scale_load(weight));
+}
+#endif
+
+static void __maybe_unused cpu_period_quota_print(struct seq_file *sf,
+ long period, long quota)
+{
+ if (quota < 0)
+ seq_puts(sf, "max");
+ else
+ seq_printf(sf, "%ld", quota);
+
+ seq_printf(sf, " %ld\n", period);
+}
+
+/* caller should put the current value in *@periodp before calling */
+static int __maybe_unused cpu_period_quota_parse(char *buf,
+ u64 *periodp, u64 *quotap)
+{
+ char tok[21]; /* U64_MAX */
+
+ if (!sscanf(buf, "%s %llu", tok, periodp))
+ return -EINVAL;
+
+ *periodp *= NSEC_PER_USEC;
+
+ if (sscanf(tok, "%llu", quotap))
+ *quotap *= NSEC_PER_USEC;
+ else if (!strcmp(tok, "max"))
+ *quotap = RUNTIME_INF;
+ else
+ return -EINVAL;
+
+ return 0;
+}
+
+#ifdef CONFIG_CFS_BANDWIDTH
+static int cpu_max_show(struct seq_file *sf, void *v)
+{
+ struct task_group *tg = css_tg(seq_css(sf));
+
+ cpu_period_quota_print(sf, tg_get_cfs_period(tg), tg_get_cfs_quota(tg));
+ return 0;
+}
+
+static ssize_t cpu_max_write(struct kernfs_open_file *of,
+ char *buf, size_t nbytes, loff_t off)
+{
+ struct task_group *tg = css_tg(of_css(of));
+ u64 period = tg_get_cfs_period(tg);
+ u64 quota;
+ int ret;
+
+ ret = cpu_period_quota_parse(buf, &period, &quota);
+ if (!ret)
+ ret = tg_set_cfs_bandwidth(tg, period, quota);
+ return ret ?: nbytes;
+}
+#endif
+
+static struct cftype cpu_files[] = {
+ {
+ .name = "stat",
+ .seq_show = cpu_stats_show,
+ },
+#ifdef CONFIG_FAIR_GROUP_SCHED
+ {
+ .name = "weight",
+ .flags = CFTYPE_NOT_ON_ROOT,
+ .read_u64 = cpu_weight_read_u64,
+ .write_u64 = cpu_weight_write_u64,
+ },
+#endif
+#ifdef CONFIG_CFS_BANDWIDTH
+ {
+ .name = "max",
+ .flags = CFTYPE_NOT_ON_ROOT,
+ .seq_show = cpu_max_show,
+ .write = cpu_max_write,
+ },
+#endif
+ { } /* terminate */
+};
+
struct cgroup_subsys cpu_cgrp_subsys = {
.css_alloc = cpu_cgroup_css_alloc,
.css_free = cpu_cgroup_css_free,
@@ -8448,7 +8580,15 @@ struct cgroup_subsys cpu_cgrp_subsys = {
.attach = cpu_cgroup_attach,
.exit = cpu_cgroup_exit,
.legacy_cftypes = cpu_legacy_files,
+ .dfl_cftypes = cpu_files,
.early_init = 1,
+#ifdef CONFIG_CGROUP_CPUACCT
+ /*
+ * cpuacct is enabled together with cpu on the unified hierarchy
+ * and its stats are reported through "cpu.stat".
+ */
+ .depends_on = 1 << cpuacct_cgrp_id,
+#endif
};

#endif /* CONFIG_CGROUP_SCHED */
diff --git a/kernel/sched/cpuacct.c b/kernel/sched/cpuacct.c
index 42b2dd5..b4d32a6 100644
--- a/kernel/sched/cpuacct.c
+++ b/kernel/sched/cpuacct.c
@@ -224,6 +224,30 @@ static struct cftype files[] = {
{ } /* terminate */
};

+/* used to print cpuacct stats in cpu.stat on the unified hierarchy */
+void cpuacct_cpu_stats_show(struct seq_file *sf)
+{
+ struct cgroup_subsys_state *css;
+ u64 usage, user, sys;
+
+ css = cgroup_get_e_css(seq_css(sf)->cgroup, &cpuacct_cgrp_subsys);
+
+ usage = cpuusage_read(css, seq_cft(sf));
+ cpuacct_stats_read(css_ca(css), &user, &sys);
+
+ user *= TICK_NSEC;
+ sys *= TICK_NSEC;
+ do_div(usage, NSEC_PER_USEC);
+ do_div(user, NSEC_PER_USEC);
+ do_div(sys, NSEC_PER_USEC);
+
+ seq_printf(sf, "usage_usec %llu\n"
+ "user_usec %llu\n"
+ "system_usec %llu\n", usage, user, sys);
+
+ css_put(css);
+}
+
/*
* charge this task's execution time to its accounting group.
*
diff --git a/kernel/sched/cpuacct.h b/kernel/sched/cpuacct.h
index ed60562..44eace9 100644
--- a/kernel/sched/cpuacct.h
+++ b/kernel/sched/cpuacct.h
@@ -2,6 +2,7 @@

extern void cpuacct_charge(struct task_struct *tsk, u64 cputime);
extern void cpuacct_account_field(struct task_struct *p, int index, u64 val);
+extern void cpuacct_cpu_stats_show(struct seq_file *sf);

#else

@@ -14,4 +15,8 @@ cpuacct_account_field(struct task_struct *p, int index, u64 val)
{
}

+static inline void cpuacct_cpu_stats_show(struct seq_file *sf)
+{
+}
+
#endif
--
2.4.3
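
The commit message claims that the weight <-> shares round trip
preserves the original value over the whole [1, 10000] range. That is
easy to check in userspace; the sketch below mirrors the kernel's
DIV_ROUND_CLOSEST rounding and ignores the additional scale_load()
power-of-two scaling, which round-trips exactly anyway.

#include <stdio.h>

#define DIV_ROUND_CLOSEST(x, d)	(((x) + (d) / 2) / (d))

int main(void)
{
	unsigned long long w;

	for (w = 1; w <= 10000; w++) {
		unsigned long long shares = DIV_ROUND_CLOSEST(w * 1024, 100);
		unsigned long long back = DIV_ROUND_CLOSEST(shares * 100, 1024);

		if (back != w) {
			printf("mismatch: %llu -> %llu -> %llu\n",
			       w, shares, back);
			return 1;
		}
	}
	printf("round trip preserved for all weights in [1, 10000]\n");
	return 0;
}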

2015-08-05 00:50:14

by Kamezawa Hiroyuki

Subject: Re: [PATCH v2 1/3] cgroup: define controller file conventions

On 2015/08/05 4:31, Tejun Heo wrote:
> From 6abc8ca19df0078de17dc38340db3002ed489ce7 Mon Sep 17 00:00:00 2001
> From: Tejun Heo <[email protected]>
> Date: Tue, 4 Aug 2015 15:20:55 -0400
>
> Traditionally, each cgroup controller implemented whatever interface
> it wanted leading to interfaces which are widely inconsistent.
> Examining the requirements of the controllers readily yield that there
> are only a few control schemes shared among all.
>
> Two major controllers already had to implement new interface for the
> unified hierarchy due to significant structural changes. Let's take
> the chance to establish common conventions throughout all controllers.
>
> This patch defines CGROUP_WEIGHT_MIN/DFL/MAX to be used on all weight
> based control knobs and documents the conventions that controllers
> should follow on the unified hierarchy. Except for io.weight knob,
> all existing unified hierarchy knobs are already compliant. A
> follow-up patch will update io.weight.
>
> v2: Added descriptions of min, low and high knobs.
>
> Signed-off-by: Tejun Heo <[email protected]>
> Acked-by: Johannes Weiner <[email protected]>
> Cc: Li Zefan <[email protected]>
> Cc: Peter Zijlstra <[email protected]>
> ---
> Hello,
>
> Added low/high descriptions and applied to the following git branch.
>
> git://git.kernel.org/pub/scm/linux/kernel/git/tj/cgroup.git for-4.3-unified-base
>
> The branch currently only contains this patch and will stay stable so
> that it can be pulled from. I kept the base weight as DFL for now.
> If we decide to change it, I'll apply the change on top.
>
> Thanks.
>
> Documentation/cgroups/unified-hierarchy.txt | 80 ++++++++++++++++++++++++++---
> include/linux/cgroup.h | 9 ++++
> 2 files changed, 81 insertions(+), 8 deletions(-)
>
> diff --git a/Documentation/cgroups/unified-hierarchy.txt b/Documentation/cgroups/unified-hierarchy.txt
> index 86847a7..1ee9caf 100644
> --- a/Documentation/cgroups/unified-hierarchy.txt
> +++ b/Documentation/cgroups/unified-hierarchy.txt
> @@ -23,10 +23,13 @@ CONTENTS
> 5. Other Changes
> 5-1. [Un]populated Notification
> 5-2. Other Core Changes
> - 5-3. Per-Controller Changes
> - 5-3-1. blkio
> - 5-3-2. cpuset
> - 5-3-3. memory
> + 5-3. Controller File Conventions
> + 5-3-1. Format
> + 5-3-2. Control Knobs
> + 5-4. Per-Controller Changes
> + 5-4-1. blkio
> + 5-4-2. cpuset
> + 5-4-3. memory
> 6. Planned Changes
> 6-1. CAP for resource control
>
> @@ -372,14 +375,75 @@ supported and the interface files "release_agent" and
> - The "cgroup.clone_children" file is removed.
>
>
> -5-3. Per-Controller Changes
> +5-3. Controller File Conventions
>
> -5-3-1. blkio
> +5-3-1. Format
> +
> +In general, all controller files should be in one of the following
> +formats whenever possible.
> +
> +- Values only files
> +
> + VAL0 VAL1...\n
> +
> +- Flat keyed files
> +
> + KEY0 VAL0\n
> + KEY1 VAL1\n
> + ...
> +
> +- Nested keyed files
> +
> + KEY0 SUB_KEY0=VAL00 SUB_KEY1=VAL01...
> + KEY1 SUB_KEY0=VAL10 SUB_KEY1=VAL11...
> + ...
> +
> +For a writeable file, the format for writing should generally match
> +reading; however, controllers may allow omitting later fields or
> +implement restricted shortcuts for most common use cases.
> +
> +For both flat and nested keyed files, only the values for a single key
> +can be written at a time. For nested keyed files, the sub key pairs
> +may be specified in any order and not all pairs have to be specified.
> +
> +
> +5-3-2. Control Knobs
> +
> +- Settings for a single feature should generally be implemented in a
> + single file.
> +
> +- In general, the root cgroup should be exempt from resource control
> + and thus shouldn't have resource control knobs.
> +
> +- If a controller implements ratio based resource distribution, the
> + control knob should be named "weight" and have the range [1, 10000]
> + and 100 should be the default value. The values are chosen to allow
> + enough and symmetric bias in both directions while keeping it
> + intuitive (the default is 100%).
> +
> +- If a controller implements an absolute resource guarantee and/or
> + limit, the control knobs should be named "min" and "max"
> + respectively. If a controller implements best effort resource
> + gurantee and/or limit, the control knobs should be named "low" and
> + "high" respectively.
> +
> + In the above four control files, the special token "max" should be
> + used to represent upward infinity for both reading and writing.
> +
so, for the memory controller, we'll have

(in alphabetical order)
memory.failcnt
memory.force_empty (<= should this be removed ?)
memory.kmem.failcnt
memory.kmem.max
memory.kmem.max_usage
memory.kmem.slabinfo
memory.kmem.tcp.failcnt
memory.kmem.tcp.max
memory.kmem.tcp.max_usage
memory.kmem.tcp.usage
memory.kmem.usage
memory.max
memory.max_usage
memory.move_charge_at_immigrate
memory.numa_stat
memory.oom_control
memory.pressure_level
memory.high
memory.swappiness
memory.usage
memory.use_hierarchy (<= removed)

?
-Kame

2015-08-05 07:48:03

by Michal Hocko

Subject: Re: [PATCH v2 1/3] cgroup: define controller file conventions

On Wed 05-08-15 09:39:40, KAMEZAWA Hiroyuki wrote:
[...]
> so, for memory controller, we'll have

We currently have only current, low, high, max and events.
All other knobs are either deprecated or waiting for a usecase to emerge
before they get added.

> (in alphabet order)
> memory.failcnt
> memory.force_empty (<= should this be removed ?)
> memory.kmem.failcnt
> memory.kmem.max
> memory.kmem.max_usage
> memory.kmem.slabinfo
> memory.kmem.tcp.failcnt
> memory.kmem.tcp.max
> memory.kmem.tcp.max_usage
> memory.kmem.tcp.usage
> memory.kmem.usage
> memory.max
> memory.max_usage
> memory.move_charge_at_immigrate
> memory.numa_stat
> memory.oom_control
> memory.pressure_level
> memory.high
> memory.swapiness
> memory.usage
> memory.use_hierarchy (<= removed)

--
Michal Hocko
SUSE Labs

2015-08-05 09:10:48

by Peter Zijlstra

Subject: Re: [PATCH 3/3] sched: Implement interface for cgroup unified hierarchy

On Tue, Aug 04, 2015 at 11:10:17AM -0400, Tejun Heo wrote:
> Hello, Peter.
>
> On Tue, Aug 04, 2015 at 11:07:11AM +0200, Peter Zijlstra wrote:
> > What about the unified hierarchy stuff cannot deal with per-task
> > controllers?
> >
> > _That_ was the biggest problem from what I can remember, and I see no
> > proposed resolution for that here.
>
> I've been thinking about it and I'm now convinced that cgroups just is
> the wrong interface to require each application to be programming
> against.

But people are doing it. So you must give them something. You cannot
just tell them to go away.

So where are the people doing this in this discussion? Or are you
one-sidedly forcing things? IIRC Google was doing this.

The whole libvirt trainwreck also does this (the programming against
cgroups, not the per task thing afaik).

You also cannot mandate system-disease; not everybody will want to run
that monster. From what I understood last time, Google has no interest
whatsoever in using it.

> I wrote this in the CAT thread too but cgroups may be an
> okay management / administration interface but is a horrible
> programming interface to be used by individual applications.

Yeah, I need to catch up on that CAT thread, but the reality is, people
use it as a programming interface, whether you like it or not.

> For things which don't require hierarchy, the obvious thing to do is
> implementing a usual syscall-like interface be it a separate syscall,
> an prctl command, an ioctl or whatever.

And then you get /proc extensions to observe them, then people make
those /proc extensions writable, and before you know it you're back to
an equal or bigger mess than you started out with :-(

> For things which require
> building a hierarchy of member threads, the right thing to do is
> making it a part of the usual process hierarchy - this is *the*
> hierarchy that applications are familiar with and have the facilities
> to deal with, so we can, for example, add a clone or unshare flag
> which puts the calling threads in a new child group and then let that
> use the fore-mentioned syscall-like interface to configure whatever it
> wants to configure.

And then you get to add support to cgroups to migrate hierarchies, is
that complexity you're waiting for?

Not to mention that it's an unwieldy interface, because then you get
spawns spawning threads etc., seeing how it's impossible for the main
thread to create N tasks in one subgroup and another M tasks in another
subgroup.

Instead they get to spawn a thread A, with which they then need to
communicate to spawn a further N tasks, then spawn a thread B, and again
communicate for another M tasks.

That's a rather awkward change to how people usually spawn threads.

Also, what to do when a thread changes profile? I can imagine a
situation where a task accepts a connection and depending on the kind of
request it gets, gets placed into a certain sub-group.

But there's no migration facility, so you get to go hand the work
around, which is expensive.

If there would be a migration facility, you've just lost naming, so how
are you going to denote the subgroups?

> In the long term, this is *way* better than
> letting individual applications fumble with cgroup hierarchy
> delegation and pseudo filesystem access.

You're worried about the intersection between what a task does and what
the administrator does, and that's a valid worry. But I'm really not
convinced this is going to make it better.

We already have relative file ops (openat(), mkdirat(), unlinkat(),
etc.); can't we make sure they do the right thing in the face of a
process (hierarchy) getting migrated by the administrator?

That way, things at least _can_ work right, and I think being able to do
the right thing trumps not being able to make a mess -- people are
people, they'll always make a mess.

> If hierarchical weight and/or bandwidth limiting for thread hierarchy
> is absolutely necessary, doing this shouldn't be too difficult and I
> suspect it wouldn't be all that different from autogroup.

Autogroups are a bit icky and have the 'advantage' of not intersecting
with regular cgroups (much). The above has intricate intersection with
the cgroup stuff.

As said, your migrate process becomes a move hierarchy. You further get
more 'hidden' cgroups. /proc files that report what cgroup a task is in
will report a cgroup that's not actually present in the filesystem
(autogroups already does this, it confuses people). And as stated you
take away a lot of things that are now possible.


2015-08-05 14:31:41

by Tejun Heo

[permalink] [raw]
Subject: Re: [PATCH 3/3] sched: Implement interface for cgroup unified hierarchy

Hello,

On Wed, Aug 05, 2015 at 11:10:36AM +0200, Peter Zijlstra wrote:
> > I've been thinking about it and I'm now convinced that cgroups just is
> > the wrong interface to require each application to be programming
> > against.
>
> But people are doing it. So you must give them something. You cannot
> just tell them to go away.

Sure, more on specifics later, but, first of all, the transition to v2
is a gradual process. The new and old hierarchies can co-exist, so
nothing forces abrupt transitions. Also, we do want to start as
restricted as possible and then widen it gradually as necessary.

> So where are the people doing this in this discussion? Or are you
> one-sidedly forcing things? IIRC Google was doing this.

We've been having those discussions for years in person and on the
cgroup mailing list. IIRC, the Google case was for blkcg where they
have an IO proxy process which wants to issue IOs as different cgroups
depending on who the original issuer is. They created multiple
threads, put them in different cgroups and bounced the IOs to the
matching one; however, this is already pretty silly as they have to
bounce IOs to different threads. What makes a lot more sense here is
the ability to tag an IO as coming from a specific cgroup (or a
process's cgroup) and there was discussion of using an extra field in
the aio request to indicate this, which is a much better solution for
the problem, can also express different IO priorities and is pretty
easy to implement.
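
To make that concrete, a rough sketch of the idea (the tagged struct and
its cgroup_fd field are hypothetical; nothing like them exists in the aio
ABI today):

#include <string.h>
#include <linux/aio_abi.h>

/*
 * Hypothetical sketch only: the "cgroup_fd" tag does not exist in the
 * current ABI.  The point is that the proxy keeps issuing from its own
 * thread and tags each request with the cgroup it should be charged
 * to, instead of bouncing the IO to a thread living in that cgroup.
 */
struct tagged_iocb {
        struct iocb     cb;             /* the usual aio control block */
        int             cgroup_fd;      /* hypothetical: dirfd of the cgroup to charge */
};

static void prep_tagged_pread(struct tagged_iocb *t, int fd, void *buf,
                              size_t len, long long off, int cgroup_fd)
{
        memset(t, 0, sizeof(*t));
        t->cb.aio_lio_opcode = IOCB_CMD_PREAD;
        t->cb.aio_fildes = fd;
        t->cb.aio_buf = (unsigned long)buf;
        t->cb.aio_nbytes = len;
        t->cb.aio_offset = off;
        t->cgroup_fd = cgroup_fd;       /* the hypothetical per-IO tag */
}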

> The whole libvirt trainwreck also does this (the programming against
> cgroups, not the per task thing afaik).

AFAIK, libvirt is doing multiple backends anyway and as long as the
delegation rules are clear, libvirt managing its own subhierarchy is
not a problem. It's an administration software stack which requires
fairly close integration with the userland part of the operating system.

> You also cannot mandate system-disease, not everybody will want to run
> that monster. From what I understood last time, Google has no interest
> what so ever of using it.

But what would require tight coupling of individual applications and
something like systemd is the kernel failing to set up a reasonable
boundary between management and application interfaces. If the kernel
provides a usable API for individual applications to use, they'll
program against it and the management part can be whatever. If we
fail to do that, individual applications will have to talk to an
external agent to coordinate access to the management interface and
that's what'll end up creating a hard dependency on specific system
agents from applications like apache or mysql or whatever. We really
don't want that. The kernel *NEEDS* to clearly distinguish those two
to prevent that from happening.

> > I wrote this in the CAT thread too but cgroups may be an
> > okay management / administration interface but is a horrible
> > programming interface to be used by individual applications.
>
> Yeah, I need to catch up on that CAT thread, but the reality is, people
> use it as a programming interface, whether you like it or not.

And that's one of the major fuck ups on cgroup's part that must be
rectified. Look at the interface being proposed there. It's exposing
direct hardware details w/o much abstraction which is fine for a
system management interface but at the same time it's intended to be
exposed to individual applications. This lack of distinction makes
people skip the attention that they should be paying when they're
designing an interface exposed to individual programs. Worse, this
makes these things fly under the review scrutiny that a public API
accessible to applications usually receives. Yet, that's what these
things end up being. This just has to stop. cgroups can't continue
to be this ghetto shortcut to implementing half-assed APIs.

> > For things which don't require hierarchy, the obvious thing to do is
> > implementing a usual syscall-like interface be it a separate syscall,
> > an prctl command, an ioctl or whatever.
>
> And then you get /proc extensions to observe them, then people make
> those /proc extensions writable and before you know it you've got an
> equal or bigger mess back than you started out with :-(

What we should be doing is pushing them into the same arena as any
other publicly accessible API. I don't think there can be a shortcut
to this.

> > For things which require
> > building a hierarchy of member threads, the right thing to do is
> > making it a part of the usual process hierarchy - this is *the*
> > hierarchy that applications are familiar with and have the facilities
> > to deal with, so we can, for example, add a clone or unshare flag
> > which puts the calling threads in a new child group and then let that
> > use the fore-mentioned syscall-like interface to configure whatever it
> > wants to configure.
>
> And then you get to add support to cgroups to migrate hierarchies, is
> that complexity you're waiting for?

Absolutely, if it comes to that, that's what we should do. The only
other option is spilling and getting locked into a half-baked interface
to applications which not only harms userland but also the kernel.

> Not to mention that it's an unwieldy interface because then you get
> threads spawning threads etc., seeing how it's impossible for the main
> thread to create N tasks in one subgroup and another M tasks in another
> subgroup.
>
> Instead they get to spawn a thread A, with which they then need to
> communicate to spawn a further N tasks, then spawn a thread B, and again
> communicate for another M tasks.
>
> That's a rather awkward change to how people usually spawn threads.

It is within the usual purview of how userland deals with hierarchies
of processes / threads and I don't think it's necessarily bad and, more
importantly, I don't think the use case or the perceived awkwardness
justifies introducing a wholly new mechanism.

> Also, what to do when a thread changes profile? I can imagine a
> situation where a task accepts a connection and depending on the kind of
> request it gets, gets placed into a certain sub-group.

Migration is a very expensive operation. The obvious thing to do for
such cases is having pools of workers for different profiles. Also,
as mentioned before, for more specific cases like IO, it makes a lot
more sense to override things per operation rather than moving threads
around.
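
A rough sketch of that pattern (not a complete program; the pool threads
are assumed to have been put in their cgroups once at startup):

#include <pthread.h>
#include <stddef.h>

struct work {
        struct work     *next;
        void           (*fn)(void *arg);
        void            *arg;
};

/* One pool per profile; lock/more are initialized when the pool is set up. */
struct pool {
        pthread_mutex_t  lock;
        pthread_cond_t   more;
        struct work     *head;          /* LIFO list of pending work */
};

/* Called from the accepting thread: a cheap hand-off, no cgroup migration. */
static void dispatch(struct pool *p, struct work *w)
{
        pthread_mutex_lock(&p->lock);
        w->next = p->head;
        p->head = w;
        pthread_cond_signal(&p->more);
        pthread_mutex_unlock(&p->lock);
}

/* Each worker joined its pool's cgroup when it was created and then just
 * consumes work, so the per-request "profile switch" costs one hand-off. */
static void *worker(void *data)
{
        struct pool *p = data;

        for (;;) {
                struct work *w;

                pthread_mutex_lock(&p->lock);
                while (!p->head)
                        pthread_cond_wait(&p->more, &p->lock);
                w = p->head;
                p->head = w->next;
                pthread_mutex_unlock(&p->lock);

                w->fn(w->arg);
        }
        return NULL;
}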

> But there's no migration facility, so you get to go hand the work
> around, which is expensive.

That's a lot cheaper than migrating.

> If there would be a migration facility, you've just lost naming, so how
> are you going to denote the subgroups?

I don't think we want migration in the sub-process hierarchy but on the
off chance we do, the naming can follow the same pid/process
group/session id scheme, which, again, is a lot easier to deal with
from applications.

> > In the long term, this is *way* better than
> > letting individual applications fumble with cgroup hierarchy
> > delegation and pseudo filesystem access.
>
> You're worried about the intersection between what a task does and what
> the administrator does, and that's a valid worry. But I'm really not
> convinced this is going to make it better.
>
> We already have relative file ops (openat(), mkdirat(), unlinkat()
> etc..) can't we make sure they do the right thing in the face of a
> process (hierarchy) getting migrated by the administrator.

But those are relative to the current directory per operation and
there's no way to define a transaction across multiple file
operations. There's no way to prevent a process from being migrated
in between an openat() and a subsequent write().
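
To make the window concrete, a minimal sketch (the cgroup path and knob
are illustrative):

#include <fcntl.h>
#include <unistd.h>

/* Sketch of an application poking its own (delegated) cgroup knob.
 * Nothing stops an administrator from migrating the process between
 * the openat() and the write(), at which point the write still lands
 * in the old group - there is no way to make the sequence atomic. */
static int set_own_knob(const char *val, size_t len)
{
        int dfd, fd;
        int ret = -1;

        dfd = open("/sys/fs/cgroup/myapp", O_DIRECTORY | O_RDONLY);
        if (dfd < 0)
                return -1;

        fd = openat(dfd, "memory.high", O_WRONLY);
        if (fd < 0)
                goto out_dir;

        /* <-- an external migration can happen right here */

        if (write(fd, val, len) == (ssize_t)len)
                ret = 0;

        close(fd);
out_dir:
        close(dfd);
        return ret;
}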

> That way, things at least _can_ work right, and I think being able to do
> the right thing trumps not being able to make a mess -- people are
> people, they'll always make a mess.

It can't, at least not in the usual manner that file system operations
are defined. This is an interface which requires central coordination
(even for delegation) and a horrible one to expose to individual
applications.

> > If hierarchical weight and/or bandwidth limiting for thread hierarchy
> > is absolutely necessary, doing this shouldn't be too difficult and I
> > suspect it wouldn't be all that different from autogroup.
>
> Autogroups are a bit icky and have the 'advantage' of not intersecting
> with regular cgroups (much). The above has intricate intersection with
> the cgroup stuff.
>
> As said, your migrate process becomes a move hierarchy. You further get
> more 'hidden' cgroups. /proc files that report what cgroup a task is in
> will report a cgroup that's not actually present in the filesystem
> (autogroups already does this, it confuses people). And as stated you
> take away a lot of things that are now possible.

I don't think it's a lot that per-process is gonna take away.
Per-thread use cases are pretty niche to begin with and most can and
should be implemented better using a more fitting mechanism. As for
having to deal with more complexity in the cgroup core, that's fine.
If it comes to that, we'll have to bite the bullet and do it. Sure, we
want to be simpler but not at the cost of messing up the userland API,
and please note that what we lost with cgroups is this tension.

This tension between the difficulty and complexity of implementing
something which can be used by applications and the necessity or
desirability of the proposed use cases is crucial in steering kernel
development and the APIs it exposes. Abusing cgroups like we've been
doing bypasses that tension and we of course end up locked into
extremely crappy interfaces and mechanisms which could never be
justified in the first place. It is about time we stopped this
disaster train.

Thanks.

--
tejun

2015-08-06 02:30:45

by Kamezawa Hiroyuki

[permalink] [raw]
Subject: Re: [PATCH v2 1/3] cgroup: define controller file conventions

On 2015/08/05 16:47, Michal Hocko wrote:
> On Wed 05-08-15 09:39:40, KAMEZAWA Hiroyuki wrote:
> [...]
>> so, for memory controller, we'll have
>
> We currently have only current, low, high, max and events.
> All other knobs are either deprecated or waiting for a usecase to emerge
> before they get added.
>

Sure. I think the following have users.
- *.stat - for checking the health of a cgroup, or for debugging
- *.pressure_level - for notifying memory pressure
- *.swappiness - for adjusting LRU activity per application type.
- *.oom_control - for surviving/notifying out of memory
     memcg's OOM can be recovered from if the limit goes up rather than killing.
     But I know people say this knob is not useful. This will require
     discussion.

Hm. If we don't want to increase the number of files, NETLINK or a syscall is
another choice for a subsystem-specific interface?

-Kame

>> (in alphabet order)
>> memory.failcnt
>> memory.force_empty (<= should this be removed ?)
>> memory.kmem.failcnt
>> memory.kmem.max
>> memory.kmem.max_usage
>> memory.kmem.slabinfo
>> memory.kmem.tcp.failcnt
>> memory.kmem.tcp.max
>> memory.kmem.tcp.max_usage
>> memory.kmem.tcp.usage
>> memory.kmem.usage
>> memory.max
>> memory.max_usage
>> memory.move_charge_at_immigrate
>> memory.numa_stat
>> memory.oom_control
>> memory.pressure_level
>> memory.high
>> memory.swappiness
>> memory.usage
>> memory.use_hierarchy (<= removed)
>

2015-08-07 18:17:29

by Michal Hocko

[permalink] [raw]
Subject: Re: [PATCH v2 1/3] cgroup: define controller file conventions

On Thu 06-08-15 11:30:08, KAMEZAWA Hiroyuki wrote:
[...]
> Sure. I think the following have users.
> - *.stat - for checking the health of a cgroup, or for debugging

Yes but we want to have something which is closer to meminfo/vmstat IMO

> - *.pressure_level - for notifying memory pressure

Notifications are definitely useful; I am just not sure this interface is
the right one. We have seen some requests to adjust the interface to get
new semantics (edge vs. level triggered). This should be sorted out
before we expose the knob.

> - *.swappiness - for adjusting LRU activity per application type.

Yes, and I wanted to post a patch to export it several times but then I
realized that this should be done only as long as vm.swappiness stays
and is not deprecated. The more I think about swappiness, the less sure
I am about its usefulness. It has not been doing much for quite some
time because we are heavily biasing towards pagecache reclaim and the
knob is more and more misleading. It is also not offering what people
might want it to do. E.g. it doesn't allow for preferring swapout, which
might be useful when the swap is backed by really fast storage. Maybe
we will need a new metric here, so I wouldn't rush exporting a memcg
alternative much.

> - *.oom_control - for surviving/notifying out of memory
> memcg's OOM can be recovered from if the limit goes up rather than killing.

I think it is very much useful - when used wisely. I have seen many
calls for user-defined OOM policies but then we have seen those that are
more creative, like having the policy maker live in the same memcg, which
requires some hacks to prevent self-deadlocks.
So overall this is very attractive but we might need to think about a
better interface. BPF sounds like a potential way to go. I feel the
memcg and the global approaches should be as consistent as possible
wrt. the API.
--
Michal Hocko
SUSE Labs

2015-08-17 20:35:31

by Tejun Heo

[permalink] [raw]
Subject: Re: [PATCH 3/3] sched: Implement interface for cgroup unified hierarchy

Hello, Peter.

Do we have an agreement on the sched changes?

Thanks a lot.

--
tejun

2015-08-17 21:34:43

by Johannes Weiner

[permalink] [raw]
Subject: Re: [PATCH v2 1/3] cgroup: define controller file conventions

On Tue, Aug 04, 2015 at 03:31:01PM -0400, Tejun Heo wrote:
> From 6abc8ca19df0078de17dc38340db3002ed489ce7 Mon Sep 17 00:00:00 2001
> From: Tejun Heo <[email protected]>
> Date: Tue, 4 Aug 2015 15:20:55 -0400
>
> Traditionally, each cgroup controller implemented whatever interface
> it wanted leading to interfaces which are widely inconsistent.
> Examining the requirements of the controllers readily yield that there
> are only a few control schemes shared among all.
>
> Two major controllers already had to implement new interface for the
> unified hierarchy due to significant structural changes. Let's take
> the chance to establish common conventions throughout all controllers.
>
> This patch defines CGROUP_WEIGHT_MIN/DFL/MAX to be used on all weight
> based control knobs and documents the conventions that controllers
> should follow on the unified hierarchy. Except for io.weight knob,
> all existing unified hierarchy knobs are already compliant. A
> follow-up patch will update io.weight.
>
> v2: Added descriptions of min, low and high knobs.
>
> Signed-off-by: Tejun Heo <[email protected]>
> Acked-by: Johannes Weiner <[email protected]>
> Cc: Li Zefan <[email protected]>
> Cc: Peter Zijlstra <[email protected]>

Acked-by: Johannes Weiner <[email protected]>

2015-08-17 22:05:35

by Johannes Weiner

[permalink] [raw]
Subject: Re: [PATCH v2 1/3] cgroup: define controller file conventions

On Fri, Aug 07, 2015 at 08:17:23PM +0200, Michal Hocko wrote:
> On Thu 06-08-15 11:30:08, KAMEZAWA Hiroyuki wrote:
> > - *.oom_control - for surviving/notifying out of memory
> > memcg's OOM can be recovered from if the limit goes up rather than killing.
>
> I think it is very much useful - when used wisely. I have seen many
> calls for user defined OOM policies but then we have seen those that are
> more creative like having the policy maker live in the same memcg which
> requires some hacks to prevent from self-deadlocks.
> So overall this is very attractive but we might need to think about a
> better interface. BPF sounds like a potential way to go. I feel the
> memcg and the global approaches should be consistent as much as possible
> wrt. API.

I'm not sure I still see a usecase for this.

The whole idea behind memory.high is to give the user the chance to
monitor the group's health and then act upon that. You can freeze the
group if you must, gather information, kill tasks. This is the way to
implement a custom OOM policy.
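
A minimal sketch of that pattern (the path is illustrative; it assumes
the key/value format of the v2 memory.events file):

#include <stdio.h>
#include <string.h>

/* Returns how many times the group has hit its high boundary so far,
 * or -1 on error.  A monitoring daemon can poll this (or wait for a
 * notification) and then freeze, inspect or kill as it sees fit. */
static long high_events(const char *cgroup_dir)
{
        char path[256], key[32];
        long val, high = -1;
        FILE *f;

        snprintf(path, sizeof(path), "%s/memory.events", cgroup_dir);
        f = fopen(path, "r");
        if (!f)
                return -1;
        while (fscanf(f, "%31s %ld", key, &val) == 2)
                if (!strcmp(key, "high"))
                        high = val;
        fclose(f);
        return high;
}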

memory.max on the other hand tells the *kernel* when to OOM, with all
the implications that a kernel OOM has. Don't configure that when you
don't want your tasks killed.

2015-08-18 04:04:04

by Paul Turner

[permalink] [raw]
Subject: Re: [PATCH 3/3] sched: Implement interface for cgroup unified hierarchy

Apologies for the repeat. Gmail ate its plain text setting for some
reason. Shame bells.

On Mon, Aug 17, 2015 at 9:02 PM, Paul Turner <[email protected]> wrote:
>
>
> On Wed, Aug 5, 2015 at 7:31 AM, Tejun Heo <[email protected]> wrote:
>>
>> Hello,
>>
>> On Wed, Aug 05, 2015 at 11:10:36AM +0200, Peter Zijlstra wrote:
>> > > I've been thinking about it and I'm now convinced that cgroups just is
>> > > the wrong interface to require each application to be programming
>> > > against.
>> >
>> > But people are doing it. So you must give them something. You cannot
>> > just tell them to go away.
>>
>> Sure, more on specifics later, but, first of all, the transition to v2
>> is a gradual process. The new and old hierarchies can co-exist, so
>> nothing forces abrupt transitions. Also, we do want to start as
>> restricted as possible and then widen it gradually as necessary.
>>
>> > So where are the people doing this in this discussion? Or are you
>> > one-sidedly forcing things? IIRC Google was doing this.
>>
>> We've been having those discussions for years in person and on the
>> cgroup mailing list. IIRC, the google case was for blkcg where they
>> have an IO proxy process which wanna issue IOs as different cgroups
>> depending on who's the original issuer. They created multiple
>> threads, put them in different cgroups and bounce the IOs to the
>> matching one; however, this is already pretty silly as they have to
>> bounce IOs to different threads. What makes a lot more sense here is
>> the ability to tag an IO as coming from a specific cgroup (or a
>> process's cgroup) and there was discussion of using an extra field in
>> aio request to indicate this, which is an a lot better solution for
>> the problem, can also express different IO priority and pretty easy to
>> implement.
>>
>
> So we have two major types of use that are relevant to this interface:
>
> 1) Proxy agents. When control systems want to perform work on behalf of a
> container, they will sometimes move the acting thread into the relevant
> control groups so that it can be accounted on that container's behalf.
> [This is more relevant for non-persistent resources such as CPU time or I/O
> priorities than charges that will outlive the work such as memory
> allocations.]
>
> I agree (1) is at best a bit of a hack and can be worked around on the type
> of time-frame these interfaces move at.
>
> 2) Control within an address-space. For subsystems with fungible resources,
> e.g. CPU, it can be useful for an address space to partition its own
> threads. Losing the capability to do this against the CPU controller would
> be a large set-back for instance. Occasionally, it is useful to share these
> groupings between address spaces when processes are cooperative, but this is
> less of a requirement.
>
> This is important to us.
>
>
>> > The whole libvirt trainwreck also does this (the programming against
>> > cgroups, not the per task thing afaik).
>>
>> AFAIK, libvirt is doing multiple backends anyway and as long as the
>> delegation rules are clear, libvirt managing its own subhierarchy is
>> not a problem. It's an administration software stack which requires
>> fairly close integration with the userland part of operating system.
>>
>> > You also cannot mandate system-disease, not everybody will want to run
>> > that monster. From what I understood last time, Google has no interest
>> > what so ever of using it.
>>
>> But what would require tight coupling of individual applications and
>> something like systemd is the kernel failing to set up a reasonable
>> boundary between management and application interfaces. If the kernel
>> provides a useable API for individual applications to use, they'll
>> program against it and the management part can be whatever. If we
>> fail to do that, individual applications will have to talk to external
>> agent to coordinate access to management interface
>
>
> It's notable here that for a managed system, the agent coordinating access
> *must* be external.
>
>>
>> and that's what'll
>> end up creating hard dependency on specific system agents from
>> applications like apache or mysql or whatever. We really don't want
>> that. The kernel *NEEDS* to clearly distinguish those two to prevent
>> that from happening.
>>
>> > > I wrote this in the CAT thread too but cgroups may be an
>> > > okay management / administration interface but is a horrible
>> > > programming interface to be used by individual applications.
>> >
>> > Yeah, I need to catch up on that CAT thread, but the reality is, people
>> > use it as a programming interface, whether you like it or not.
>>
>> And that's one of the major fuck ups on cgroup's part that must be
>> rectified. Look at the interface being proposed there. It's exposing
>> direct hardware details w/o much abstraction which is fine for a
>> system management interface but at the same time it's intended to be
>> exposed to individual applications.
>
>
> FWIW this is something we've had no significant problems managing with
> separate mount points and file system protections. Yes, there are some
> potential warts around atomicity; but we've not found them too onerous.
>
> What I don't quite follow here is the assumption that CAT would
> necessarily be exposed to individual applications. What's wrong with
> subsystems that are primarily intended only for system management
> agents? We already have several of these.
>
>
>>
>> This lack of distinction makes
>> people skip the attention that they should be paying when they're
>> designing interface exposed to individual programs. Worse, this makes
>> these things fly under the review scrutiny that public API accessible
>> to applications usually receives. Yet, that's what these things end
>> up to be. This just has to stop. cgroups can't continue to be this
>> ghetto shortcut to implementing half-assed APIs.
>
>
> I certainly don't disagree on this point :). But as above, I don't quite
> follow why an API being in cgroups must mean it's accessible to an
> application controlled by that group. This has certainly not been a
> requirement for our use.
>
>>
>>
>> > > For things which don't require hierarchy, the obvious thing to do is
>> > > implementing a usual syscall-like interface be it a separate syscall,
>> > > an prctl command, an ioctl or whatever.
>> >
>> > And then you get /proc extensions to observe them, then people make
>> > those /proc extensions writable and before you know it you've got an
>> > equal or bigger mess back than you started out with :-(
>>
>> What we should be doing is pushing them into the same arena as any
>> other publicly accessible API. I don't think there can be a shortcut
>> to this.
>>
>
> Are you explicitly opposed to non-hierarchical partitions, however? Cpuset
> is [typically] an example of this, where the interface wants to control
> unified properties across a set of processes. Without necessarily being
> usefully hierarchical. (This is just to understand your core position, I'm
> not proposing cpuset should shape *anything*.)
>
>>
>> > > For things which require
>> > > building a hierarchy of member threads, the right thing to do is
>> > > making it a part of the usual process hierarchy - this is *the*
>> > > hierarchy that applications are familiar with and have the facilities
>> > > to deal with, so we can, for example, add a clone or unshare flag
>> > > which puts the calling threads in a new child group and then let that
>> > > use the fore-mentioned syscall-like interface to configure whatever it
>> > > wants to configure.
>> >
>> > And then you get to add support to cgroups to migrate hierarchies, is
>> > that complexity you're waiting for?
>>
>> Absolutely, if it comes to that, that's what we should do. The only
>> other option is spilling and getting locked into half-baked interface
>> to applications which not only harm userland but also kernel.
>>
>> > Not to mention that it's an unwieldy interface because then you get
>> > threads spawning threads etc., seeing how it's impossible for the main
>> > thread to create N tasks in one subgroup and another M tasks in another
>> > subgroup.
>> >
>> > Instead they get to spawn a thread A, with which they then need to
>> > communicate to spawn a further N tasks, then spawn a thread B, and again
>> > communicate for another M tasks.
>> >
>> > That's a rather awkward change to how people usually spawn threads.
>>
>> It is within the usual purview of how userland deals with hierarchies
>> of processes / threads and I don't think it's necessarily bad and more
>> importantly I don't think the use case or the perceived awkwardness
>> justifies introducing a wholely new mechanism.
>>
>> > Also, what to do when a thread changes profile? I can imagine a
>> > situation where a task accepts a connection and depending on the kind of
>> > request it gets, gets placed into a certain sub-group.
>>
>> Migration is a very expensive operation. The obvious thing to do for
>> such cases is having pools of workers for different profiles. Also,
>> as mentioned before, for more specific cases like IO, it makes a lot
>> more sense to override things per operation rather than moving threads
>> around.
>>
>> > But there's no migration facility, so you get to go hand the work
>> > around, which is expensive.
>>
>> That's a lot cheaper than migrating.
>>
>> > If there would be a migration facility, you've just lost naming, so how
>> > are you going to denote the subgroups?
>>
>> I don't think we want migration in sub-process hierarchy but in the
>> off chance we do the naming can follow the same pid/program
>> group/session id scheme, which, again, is a lot easier to deal with
>> from applications.
>
>
> I don't have many objections to hand-off versus migration above, however,
> I think that this is a big drawback. Threads are expensive to create and
> are often cached rather than released. While migration may be expensive,
> creating a new thread is more so. The ability to reconfigure a thread's
> personality at run-time is important.
>
>>
>> > > In the long term, this is *way* better than
>> > > letting individual applications fumble with cgroup hierarchy
>> > > delegation and pseudo filesystem access.
>> >
>> > You're worried about the intersection between what a task does and what
>> > the administrator does, and that's a valid worry. But I'm really not
>> > convinced this is going to make it better.
>> >
>> > We already have relative file ops (openat(), mkdirat(), unlinkat()
>> > etc..) can't we make sure they do the right thing in the face of a
>> > process (hierarchy) getting migrated by the administrator.
>>
>> But those are relative to the current directory per operation and
>> there's no way to define a transaction across multiple file
>> operations. There's no way to prevent a process from being migrated
>> inbetween openat() and subsequent write().
>
>
> A forwarding /proc/thread_self/cgroup accessor, or similar, would be another
> way to address some of these issues.
>
>>
>>
>> > That way, things at least _can_ work right, and I think being able to do
>> > the right thing trumps not being able to make a mess -- people are
>> > people, they'll always make a mess.
>>
>> It can't, at least not in the usual manner that file system operations
>> are defined. This is an interface which requires central coordination
>> (even for delegation) and a horrible one to expose to individual
>> applications.
>>
>> > > If hierarchical weight and/or bandwidth limiting for thread hierarchy
>> > > is absolutely necessary, doing this shouldn't be too difficult and I
>> > > suspect it wouldn't be all that different from autogroup.
>> >
>> > Autogroups are a bit icky and have the 'advantage' of not intersecting
>> > with regular cgroups (much). The above has intricate intersection with
>> > the cgroup stuff.
>> >
>> > As said, your migrate process becomes a move hierarchy. You further get
>> > more 'hidden' cgroups. /proc files that report what cgroup a task is in
>> > will report a cgroup that's not actually present in the filesystem
>> > (autogroups already does this, it confuses people). And as stated you
>> > take away a lot of things that are now possible.
>>
>> I don't think it's a lot that per-process is gonna take away.
>> Per-thread use cases are pretty niche to begin with and most can and
>> should be implemented better using a more fitting mechanism. As for
>> having to deal with more complexity in cgroup core, that's fine. If
>> it comes to that, we'll have to bite the bullet and do it. Sure, we
>> want to be simpler but not at the cost of messing up userland API and
>> please note that what we lost with cgroups is this tension.
>
>
> I don't quite agree here. Losing per-thread control within the cpu
> controller is likely going to mean that much of it ends up being
> reimplemented as some duplicate-in-appearance interface that gets us back to
> where we are today. I recognize that these controllers (cpu, cpuacct) are
> square pegs in that per-process makes sense for most other sub-systems; but
> unfortunately, their needs and use-cases are real / dependent on their
> present form.
>
>>
>> This tension between the difficulty and complexity of implementing
>> something which can be used by applications and the necessity or
>> desirability of the proposed use cases is crucial in steering kernel
>> development and the APIs it exposes. Abusing cgroups like we've been
>> doing bypasses that tension and we of course end up locked into an
>> extremely crappy interfaces and mechanisms which could never be
>> justified in the first place. This is about time we stopped this
>> disaster train.
>>
>> Thanks.
>>
>> --
>> tejun
>
>

2015-08-18 20:31:22

by Tejun Heo

[permalink] [raw]
Subject: Re: [PATCH 3/3] sched: Implement interface for cgroup unified hierarchy

Hello, Paul.

On Mon, Aug 17, 2015 at 09:03:30PM -0700, Paul Turner wrote:
> > 2) Control within an address-space. For subsystems with fungible resources,
> > e.g. CPU, it can be useful for an address space to partition its own
> > threads. Losing the capability to do this against the CPU controller would
> > be a large set-back for instance. Occasionally, it is useful to share these
> > groupings between address spaces when processes are cooperative, but this is
> > less of a requirement.
> >
> > This is important to us.

Sure, let's build a proper interface for that. Do you actually need
sub-hierarchy inside a process? Can you describe your use case in
detail and why having hierarchical CPU cycle distribution is essential
for your use case?

> >> And that's one of the major fuck ups on cgroup's part that must be
> >> rectified. Look at the interface being proposed there. It's exposing
> >> direct hardware details w/o much abstraction which is fine for a
> >> system management interface but at the same time it's intended to be
> >> exposed to individual applications.
> >
> > FWIW this is something we've had no significant problems managing with
> > separate mount points and file system protections. Yes, there are some
> > potential warts around atomicity; but we've not found them too onerous.

You guys control the whole stack. Of course, you can get away with an
interface which is pretty messed up in terms of layering and
isolation; however, a generic kernel interface cannot be designed
according to that standard.

> > What I don't quite follow here is the assumption that CAT would
> > necessarily be exposed to individual applications. What's wrong with
> > subsystems that are primarily intended only for system management
> > agents? We already have several of these.

Why would you assume that threads of a process wouldn't want to
configure it ever? How is this different from CPU affinity?

> >> This lack of distinction makes
> >> people skip the attention that they should be paying when they're
> >> designing interface exposed to individual programs. Worse, this makes
> >> these things fly under the review scrutiny that public API accessible
> >> to applications usually receives. Yet, that's what these things end
> >> up to be. This just has to stop. cgroups can't continue to be this
> >> ghetto shortcut to implementing half-assed APIs.
> >
> > I certainly don't disagree on this point :). But as above, I don't quite
> > follow why an API being in cgroups must mean it's accessible to an
> > application controlled by that group. This has certainly not been a
> > requirement for our use.

I don't follow what you're trying to say with the above paragraph.
Are you still talking about CAT? If so, that use case isn't the only
one. I'm pretty sure there are people who would want to configure
cache allocation at thread level.

> >> What we should be doing is pushing them into the same arena as any
> >> other publicly accessible API. I don't think there can be a shortcut
> >> to this.
> >
> > Are you explicitly opposed to non-hierarchical partitions, however? Cpuset
> > is [typically] an example of this, where the interface wants to control
> > unified properties across a set of processes. Without necessarily being
> > usefully hierarchical. (This is just to understand your core position, I'm
> > not proposing cpuset should shape *anything*.)

I'm having trouble following what you're trying to say. FWIW, cpuset
is fully hierarchical.

> >> I don't think we want migration in sub-process hierarchy but in the
> >> off chance we do the naming can follow the same pid/program
> >> group/session id scheme, which, again, is a lot easier to deal with
> >> from applications.
> >
> > I don't have many objections to hand-off versus migration above, however,
> > I think that this is a big drawback. Threads are expensive to create and
> > are often cached rather than released. While migration may be expensive,
> > creating a new thread is more so. The ability to reconfigure a thread's
> > personality at run-time is important.

The core problem here is picking the hot path. If cgroups as a whole
doesn't pick a position here, controllers have to assume that
migration might not be a very cold path which naturally leads to
overall designs and synchronization schemes which concede hot path
performance to accommodate migration. We simply can't afford to do
that - we end up losing way more in way hotter paths for something
which may be marginally useful in some corner cases.

So, this is a trade-off we're consciously making. If there are
common-enough use cases which require jumping across different cgroup
domains, we'll try to figure out a way to accommodate those but by
default migration is a very cold and expensive path.

> >> But those are relative to the current directory per operation and
> >> there's no way to define a transaction across multiple file
> >> operations. There's no way to prevent a process from being migrated
> >> inbetween openat() and subsequent write().
> >
> > A forwarding /proc/thread_self/cgroup accessor, or similar, would be another
> > way to address some of these issues.

That sounds horrible to me. What if the process wants to RMW a
config? What if the permissions are different after an intervening
migration? What if the sub-hierarchy no longer exists or has been
replaced by a hierarchy with the same topology but actually is a
different one?

> > I don't quite agree here. Losing per-thread control within the cpu
> > controller is likely going to mean that much of it ends up being
> > reimplemented as some duplicate-in-appearance interface that gets us back to
> > where we are today. I recognize that these controllers (cpu, cpuacct) are
> > square pegs in that per-process makes sense for most other sub-systems; but
> > unfortunately, their needs and use-cases are real / dependent on their
> > present form.

Let's build an API which actually looks and behaves like an API which
is properly isolated from what external agents may do to the process.
I can't see how that would be "back to where we are today". All of
those are pretty critical attributes for a public kernel API and
utterly broken right now.

Thanks.

--
tejun

2015-08-18 23:41:03

by Kamezawa Hiroyuki

[permalink] [raw]
Subject: Re: [PATCH 3/3] sched: Implement interface for cgroup unified hierarchy

On 2015/08/19 5:31, Tejun Heo wrote:
> Hello, Paul.
>
> On Mon, Aug 17, 2015 at 09:03:30PM -0700, Paul Turner wrote:
>>> 2) Control within an address-space. For subsystems with fungible resources,
>>> e.g. CPU, it can be useful for an address space to partition its own
>>> threads. Losing the capability to do this against the CPU controller would
>>> be a large set-back for instance. Occasionally, it is useful to share these
>>> groupings between address spaces when processes are cooperative, but this is
>>> less of a requirement.
>>>
>>> This is important to us.
>
> Sure, let's build a proper interface for that. Do you actually need
> sub-hierarchy inside a process? Can you describe your use case in
> detail and why having hierarchical CPU cycle distribution is essential
> for your use case?

An actual per-thread use case among our customers is qemu-kvm + cpuset.
Customers pin each vcpu and qemu-kvm's worker threads to cpus.
For example, pinning 4 vcpus to cpus 2-6 and pinning the qemu main thread
and others (vhost) to cpus 0-1.

This is actual kvm tuning done by our customers for performance guarantees.

In another case, the cpu cgroup's throttling feature is used per vcpu for VM cpu sizing.
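
Roughly how that per-thread placement is done today with the v1
interface (paths are illustrative):

#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>

/* Writing a tid into a v1 cpuset group's "tasks" file moves only the
 * calling thread; the admin sets cpuset.cpus on the group beforehand.
 * The per-vcpu throttling case is analogous, writing the group's
 * cpu.cfs_quota_us / cpu.cfs_period_us instead. */
static int pin_self_to(const char *cpuset_dir)
{
        char path[256];
        FILE *f;

        snprintf(path, sizeof(path), "%s/tasks", cpuset_dir);
        f = fopen(path, "w");
        if (!f)
                return -1;
        fprintf(f, "%ld\n", (long)syscall(SYS_gettid));
        return fclose(f);
}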

Thanks,
-Kame

2015-08-19 03:23:47

by Mike Galbraith

[permalink] [raw]
Subject: Re: [PATCH 3/3] sched: Implement interface for cgroup unified hierarchy

On Tue, 2015-08-18 at 13:31 -0700, Tejun Heo wrote:

> So, this is a trade-off we're consciously making. If there are
> common-enough use cases which require jumping across different cgroup
> domains, we'll try to figure out a way to accomodate those but by
> default migration is a very cold and expensive path.

Hm. I know of a big data outfit for which attach/detach performance was
important enough that they plucked an old experimental overhead
reduction hack (mine) off lkml and shipped it. It must have mattered a
LOT for them (not suicidal crash test dummies) to have done that.

-Mike

2015-08-19 16:23:28

by Tejun Heo

[permalink] [raw]
Subject: Re: [PATCH 3/3] sched: Implement interface for cgroup unified hierarchy

Hello, Kame.

On Wed, Aug 19, 2015 at 08:39:43AM +0900, Kamezawa Hiroyuki wrote:
> An actual per-thread use case among our customers is qemu-kvm + cpuset.
> Customers pin each vcpu and qemu-kvm's worker threads to cpus.
> For example, pinning 4 vcpus to cpus 2-6 and pinning the qemu main thread
> and others (vhost) to cpus 0-1.

taskset and/or teach qemu how to configure its worker threads?

> This is actual kvm tuning done by our customers for performance guarantees.
>
> In another case, the cpu cgroup's throttling feature is used per vcpu for VM cpu sizing.

Yeap, this is something we likely want to implement in an accessible
way. For kvm, per-thread throttling configuration is enough, right?

Thanks.

--
tejun

2015-08-19 16:41:18

by Tejun Heo

[permalink] [raw]
Subject: Re: [PATCH 3/3] sched: Implement interface for cgroup unified hierarchy

Hello, Mike.

On Wed, Aug 19, 2015 at 05:23:40AM +0200, Mike Galbraith wrote:
> Hm. I know of a big data outfit for which attach/detach performance was
> important enough that they plucked an old experimental overhead
> reduction hack (mine) off lkml and shipped it. It must have mattered a
> LOT for them (not suicidal crash test dummies) to have done that.

There haven't been any guidelines on cgroup usage. Of course people
have been developing in all directions. It's a natural learning
process and there are use cases which can be served by migrating
processes back and forth. Nobody is trying to prevent that; however,
if one examines how resources and their associations need to be
tracked for accounting and control, it's evident that there are
inherent trade-offs between migration and the stuff which happens
while not migrating and it's clear which side is more important.

Most problems can be solved in different ways and I'm doubtful that
e.g. bouncing jobs to worker threads would be more expensive than
migrating the worker back and forth in a lot of cases. If migrating
threads around floats somebody's boat, that's fine but that has never
been and can't be the focus of design and optimization, not at the
cost of the actual hot paths.

Thanks.

--
tejun

2015-08-20 04:01:07

by Mike Galbraith

[permalink] [raw]
Subject: Re: [PATCH 3/3] sched: Implement interface for cgroup unified hierarchy

On Wed, 2015-08-19 at 09:41 -0700, Tejun Heo wrote:

> Most problems can be solved in different ways and I'm doubtful that
> e.g. bouncing jobs to worker threads would be more expensive than
> migrating the worker back and forth in a lot of cases. If migrating
> threads around floats somebody's boat, that's fine but that has never
> been and can't be the focus of design and optimization, not at the
> cost of the actual hot paths.

If create/attach/detach/destroy aren't hot paths, what is? Those are
fork/exec/exit cgroup analogs. If you have thousands upon thousands of
potentially active cgroups (aka customers), you wouldn't want to keep
them all around just in case when you can launch cgroup tasks the same
way we launch any other task. You wouldn't contemplate slowing down
fork/exec/exit, but create/attach/detach/destroy are one and the same..
they need to be just as fast/light as they can be, as they are part and
parcel of the higher level process.

That's why my hack ended up in a large enterprise outfit's product; it
was _needed_ to fix up cgroups performance suckage. That suckage was
fixed up properly quite a bit later.

Anyway, if what they or anybody like them can currently do with their
job launcher/manager gizmos is negatively impacted, they can gripe for
themselves. All I'm saying is that there are definitely users out there
to whom create/attach/detach/destroy are highly important.

-Mike

2015-08-20 07:52:37

by Tejun Heo

[permalink] [raw]
Subject: Re: [PATCH 3/3] sched: Implement interface for cgroup unified hierarchy

Hey, Mike.

On Thu, Aug 20, 2015 at 06:00:59AM +0200, Mike Galbraith wrote:
> If create/attach/detach/destroy aren't hot paths, what is? Those are
> fork/exec/exit cgroup analogs. If you have thousands upon thousands of

Things like page faults? cgroup controllers hook into subsystems and
their hot path operations get affected by the method of cgroup
association.

Also, migration and create/destroy are completely different.
create/destroy don't need much synchronization - a new task is made
visible only after the initial association is set up and a dying
task's association is destroyed only after the task isn't referenced
by anybody. There's nothing dynamic about those compared to
migration.

> potentially active cgroups (aka customers), you wouldn't want to keep
> them all around just in case when you can launch cgroup tasks the same
> way we launch any other task. You wouldn't contemplate slowing down
> fork/exec/exit, but create/attach/detach/destroy are one and the same..
> they need to be just as fast/light as they can be, as they are part and
> parcel of the higher level process.

You're conflating two completely different operations. Also, when I
say migration is a relatively expensive operation, I'm comparing it to
bouncing a request to another thread as opposed to bouncing the
issuing thread to a different cgroup request-by-request.

> That's why my hack ended up in a large enterprise outfit's product, it
> was _needed_ to fix up cgroups performance suckage. That suckage was
> fixed up properly quite a bit later.

Hmm... I bet you're talking about the removal of synchronize_rcu() in
the migration path. Sure, that was a silly thing to have there, but also
that comparison is likely a couple of orders of magnitude off of what
the thread was originally talking about.

> Anyway, if what they or anybody like them can currently do with their
> job launcher/manager gizmos is negatively impacted, they can gripe for
> themselves. All I'm saying is that there are definitely users out there
> to whom create/attach/detach/destroy are highly important.

Hmmm... I think this discussion got pretty badly derailed at this
point. If I'm not mistaken, you're talking about tens or a few
hundred millisecs of latency per migration which no longer exists and
won't ever come back and the discussion originally was about something
like migrating a thread for issuing several IO requests versus bouncing
that to a dedicated issuer thread in that domain.

Thanks.

--
tejun

2015-08-20 08:47:35

by Mike Galbraith

[permalink] [raw]
Subject: Re: [PATCH 3/3] sched: Implement interface for cgroup unified hierarchy

On Thu, 2015-08-20 at 00:52 -0700, Tejun Heo wrote:

> Hmmm... I think this discussion got pretty badly derailed at this
> point. If I'm not mistaken, you're talking about tens or a few
> hundred millisecs of latency per migration which no longer exists and
> won't ever come back and the discussion originally was about something
> like migrating thread for issuing several IO requests versus bouncing
> that to a dedicated issuer thread in that domain.

Yes, ms latencies ever coming back is the concern, whether that be due
to something akin to the old synchronize_rcu() horror.. or some handoff
of whatever to whomever.

-Mike

2015-08-21 19:27:05

by Paul Turner

[permalink] [raw]
Subject: Re: [PATCH 3/3] sched: Implement interface for cgroup unified hierarchy

On Tue, Aug 18, 2015 at 1:31 PM, Tejun Heo <[email protected]> wrote:
> Hello, Paul.
>
> On Mon, Aug 17, 2015 at 09:03:30PM -0700, Paul Turner wrote:
>> > 2) Control within an address-space. For subsystems with fungible resources,
>> > e.g. CPU, it can be useful for an address space to partition its own
>> > threads. Losing the capability to do this against the CPU controller would
>> > be a large set-back for instance. Occasionally, it is useful to share these
>> > groupings between address spaces when processes are cooperative, but this is
>> > less of a requirement.
>> >
>> > This is important to us.
>
> Sure, let's build a proper interface for that. Do you actually need
> sub-hierarchy inside a process? Can you describe your use case in
> detail and why having hierarchical CPU cycle distribution is essential
> for your use case?
>

One common example here is a thread-pool. Having a hierarchical
constraint allows users to specify what proportion of time it should
receive, independent of how many threads are placed in the pool.

A very concrete example of the above is a virtual machine in which you
want to guarantee scheduling for the vCPU threads which must schedule
beside many hypervisor support threads. A hierarchy is the only way
to fix the ratio at which these compete.
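
For reference, a sketch of how that ratio is pinned today with the v1
cpu controller (group names are made up): whatever the thread counts,
the "vcpus" group as a whole gets eight times the weight of "support"
under contention.

#include <stdio.h>

static int set_shares(const char *group_dir, unsigned int shares)
{
        char path[256];
        FILE *f;

        snprintf(path, sizeof(path), "%s/cpu.shares", group_dir);
        f = fopen(path, "w");
        if (!f)
                return -1;
        fprintf(f, "%u\n", shares);
        return fclose(f);
}

/* e.g.  set_shares("/sys/fs/cgroup/cpu/vm0/vcpus", 8192);
 *       set_shares("/sys/fs/cgroup/cpu/vm0/support", 1024); */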

An example that's not the cpu controller is that we use cpusets to
expose to applications their "shared" and "private" cores. (These
sets are dynamic based on what is coscheduled on a given machine.)

>> >> And that's one of the major fuck ups on cgroup's part that must be
>> >> rectified. Look at the interface being proposed there. It's exposing
>> >> direct hardware details w/o much abstraction which is fine for a
>> >> system management interface but at the same time it's intended to be
>> >> exposed to individual applications.
>> >
>> > FWIW this is something we've had no significant problems managing with
>> > separate mount points and file system protections. Yes, there are some
>> > potential warts around atomicity; but we've not found them too onerous.
>
> You guys control the whole stack. Of course, you can get away with an
> interface which is pretty messed up in terms of layering and
> isolation; however, a generic kernel interface cannot be designed
> according to that standard.

I feel like two points are being conflated here:

Yes, it is sufficiently generic that it's possible to configure
nonsensical things.

But, it is also possible to lock things down presently. This is, for
better or worse, the direction that general user-space has also taken
with centralized management daemons such as systemd.

Setting design aside for a moment -- and I fully agree with you that
there is room for large improvement there -- the largest idiosyncrasy
today is that the configuration above does depend on having a stable
mount point for applications to manage their sub-hierarchies.
Migrations would improve this greatly, but this is a bit of a detour
because you're looking to fix the fundamental design rather than
improve the state of the world and that's probably a good thing :)

>
>> > What I don't quite follow here is the assumption that CAT would
>> > necessarily be exposed to individual applications. What's wrong with
>> > subsystems that are primarily intended only for system management
>> > agents? We already have several of these.
>
> Why would you assume that threads of a process wouldn't want to
> configure it ever? How is this different from CPU affinity?

In general cache and CPU behave differently. Generally for it to make
sense between threads in a process they would have to have wholly
disjoint memory, at which point the only sane long-term implementation
is separate processes and the management moves up a level anyway.

That said, there are surely cases in which it might be convenient to
use at a per-thread level to correct a specific performance anomaly.
But at that point, you have certainly reached the level of hammer that
you can coordinate with an external daemon if necessary.

>
>> >> This lack of distinction makes
>> >> people skip the attention that they should be paying when they're
>> >> designing interface exposed to individual programs. Worse, this makes
>> >> these things fly under the review scrutiny that public API accessible
>> >> to applications usually receives. Yet, that's what these things end
>> >> up to be. This just has to stop. cgroups can't continue to be this
>> >> ghetto shortcut to implementing half-assed APIs.
>> >
>> > I certainly don't disagree on this point :). But as above, I don't quite
>> > follow why an API being in cgroups must mean it's accessible to an
>> > application controlled by that group. This has certainly not been a
>> > requirement for our use.
>
> I don't follow what you're trying to say with the above paragraph.
> Are you still talking about CAT? If so, that use case isn't the only
> one. I'm pretty sure there are people who would want to configure
> cache allocation at thread level.

I'm not agreeing with you that "in cgroups" means "must be usable by
applications within that hierarchy". A cgroup subsystem used as a
partitioning API only by system management daemons is entirely
reasonable. CAT is a reasonable example of this.

>
>> >> What we should be doing is pushing them into the same arena as any
>> >> other publicly accessible API. I don't think there can be a shortcut
>> >> to this.
>> >
>> > Are you explicitly opposed to non-hierarchical partitions, however? Cpuset
>> > is [typically] an example of this, where the interface wants to control
>> > unified properties across a set of processes. Without necessarily being
>> > usefully hierarchical. (This is just to understand your core position, I'm
>> > not proposing cpuset should shape *anything*.)
>
> I'm having trouble following what you're trying to say. FWIW, cpuset
> is fully hierarchical.

I think where I was going with this is better addressed above. Here
all I meant is that it's difficult to construct useful sub-hierarchies
on the cpuset side, especially for memory. But this is a little
x86-centric so let's drop it.

>
>> >> I don't think we want migration in sub-process hierarchy but in the
>> >> off chance we do the naming can follow the same pid/program
>> >> group/session id scheme, which, again, is a lot easier to deal with
>> >> from applications.
>> >
>> > I don't have many objections to hand-off versus migration above, however,
>> > I think that this is a big drawback. Threads are expensive to create and
>> > are often cached rather than released. While migration may be expensive,
>> > creating a new thread is more so. The ability to reconfigure a thread's
>> > personality at run-time is important.
>
> The core problem here is picking the hot path. If cgroups as a whole
> doesn't pick a position here, controllers have to assume that
> migration might not be a very cold path which naturally leads to
> overall designs and synchronization schemes which concede hot path
> performance to accomodate migration. We simply can't afford to do
> that - we end up losing way more in way hotter paths for something
> which may be marginally useful in some corner cases.
>
> So, this is a trade-off we're consciously making. If there are
> common-enough use cases which require jumping across different cgroup
> domains, we'll try to figure out a way to accomodate those but by
> default migration is a very cold and expensive path.
>

The core here was the need for allowing sub-process migration. I'm
not sure I follow the performance trade-off argument; haven't we
historically seen the opposite? That migration has been a slow-path
without optimizations and people pushing to make it faster? This
seems a hard generalization to make for something that's inherently
tied to a particular controller.

I don't care if we try turning that dial back to assume it's a cold
path once more, only that it's supported.

>> >> But those are relative to the current directory per operation and
>> >> there's no way to define a transaction across multiple file
>> >> operations. There's no way to prevent a process from being migrated
>> >> inbetween openat() and subsequent write().
>> >
>> > A forwarding /proc/thread_self/cgroup accessor, or similar, would be another
>> > way to address some of these issues.
>
> That sounds horrible to me. What if the process wants to RMW a
> config?

Locking within a process is easy.

> What if the permissions are different after an intervening
> migration?

This is a side-effect of migration not being properly supported.

> What if the sub-hierarchy no longer exists or has been
> replaced by a hierarchy with the same topology but actually is a
> different one?

The easy answer is that only a process should be managing its
sub-hierarchy. That's the nice thing about hierarchies.

The harder answer is: How do we handle non-fungible resources such as
CPU assignments within a hierarchy? This is a big part of why I make
arguments for certain partitions being management-software only above.
This is imperfect, but better than where we stand today.

>
>> > I don't quite agree here. Losing per-thread control within the cpu
>> > controller is likely going to mean that much of it ends up being
>> > reimplemented as some duplicate-in-appearance interface that gets us back to
>> > where we are today. I recognize that these controllers (cpu, cpuacct) are
>> > square pegs in that per-process makes sense for most other sub-systems; but
>> > unfortunately, their needs and use-cases are real / dependent on their
>> > present form.
>
> Let's build an API which actually looks and behaves like an API which
> is properly isolated from what external agents may do to the process.
> I can't see how that would be "back to where we are today". All of
> those are pretty critical attributes for a public kernel API and
> utterly broken right now.
>

Sure, but I don't think you can throw out per-thread control for all
controllers to enable this. Which makes everything else harder. An
intermediary step in unification might be that we move from N mounts
to 2. Those that can be managed at the process level, and those that
can't. It's a compromise, but may allow cleaner abstractions for the
former case.

2015-08-22 18:29:54

by Tejun Heo

[permalink] [raw]
Subject: Re: [PATCH 3/3] sched: Implement interface for cgroup unified hierarchy

Hello, Paul.

On Fri, Aug 21, 2015 at 12:26:30PM -0700, Paul Turner wrote:
...
> A very concrete example of the above is a virtual machine in which you
> want to guarantee scheduling for the vCPU threads which must schedule
> beside many hypervisor support threads. A hierarchy is the only way
> to fix the ratio at which these compete.

Just to learn more, what sort of hypervisor support threads are we
talking about? They would have to consume a considerable amount of cpu
cycles for problems like this to be relevant and be dynamic in numbers
in a way in which letting them compete against vcpus makes sense. Do
IO helpers meet these criteria?

> An example that's not the cpu controller is that we use cpusets to
> expose to applications their "shared" and "private" cores. (These
> sets are dynamic based on what is coscheduled on a given machine.)

Can you please go into more details with these?

> > Why would you assume that threads of a process wouldn't want to
> > configure it ever? How is this different from CPU affinity?
>
> In general cache and CPU behave differently. Generally for it to make
> sense between threads in a process they would have to have wholly
> disjoint memory, at which point the only sane long-term implementation
> is separate processes and the management moves up a level anyway.
>
> That said, there are surely cases in which it might be convenient to
> use at a per-thread level to correct a specific performance anomaly.
> But at that point, you have certainly reached the level of hammer that
> you can coordinate with an external daemon if necessary.

So, I'm not super familiar with all the use cases but the whole cache
allocation thing is almost by nature a specific niche thing and I feel
pretty reluctant to blow off per-thread usages as too niche to worry
about.

> > I don't follow what you're trying to say with the above paragraph.
> > Are you still talking about CAT? If so, that use case isn't the only
> > one. I'm pretty sure there are people who would want to configure
> > cache allocation at thread level.
>
> I'm not agreeing with you that "in cgroups" means "must be usable by
> applications within that hierarchy". A cgroup subsystem used as a
> partitioning API only by system management daemons is entirely
> reasonable. CAT is a reasonable example of this.

I see. The same argument. I don't think CAT just being a system
management thing makes sense.

> > So, this is a trade-off we're consciously making. If there are
> > common-enough use cases which require jumping across different cgroup
> > domains, we'll try to figure out a way to accommodate those but by
> > default migration is a very cold and expensive path.
>
> The core here was the need for allowing sub-process migration. I'm
> not sure I follow the performance trade-off argument; haven't we
> historically seen the opposite? That migration has been a slow-path
> without optimizations and people pushing to make it faster? This
> seems a hard generalization to make for something that's inherently
> tied to a particular controller.

It isn't something tied to a particular controller. Some controllers
may get impacted less by it than others but there's an inherent
connection between how dynamic an association is and how expensive the
locking around it needs to be and we need to set up basic behavior and
usage conventions so that different controllers are designed and
implemented assuming similar usage patterns; otherwise, we end up with
the chaotic shit show that we have had where everything behaves
differently and nobody knows what's the right way to do things and we
end up locked into weird requirements which some controller induced
for no good reason but cause significant pain on use cases which
actually matter.

> I don't care if we try turning that dial back to assume it's a cold
> path once more, only that it's supported.

It has always been a cold path and I'm not saying this is gonna be
noticeably worse in the future but usages like bouncing threads on a
request-by-request basis are and will be clearly worse than bouncing
to threads which are already in the target domain.

> >> > A forwarding /proc/thread_self/cgroup accessor, or similar, would be another
> >> > way to address some of these issues.
> >
> > That sounds horrible to me. What if the process wants to do RMW a
> > config?
>
> Locking within a process is easy.

It's not contained in the process at all. What if an external entity
decides to migrate the process into another cgroup inbetween?

> > What if the permissions are different after an intervening
> > migration?
>
> This is a side-effect of migration not being properly supported.
>
> > What if the sub-hierarchy no longer exists or has been
> > replaced by a hierarchy with the same topology but actually is a
> > different one?
>
> The easy answer is that: Only a process should be managing its
> sub-hierarchy. That's the nice thing about hierarchies.

cgroupfs is a horrible place to implement that part of the interface. It
doesn't make any sense to combine those two into the same hierarchy.
You're agreeing to the identified problem but somehow still suggesting
doing what we've been doing when the root cause of the said problem is
conflating and interlocking these two separate things.

> The harder answer is: How do we handle non-fungible resources such as
> CPU assignments within a hierarchy? This is a big part of why I make
> arguments for certain partitions being management-software only above.
> This is imperfect, but better than where we stand today.

I'm not following. Why is that different?

> > Let's build an API which actually looks and behaves like an API which
> > is properly isolated from what external agents may do to the process.
> > I can't see how that would be "back to where we are today". All of
> > those are pretty critical attributes for a public kernel API and
> > utterly broken right now.
>
> Sure, but I don't think you can throw out per-thread control for all
> controllers to enable this. Which makes everything else harder. An
> intermediary step in unification might be that we move from N mounts
> to 2. Those that can be managed at the process level, and those that
> can't. It's a compromise, but may allow cleaner abstractions for the
> former case.

The transition can already be gradual. Why would you add yet another
transition step?

Thanks.

--
tejun

2015-08-24 15:47:10

by Austin S Hemmelgarn

[permalink] [raw]
Subject: Re: [PATCH 3/3] sched: Implement interface for cgroup unified hierarchy

On 2015-08-22 14:29, Tejun Heo wrote:
> Hello, Paul.
>
> On Fri, Aug 21, 2015 at 12:26:30PM -0700, Paul Turner wrote:
> ...
>> A very concrete example of the above is a virtual machine in which you
>> want to guarantee scheduling for the vCPU threads which must schedule
>> beside many hypervisor support threads. A hierarchy is the only way
>> to fix the ratio at which these compete.
>
> Just to learn more, what sort of hypervisor support threads are we
> talking about? They would have to consume considerable amount of cpu
> cycles for problems like this to be relevant and be dynamic in numbers
> in a way which letting them competing against vcpus makes sense. Do
> IO helpers meet these criteria?
>
Depending on the configuration, yes they can. VirtualBox has some
rather CPU intensive threads that aren't vCPU threads (their emulated
APIC thread immediately comes to mind), and so does QEMU depending on
the emulated hardware configuration (it gets more noticeable when the
disk images are stored on a SAN and served through iSCSI, NBD, FCoE, or
ATAoE, which is pretty typical usage for large virtualization
deployments). I've seen cases first hand where the vCPU's can make no
reasonable progress because they are constantly getting crowded out by
other threads.

The use of the term 'hypervisor support threads' for this is probably
not the best way of describing the contention, as it's almost always a
full system virtualization issue, and the contending threads are usually
storage back-end access threads.

I would argue that there are better ways to deal properly with this
(Isolate the non vCPU threads on separate physical CPU's from the
hardware emulation threads), but such methods require large systems to
be practical at any scale, and many people don't have the budget for
such large systems, and this way of doing things is much more flexible
for small scale use cases (for example, someone running one or two VM's
on a laptop under QEMU or VirtualBox).



2015-08-24 17:04:34

by Tejun Heo

[permalink] [raw]
Subject: Re: [PATCH 3/3] sched: Implement interface for cgroup unified hierarchy

Hello, Austin.

On Mon, Aug 24, 2015 at 11:47:02AM -0400, Austin S Hemmelgarn wrote:
> >Just to learn more, what sort of hypervisor support threads are we
> >talking about? They would have to consume considerable amount of cpu
> >cycles for problems like this to be relevant and be dynamic in numbers
> >in a way which letting them competing against vcpus makes sense. Do
> >IO helpers meet these criteria?
> >
> Depending on the configuration, yes they can. VirtualBox has some rather
> CPU intensive threads that aren't vCPU threads (their emulated APIC thread
> immediately comes to mind), and so does QEMU depending on the emulated

And the number of those threads fluctuates widely and dynamically?

> hardware configuration (it gets more noticeable when the disk images are
> stored on a SAN and served through iSCSI, NBD, FCoE, or ATAoE, which is
> pretty typical usage for large virtualization deployments). I've seen cases
> first hand where the vCPU's can make no reasonable progress because they are
> constantly getting crowded out by other threads.

That alone doesn't require hierarchical resource distribution tho.
Setting nice levels reasonably is likely to alleviate most of the
problem.

> The use of the term 'hypervisor support threads' for this is probably not
> the best way of describing the contention, as it's almost always a full
> system virtualization issue, and the contending threads are usually storage
> back-end access threads.
>
> I would argue that there are better ways to deal properly with this (Isolate
> the non vCPU threads on separate physical CPU's from the hardware emulation
> threads), but such methods require large systems to be practical at any
> scale, and many people don't have the budget for such large systems, and
> this way of doing things is much more flexible for small scale use cases
> (for example, someone running one or two VM's on a laptop under QEMU or
> VirtualBox).

I don't know. "Someone running one or two VM's on a laptop under
QEMU" doesn't really sound like the use case which absolutely requires
hierarchical cpu cycle distribution.

Thanks.

--
tejun

2015-08-24 19:18:27

by Mike Galbraith

[permalink] [raw]
Subject: Re: [PATCH 3/3] sched: Implement interface for cgroup unified hierarchy

On Mon, 2015-08-24 at 13:04 -0400, Tejun Heo wrote:
> Hello, Austin.
>
> On Mon, Aug 24, 2015 at 11:47:02AM -0400, Austin S Hemmelgarn wrote:
> > >Just to learn more, what sort of hypervisor support threads are we
> > >talking about? They would have to consume considerable amount of cpu
> > >cycles for problems like this to be relevant and be dynamic in numbers
> > >in a way which letting them competing against vcpus makes sense. Do
> > >IO helpers meet these criteria?
> > >
> > Depending on the configuration, yes they can. VirtualBox has some rather
> > CPU intensive threads that aren't vCPU threads (their emulated APIC thread
> > immediately comes to mind), and so does QEMU depending on the emulated
>
> And the number of those threads fluctuate widely and dynamically?
>
> > hardware configuration (it gets more noticeable when the disk images are
> > stored on a SAN and served through iSCSI, NBD, FCoE, or ATAoE, which is
> > pretty typical usage for large virtualization deployments). I've seen cases
> > first hand where the vCPU's can make no reasonable progress because they are
> > constantly getting crowded out by other threads.

Hm. Serious CPU starvation would seem to require quite a few hungry
threads, but even a few IO threads with kick butt hardware under them
could easily tilt fairness heavily in favor of vCPUs generating IO.

> That alone doesn't require hierarchical resource distribution tho.
> Setting nice levels reasonably is likely to alleviate most of the
> problem.

Unless the CPU controller is in use.

-Mike

2015-08-24 20:01:00

by Austin S Hemmelgarn

[permalink] [raw]
Subject: Re: [PATCH 3/3] sched: Implement interface for cgroup unified hierarchy

On 2015-08-24 13:04, Tejun Heo wrote:
> Hello, Austin.
>
> On Mon, Aug 24, 2015 at 11:47:02AM -0400, Austin S Hemmelgarn wrote:
>>> Just to learn more, what sort of hypervisor support threads are we
>>> talking about? They would have to consume considerable amount of cpu
>>> cycles for problems like this to be relevant and be dynamic in numbers
>>> in a way which letting them competing against vcpus makes sense. Do
>>> IO helpers meet these criteria?
>>>
>> Depending on the configuration, yes they can. VirtualBox has some rather
>> CPU intensive threads that aren't vCPU threads (their emulated APIC thread
>> immediately comes to mind), and so does QEMU depending on the emulated
>
> And the number of those threads fluctuate widely and dynamically?
It depends; usually there isn't dynamic fluctuation unless there is a
lot of hot[un]plugging of virtual devices going on (which can be the
case for situations with tight host/guest integration), but the number
of threads can vary widely between configurations (most of the VM's I
run under QEMU have about 16 threads on average, but I've seen instances
with more than 100 threads). The most likely case to cause wide and
dynamic fluctuations of threads would be systems set up to dynamically
hot[un]plug vCPU's based on system load (such systems have other issues
to contend with also, but they do exist).
>> hardware configuration (it gets more noticeable when the disk images are
>> stored on a SAN and served through iSCSI, NBD, FCoE, or ATAoE, which is
>> pretty typical usage for large virtualization deployments). I've seen cases
>> first hand where the vCPU's can make no reasonable progress because they are
>> constantly getting crowded out by other threads.
>
> That alone doesn't require hierarchical resource distribution tho.
> Setting nice levels reasonably is likely to alleviate most of the
> problem.
In the cases I've dealt with this myself, nice levels didn't cut it, and
I had to resort to SCHED_RR with particular care to avoid priority
inversions.
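
A minimal sketch of that setup, assuming the vCPU thread IDs are known
to the management code; the priority value is illustrative, and SCHED_RR
needs CAP_SYS_NICE or a suitable RLIMIT_RTPRIO:

#include <sched.h>
#include <stdio.h>
#include <sys/types.h>

/* Give a vCPU thread (identified by TID) a round-robin real-time slice
 * so emulated-device threads cannot crowd it out.  Priority 10 is an
 * arbitrary illustrative value. */
static int make_vcpu_rr(pid_t tid)
{
        struct sched_param sp = { .sched_priority = 10 };

        if (sched_setscheduler(tid, SCHED_RR, &sp) < 0) {
                perror("sched_setscheduler");
                return -1;
        }
        return 0;
}
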
>> The use of the term 'hypervisor support threads' for this is probably not
>> the best way of describing the contention, as it's almost always a full
>> system virtualization issue, and the contending threads are usually storage
>> back-end access threads.
>>
>> I would argue that there are better ways to deal properly with this (Isolate
>> the non vCPU threads on separate physical CPU's from the hardware emulation
>> threads), but such methods require large systems to be practical at any
>> scale, and many people don't have the budget for such large systems, and
>> this way of doing things is much more flexible for small scale use cases
>> (for example, someone running one or two VM's on a laptop under QEMU or
>> VirtualBox).
>
> I don't know. "Someone running one or two VM's on a laptop under
> QEMU" doesn't really sound like the use case which absolutely requires
> hierarchical cpu cycle distribution.
It depends on the use case. I never have more than 2 VM's running on my
laptop (always under QEMU, setting up Xen is kind of pointless on a quad
core system with only 8G of RAM), and I take extensive advantage of the
cpu cgroup to partition resources among various services on the host.




2015-08-24 20:25:15

by Tejun Heo

[permalink] [raw]
Subject: Re: [PATCH 3/3] sched: Implement interface for cgroup unified hierarchy

Hello, Austin.

On Mon, Aug 24, 2015 at 04:00:49PM -0400, Austin S Hemmelgarn wrote:
> >That alone doesn't require hierarchical resource distribution tho.
> >Setting nice levels reasonably is likely to alleviate most of the
> >problem.
>
> In the cases I've dealt with this myself, nice levels didn't cut it, and I
> had to resort to SCHED_RR with particular care to avoid priority inversions.

I wonder why. The difference between -20 and 20 is around 2500x in
terms of weight. That should have been enough for expressing whatever
precedence the vcpus should have over other threads.

> >I don't know. "Someone running one or two VM's on a laptop under
> >QEMU" doesn't really sound like the use case which absolutely requires
> >hierarchical cpu cycle distribution.
>
> It depends on the use case. I never have more than 2 VM's running on my
> laptop (always under QEMU, setting up Xen is kind of pointless on a quad core
> system with only 8G of RAM), and I take extensive advantage of the cpu
> cgroup to partition resources among various services on the host.

Hmmm... I'm trying to understand the usecases where having a hierarchy
inside a process is actually required so that we don't end up doing
something complex unnecessarily. So far, it looks like an easy
alternative for qemu would be teaching it to manage priorities of its
threads given that the threads are mostly static - vcpus going up and
down are explicit operations which can trigger priority adjustments if
necessary, which is unlikely to begin with.
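
As a minimal sketch of that alternative (the nice value is illustrative,
and lowering nice below 0 needs CAP_SYS_NICE or a matching RLIMIT_NICE):

#include <stdio.h>
#include <sys/resource.h>
#include <sys/syscall.h>
#include <sys/types.h>
#include <unistd.h>

/* Called from a freshly created vcpu thread: drop its nice value so it
 * takes precedence over the helper threads.  -10 is illustrative only. */
static void boost_vcpu_thread(void)
{
        pid_t tid = syscall(SYS_gettid);

        if (setpriority(PRIO_PROCESS, tid, -10) < 0)
                perror("setpriority");
}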

Thanks.

--
tejun

2015-08-24 20:52:35

by Paul Turner

[permalink] [raw]
Subject: Re: [PATCH 3/3] sched: Implement interface for cgroup unified hierarchy

On Sat, Aug 22, 2015 at 11:29 AM, Tejun Heo <[email protected]> wrote:
> Hello, Paul.
>
> On Fri, Aug 21, 2015 at 12:26:30PM -0700, Paul Turner wrote:
> ...
>> A very concrete example of the above is a virtual machine in which you
>> want to guarantee scheduling for the vCPU threads which must schedule
>> beside many hypervisor support threads. A hierarchy is the only way
>> to fix the ratio at which these compete.
>
> Just to learn more, what sort of hypervisor support threads are we
> talking about? They would have to consume considerable amount of cpu
> cycles for problems like this to be relevant and be dynamic in numbers
> in a way which letting them competing against vcpus makes sense. Do
> IO helpers meet these criteria?

I'm not sure what you mean by an IO helper. By support threads I mean
any threads that are used in the hypervisor implementation that are
not hosting a vCPU.

>
>> An example that's not the cpu controller is that we use cpusets to
>> expose to applications their "shared" and "private" cores. (These
>> sets are dynamic based on what is coscheduled on a given machine.)
>
> Can you please go into more details with these?

We typically share our machines between many jobs; these jobs can have
cores that are "private" (and not shared with other jobs) and cores
that are "shared" (general purpose cores accessible to all jobs on the
same machine).

The pool of cpus in the "shared" pool is dynamic as jobs entering and
leaving the machine take or release their associated "private" cores.

By creating the appropriate sub-containers within the cpuset group we
allow jobs to pin specific threads to run on their (typically) private
cores. This also allows the management daemons additional flexibility
as it's possible to update which cores we place as private, without
synchronization with the application. Note that sched_setaffinity()
is a non-starter here.
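
A minimal sketch of that layout, assuming a v1 cpuset hierarchy mounted
at /sys/fs/cgroup/cpuset and a pre-created job1/private sub-container;
the job name, core range and file paths are illustrative only:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Write a value into a cgroupfs file. */
static int cg_write(const char *path, const char *val)
{
        int fd = open(path, O_WRONLY);
        ssize_t ret;

        if (fd < 0) {
                perror(path);
                return -1;
        }
        ret = write(fd, val, strlen(val));
        close(fd);
        return ret < 0 ? -1 : 0;
}

int main(void)
{
        char tid[16];

        /* Management daemon: cores 4-7 are this job's private cores for
         * now; the daemon can rewrite cpuset.cpus later without any
         * cooperation from the job. */
        cg_write("/sys/fs/cgroup/cpuset/job1/private/cpuset.mems", "0");
        cg_write("/sys/fs/cgroup/cpuset/job1/private/cpuset.cpus", "4-7");

        /* Application thread: pin itself to the "private" sub-container
         * by thread id instead of calling sched_setaffinity(). */
        snprintf(tid, sizeof(tid), "%ld", syscall(SYS_gettid));
        return cg_write("/sys/fs/cgroup/cpuset/job1/private/tasks", tid);
}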

>
>> > Why would you assume that threads of a process wouldn't want to
>> > configure it ever? How is this different from CPU affinity?
>>
>> In general cache and CPU behave differently. Generally for it to make
>> sense between threads in a process they would have to have wholly
>> disjoint memory, at which point the only sane long-term implementation
>> is separate processes and the management moves up a level anyway.
>>
>> That said, there are surely cases in which it might be convenient to
>> use at a per-thread level to correct a specific performance anomaly.
>> But at that point, you have certainly reached the level of hammer that
>> you can coordinate with an external daemon if necessary.
>
> So, I'm not super familiar with all the use cases but the whole cache
> allocation thing is almost by nature a specific niche thing and I feel
> pretty reluctant to blow off per-thread usages as too niche to worry
> about.

Let me try to restate:
I think that we can specify that the usage is sufficiently niche that it
will *typically* be used by higher level management daemons which
prefer a more technical and specific interface. This does not
preclude use by threads, it just makes it less convenient; I think
that we should be optimizing for flexibility over ease-of-use for a
very small number of cases here.

>
>> > I don't follow what you're trying to say with the above paragraph.
>> > Are you still talking about CAT? If so, that use case isn't the only
>> > one. I'm pretty sure there are people who would want to configure
>> > cache allocation at thread level.
>>
>> I'm not agreeing with you that "in cgroups" means "must be usable by
>> applications within that hierarchy". A cgroup subsystem used as a
>> partitioning API only by system management daemons is entirely
>> reasonable. CAT is a reasonable example of this.
>
> I see. The same argument. I don't think CAT just being system
> management thing makes sense.
>
>> > So, this is a trade-off we're consciously making. If there are
>> > common-enough use cases which require jumping across different cgroup
>> > domains, we'll try to figure out a way to accommodate those but by
>> > default migration is a very cold and expensive path.
>>
>> The core here was the need for allowing sub-process migration. I'm
>> not sure I follow the performance trade-off argument; haven't we
>> historically seen the opposite? That migration has been a slow-path
>> without optimizations and people pushing to make it faster? This
>> seems a hard generalization to make for something that's inherently
>> tied to a particular controller.
>
> It isn't something tied to a particular controller. Some controllers
> may get impacted less by it than others but there's an inherent
> connection between how dynamic an association is and how expensive the
> locking around it needs to be and we need to set up basic behavior and
> usage conventions so that different controllers are designed and
> implemented assuming similar usage patterns; otherwise, we end up with
> the chaotic shit show that we have had where everything behaves
> differently and nobody knows what's the right way to do things and we
> end up locked into weird requirements which some controller induced
> for no good reason but cause significant pain on use cases which
> actually matter.
>
>> I don't care if we try turning that dial back to assume it's a cold
>> path once more, only that it's supported.
>
> It has always been a cold path and I'm not saying this is gonna be
> noticeably worse in the future but usages like bouncing threads on
> request-by-request basis are and will be clearly worse than bouncing
> to threads which are already in the target domain.
>
>> >> > A forwarding /proc/thread_self/cgroup accessor, or similar, would be another
>> >> > way to address some of these issues.
>> >
>> > That sounds horrible to me. What if the process wants to do RMW a
>> > config?
>>
>> Locking within a process is easy.
>
> It's not contained in the process at all. What if an external entity
> decides to migrate the process into another cgroup inbetween?
>

If we have 'atomic' moves and a way to access our sub-containers from
the process in a consistent fashion (e.g. relative paths) then this is
not an issue.
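
As a sketch of what "relative paths" could mean in practice, assuming
some stable handle to the process's own sub-hierarchy existed; the
/proc path and the cpu.weight file below are purely hypothetical:

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Hypothetical: suppose the kernel exposed a stable, non-movable view
 * of the calling process's own sub-hierarchy (the path below does not
 * exist today).  Holding a directory fd and using openat() keeps later
 * accesses relative to that handle instead of an absolute cgroupfs path
 * that an external manager might change underneath us. */
int main(void)
{
        int dirfd = open("/proc/self/cgroup_root", O_DIRECTORY | O_RDONLY);
        int fd;

        if (dirfd < 0) {
                perror("open sub-hierarchy handle");
                return 1;
        }
        fd = openat(dirfd, "workers/cpu.weight", O_WRONLY);
        if (fd >= 0) {
                write(fd, "50", 2);
                close(fd);
        }
        close(dirfd);
        return 0;
}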

>> > What if the permissions are different after an intervening
>> > migration?
>>
>> This is a side-effect of migration not being properly supported.
>>
>> > What if the sub-hierarchy no longer exists or has been
>> > replaced by a hierarchy with the same topology but actually is a
>> > different one?
>>
>> The easy answer is that: Only a process should be managing its
>> sub-hierarchy. That's the nice thing about hierarchies.
>
> cgroupfs is a horrible place to implement that part of interface. It
> doesn't make any sense to combine those two into the same hierarchy.
> You're agreeing to the identified problem but somehow still suggesting
> doing what we've been doing when the root cause of the said problem is
> conflating and interlocking these two separate things.

I am not endorsing the world we are in today, only describing how it
can be somewhat sanely managed. Some of these lessons could be
formalized in imagining the world of tomorrow. E.g. the sub-process
mounts could appear within some (non-movable) alternate file-system
path.

>
>> The harder answer is: How do we handle non-fungible resources such as
>> CPU assignments within a hierarchy? This is a big part of why I make
>> arguments for certain partitions being management-software only above.
> >> This is imperfect, but better than where we stand today.
>
> I'm not following. Why is that different?

This is generally any time a change in the external-to-application's
cgroup-parent requires changes in the sub-hierarchy. This is most
visible with a resource such as a cpu which is uniquely identified,
but similarly applies to any limits.

>
>> > Let's build an API which actually looks and behaves like an API which
>> > is properly isolated from what external agents may do to the process.
>> > I can't see how that would be "back to where we are today". All of
>> > those are pretty critical attributes for a public kernel API and
>> > utterly broken right now.
>>
>> Sure, but I don't think you can throw out per-thread control for all
>> controllers to enable this. Which makes everything else harder. An
>> intermediary step in unification might be that we move from N mounts
>> to 2. Those that can be managed at the process level, and those that
>> can't. It's a compromise, but may allow cleaner abstractions for the
>> former case.
>
> The transition can already be gradual. Why would you add yet another
> transition step?

Because what's being proposed today does not offer any replacement for
the sub-process control that we depend on today? Why would we embark
on merging the new interface before these details are sufficiently
resolved?

>
> Thanks.
>
> --
> tejun

2015-08-24 20:54:41

by Paul Turner

[permalink] [raw]
Subject: Re: [PATCH 3/3] sched: Implement interface for cgroup unified hierarchy

On Mon, Aug 24, 2015 at 10:04 AM, Tejun Heo <[email protected]> wrote:
> Hello, Austin.
>
> On Mon, Aug 24, 2015 at 11:47:02AM -0400, Austin S Hemmelgarn wrote:
>> >Just to learn more, what sort of hypervisor support threads are we
>> >talking about? They would have to consume considerable amount of cpu
>> >cycles for problems like this to be relevant and be dynamic in numbers
>> >in a way which letting them competing against vcpus makes sense. Do
>> >IO helpers meet these criteria?
>> >
>> Depending on the configuration, yes they can. VirtualBox has some rather
>> CPU intensive threads that aren't vCPU threads (their emulated APIC thread
>> immediately comes to mind), and so does QEMU depending on the emulated
>
> And the number of those threads fluctuate widely and dynamically?
>
>> hardware configuration (it gets more noticeable when the disk images are
>> stored on a SAN and served through iSCSI, NBD, FCoE, or ATAoE, which is
>> pretty typical usage for large virtualization deployments). I've seen cases
>> first hand where the vCPU's can make no reasonable progress because they are
>> constantly getting crowded out by other threads.
>
> That alone doesn't require hierarchical resource distribution tho.
> Setting nice levels reasonably is likely to alleviate most of the
> problem.

Nice is not sufficient here. There could be arbitrarily many threads
within the hypervisor that are not actually hosting guest CPU threads.
The only way to have this competition occur at a reasonably fixed
ratio is a sub-hierarchy.

>
>> The use of the term 'hypervisor support threads' for this is probably not
>> the best way of describing the contention, as it's almost always a full
>> system virtualization issue, and the contending threads are usually storage
>> back-end access threads.
>>
>> I would argue that there are better ways to deal properly with this (Isolate
>> the non vCPU threads on separate physical CPU's from the hardware emulation
>> threads), but such methods require large systems to be practical at any
>> scale, and many people don't have the budget for such large systems, and
>> this way of doing things is much more flexible for small scale use cases
>> (for example, someone running one or two VM's on a laptop under QEMU or
>> VirtualBox).
>
> I don't know. "Someone running one or two VM's on a laptop under
> QEMU" doesn't really sound like the use case which absolutely requires
> hierarchical cpu cycle distribution.
>

We run more than 'one or two' VMs using this configuration. :)

2015-08-24 21:01:30

by Paul Turner

[permalink] [raw]
Subject: Re: [PATCH 3/3] sched: Implement interface for cgroup unified hierarchy

On Mon, Aug 24, 2015 at 1:25 PM, Tejun Heo <[email protected]> wrote:
> Hello, Austin.
>
> On Mon, Aug 24, 2015 at 04:00:49PM -0400, Austin S Hemmelgarn wrote:
>> >That alone doesn't require hierarchical resource distribution tho.
>> >Setting nice levels reasonably is likely to alleviate most of the
>> >problem.
>>
>> In the cases I've dealt with this myself, nice levels didn't cut it, and I
>> had to resort to SCHED_RR with particular care to avoid priority inversions.
>
> I wonder why. The difference between -20 and 20 is around 2500x in
> terms of weight. That should have been enough for expressing whatever
> precedence the vcpus should have over other threads.

This strongly perturbs the load-balancer which performs busiest cpu
selection by weight.

Note that also we do not necessarily want total dominance by vCPU
threads, the hypervisor threads are almost always doing work on their
behalf and we want to provision them with _some_ time. A
sub-hierarchy allows this to be performed in a way that is independent
of how many vCPUs or support threads are present.

>
>> >I don't know. "Someone running one or two VM's on a laptop under
>> >QEMU" doesn't really sound like the use case which absolutely requires
>> >hierarchical cpu cycle distribution.
>>
>> It depends on the use case. I never have more than 2 VM's running on my
>> laptop (always under QEMU, setting up Xen is kind of pointless on a quad core
>> system with only 8G of RAM), and I take extensive advantage of the cpu
>> cgroup to partition resources among various services on the host.
>
> Hmmm... I'm trying to understand the usecases where having hierarchy
> inside a process are actually required so that we don't end up doing
> something complex unnecessarily. So far, it looks like an easy
> alternative for qemu would be teaching it to manage priorities of its
> threads given that the threads are mostly static - vcpus going up and
> down are explicit operations which can trigger priority adjustments if
> necessary, which is unlikely to begin with.

What you're proposing is both unnecessarily complex and imprecise.
Arbitrating competition between groups of threads is exactly why we
support sub-hierarchies within cpu.

>
> Thanks.
>
> --
> tejun

2015-08-24 21:02:28

by Tejun Heo

[permalink] [raw]
Subject: Re: [PATCH 3/3] sched: Implement interface for cgroup unified hierarchy

Hello,

On Mon, Aug 24, 2015 at 01:54:08PM -0700, Paul Turner wrote:
> > That alone doesn't require hierarchical resource distribution tho.
> > Setting nice levels reasonably is likely to alleviate most of the
> > problem.
>
> Nice is not sufficient here. There could be arbitrarily many threads
> within the hypervisor that are not actually hosting guest CPU threads.
> The only way to have this competition occur at a reasonably fixed
> ratio is a sub-hierarchy.

I get that having a hierarchy of threads would be nicer but am having a
bit of difficulty seeing why adjusting priorities of threads wouldn't
be sufficient. It's not like threads of the same process competing
with each other is a new problem. People have been dealing with it
for ages. Hierarchical management can be a nice plus but we want the
problem and proposed solution to be justifiable.

Thanks.

--
tejun

2015-08-24 21:10:50

by Paul Turner

[permalink] [raw]
Subject: Re: [PATCH 3/3] sched: Implement interface for cgroup unified hierarchy

On Mon, Aug 24, 2015 at 2:02 PM, Tejun Heo <[email protected]> wrote:
> Hello,
>
> On Mon, Aug 24, 2015 at 01:54:08PM -0700, Paul Turner wrote:
>> > That alone doesn't require hierarchical resource distribution tho.
>> > Setting nice levels reasonably is likely to alleviate most of the
>> > problem.
>>
>> Nice is not sufficient here. There could be arbitrarily many threads
>> within the hypervisor that are not actually hosting guest CPU threads.
>> The only way to have this competition occur at a reasonably fixed
>> ratio is a sub-hierarchy.
>
> I get that having hierarchy of threads would be nicer but am having a
> bit of difficulty seeing why adjusting priorities of threads wouldn't
> be sufficient. It's not like threads of the same process competing
> with each other is a new problem. People have been dealing with it
> for ages. Hierarchical management can be a nice plus but we want the
> problem and proposed solution to be justifiable.

Consider what happens with load asymmetry:

Suppose that we have 10 vcpu threads and 100 support threads.
Suppose that we want the support threads to receive up to 10% of the
time available to the VM as a whole on that machine.

If I have one particular support thread that is busy, I want it to
receive that entire 10% (maybe a guest is pounding on scsi for
example, or in the thread-pool case, I've passed a single expensive
computation). Conversely, suppose the guest is doing lots of
different things and several support threads are active, I want the
time to be shared between them.

There is no way to implement this with nice. Either a single thread
can consume 10%, and the group can dominate, or the group cannot
dominate and the single thread can be starved.
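
A minimal sketch of the sub-hierarchy this describes, assuming a v1 cpu
controller mounted at /sys/fs/cgroup/cpu; group names and share values
are illustrative:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>

/* Create a cpu cgroup and set its relative weight.  With vcpus:support
 * shares at 9216:1024, the support group as a whole is held to ~10% of
 * the VM's time when both sides are busy, while a single busy support
 * thread may still consume that entire 10%, which is the behavior nice
 * cannot express. */
static void set_shares(const char *grp, const char *shares)
{
        char path[256];
        int fd;

        mkdir(grp, 0755);
        snprintf(path, sizeof(path), "%s/cpu.shares", grp);
        fd = open(path, O_WRONLY);
        if (fd < 0) {
                perror(path);
                return;
        }
        write(fd, shares, strlen(shares));
        close(fd);
}

int main(void)
{
        set_shares("/sys/fs/cgroup/cpu/vm1", "1024");
        set_shares("/sys/fs/cgroup/cpu/vm1/vcpus", "9216");
        set_shares("/sys/fs/cgroup/cpu/vm1/support", "1024");
        return 0;
}

Threads would then be attached by writing their TIDs into the vcpus and
support tasks files.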

>
> Thanks.
>
> --
> tejun

2015-08-24 21:12:43

by Tejun Heo

[permalink] [raw]
Subject: Re: [PATCH 3/3] sched: Implement interface for cgroup unified hierarchy

Hello, Paul.

On Mon, Aug 24, 2015 at 02:00:54PM -0700, Paul Turner wrote:
> > Hmmm... I'm trying to understand the usecases where having hierarchy
> > inside a process are actually required so that we don't end up doing
> > something complex unnecessarily. So far, it looks like an easy
> > alternative for qemu would be teaching it to manage priorities of its
> > threads given that the threads are mostly static - vcpus going up and
> > down are explicit operations which can trigger priority adjustments if
> > necessary, which is unlikely to begin with.
>
> What you're proposing is both unnecessarily complex and imprecise.
> Arbitrating competition between groups of threads is exactly why we
> support sub-hierarchies within cpu.

Sure, and to make that behave half-way acceptably, we'll have to take
on a significant amount of effort and likely complexity, and I'm trying
to see whether the usecases are actually justifiable. I get that a
priority based solution will be less precise and more complex on the
application side, but by how much, and is the added precision enough
to justify the extra facilities to support it? If it is, sure,
let's get to it, but it'd be great if the concrete problem cases are
properly identified and understood. I'll continue in the other reply.

Thanks.

--
tejun

2015-08-24 21:16:10

by Paul Turner

[permalink] [raw]
Subject: Re: [PATCH 3/3] sched: Implement interface for cgroup unified hierarchy

On Mon, Aug 24, 2015 at 2:12 PM, Tejun Heo <[email protected]> wrote:
> Hello, Paul.
>
> On Mon, Aug 24, 2015 at 02:00:54PM -0700, Paul Turner wrote:
>> > Hmmm... I'm trying to understand the usecases where having hierarchy
>> > inside a process are actually required so that we don't end up doing
>> > something complex unnecessarily. So far, it looks like an easy
>> > alternative for qemu would be teaching it to manage priorities of its
>> > threads given that the threads are mostly static - vcpus going up and
>> > down are explicit operations which can trigger priority adjustments if
>> > necessary, which is unlikely to begin with.
>>
>> What you're proposing is both unnecessarily complex and imprecise.
>> Arbitrating competition between groups of threads is exactly why we
>> support sub-hierarchies within cpu.
>
> Sure, and to make that behave half-way acceptable, we'll have to take
> on significant amount of effort and likely complexity and I'm trying
> to see whether the usecases are actually justifiable. I get that
> priority based solution will be less precise and more complex on the
> application side but by how much and does the added precision enough
> to justify the extra facilities to support that? If it is, sure,
> let's get to it but it'd be great if the concrete problem cases are
> properly identified and understood. I'll continue on the other reply.
>

No problem, I think the conversation is absolutely
constructive/important to have and am happy to help drill down.

> Thanks.
>
> --
> tejun

2015-08-24 21:17:12

by Tejun Heo

[permalink] [raw]
Subject: Re: [PATCH 3/3] sched: Implement interface for cgroup unified hierarchy

Hello,

On Mon, Aug 24, 2015 at 02:10:17PM -0700, Paul Turner wrote:
> Suppose that we have 10 vcpu threads and 100 support threads.
> Suppose that we want the support threads to receive up to 10% of the
> time available to the VM as a whole on that machine.
>
> If I have one particular support thread that is busy, I want it to
> receive that entire 10% (maybe a guest is pounding on scsi for
> example, or in the thread-pool case, I've passed a single expensive
> computation). Conversely, suppose the guest is doing lots of
> different things and several support threads are active, I want the
> time to be shared between them.
>
> There is no way to implement this with nice. Either a single thread
> can consume 10%, and the group can dominate, or the group cannot
> dominate and the single thread can be starved.

Would it be possible for you to give realistic and concrete examples?
I'm not trying to play down the use cases but concrete examples are
usually helpful at putting things in perspective.

Thanks.

--
tejun

2015-08-24 21:20:04

by Paul Turner

[permalink] [raw]
Subject: Re: [PATCH 3/3] sched: Implement interface for cgroup unified hierarchy

On Mon, Aug 24, 2015 at 2:17 PM, Tejun Heo <[email protected]> wrote:
> Hello,
>
> On Mon, Aug 24, 2015 at 02:10:17PM -0700, Paul Turner wrote:
>> Suppose that we have 10 vcpu threads and 100 support threads.
>> Suppose that we want the support threads to receive up to 10% of the
>> time available to the VM as a whole on that machine.
>>
>> If I have one particular support thread that is busy, I want it to
>> receive that entire 10% (maybe a guest is pounding on scsi for
>> example, or in the thread-pool case, I've passed a single expensive
>> computation). Conversely, suppose the guest is doing lots of
>> different things and several support threads are active, I want the
>> time to be shared between them.
>>
>> There is no way to implement this with nice. Either a single thread
>> can consume 10%, and the group can dominate, or the group cannot
>> dominate and the single thread can be starved.
>
> Would it be possible for you to give realistic and concrete examples?
> I'm not trying to play down the use cases but concrete examples are
> usually helpful at putting things in perspective.

I don't think there's anything that's not realistic or concrete about
the example above. The "suppose" parts were only for qualifying the
pool sizes for vcpu and non-vcpu threads above since discussion of
implementation using nice is dependent on knowing these counts.


>
> Thanks.
>
> --
> tejun

2015-08-24 21:36:06

by Tejun Heo

[permalink] [raw]
Subject: Re: [PATCH 3/3] sched: Implement interface for cgroup unified hierarchy

Hello, Paul.

On Mon, Aug 24, 2015 at 01:52:01PM -0700, Paul Turner wrote:
> We typically share our machines between many jobs, these jobs can have
> cores that are "private" (and not shared with other jobs) and cores
> that are "shared" (general purpose cores accessible to all jobs on the
> same machine).
>
> The pool of cpus in the "shared" pool is dynamic as jobs entering and
> leaving the machine take or release their associated "private" cores.
>
> By creating the appropriate sub-containers within the cpuset group we
> allow jobs to pin specific threads to run on their (typically) private
> cores. This also allows the management daemons additional flexibility
> as it's possible to update which cores we place as private, without
> synchronization with the application. Note that sched_setaffinity()
> is a non-starter here.

Why isn't it? Because the programs themselves might try to override
it?

> Let me try to restate:
> I think that we can specify the usage is specifically niche that it
> will *typically* be used by higher level management daemons which

I really don't think that's the case.

> prefer a more technical and specific interface. This does not
> preclude use by threads, it just makes it less convenient; I think
> that we should be optimizing for flexibility over ease-of-use for a
> very small number of cases here.

It's more like there are two niche sets of use cases. If a
programmable interface or cgroups has to be picked as an exclusive
alternative, it's pretty clear that programmable interface is the way
to go.

> > It's not contained in the process at all. What if an external entity
> > decides to migrate the process into another cgroup inbetween?
> >
>
> If we have 'atomic' moves and a way to access our sub-containers from
> the process in a consistent fashion (e.g. relative paths) then this is
> not an issue.

But it gets so twisted. Relative paths aren't enough. It actually
has to proxy accesses to already open files. At that point, why would
we even keep it as a file-system based interface?

> I am not endorsing the world we are in today, only describing how it
> can be somewhat sanely managed. Some of these lessons could be
> formalized in imagining the world of tomorrow. E.g. the sub-process
> mounts could appear within some (non-movable) alternate file-system
> path.

Ditto. Wouldn't it be better to implement something which resembles a
conventional programming interface rather than contorting the
filesystem semantics?

> >> The harder answer is: How do we handle non-fungible resources such as
> >> CPU assignments within a hierarchy? This is a big part of why I make
> >> arguments for certain partitions being management-software only above.
> >> This is imperfect, but better than where we stand today.
> >
> > I'm not following. Why is that different?
>
> This is generally any time a change in the external-to-application's
> cgroup-parent requires changes in the sub-hierarchy. This is most
> visible with a resource such as a cpu which is uniquely identified,
> but similarly applies to any limits.

So, except for cpuset, this doesn't matter for controllers. All
limits are hierarchical and that's it. For cpuset, it's tricky
because a nested cgroup might end up with no intersecting execution
resource. The kernel can't have threads which don't have any
execution resources and the solution has been to assume the resources
from higher-ups until there are some. Application control has always
behaved the same way. If the configured affinity becomes empty, the
scheduler ignored it.

> > The transition can already be gradual. Why would you add yet another
> > transition step?
>
> Because what's being proposed today does not offer any replacement for
> the sub-process control that we depend on today? Why would we embark
> on merging the new interface before these details are sufficiently
> resolved?

Because the details on this particular issue can be hashed out in the
future? There's nothing permanently blocking any direction that we
might choose in the future and what's working today will keep working.
Why block the whole thing, which can be useful for the majority of use
cases, for this particular corner case?

Thanks.

--
tejun

2015-08-24 21:40:06

by Tejun Heo

[permalink] [raw]
Subject: Re: [PATCH 3/3] sched: Implement interface for cgroup unified hierarchy

On Mon, Aug 24, 2015 at 02:19:29PM -0700, Paul Turner wrote:
> > Would it be possible for you to give realistic and concrete examples?
> > I'm not trying to play down the use cases but concrete examples are
> > usually helpful at putting things in perspective.
>
> I don't think there's anything that's not realistic or concrete about
> the example above. The "suppose" parts were only for qualifying the
> pool sizes for vcpu and non-vcpu threads above since discussion of
> implementation using nice is dependent on knowing these counts.

Hmm... I was hoping for actual configurations and usage scenarios.
Preferably something people can set up and play with. I take that the
CPU intensive helper threads are usually IO workers? Is the scenario
where the VM is set up with a lot of IO devices and different ones may
consume large amounts of CPU cycles at any given point?

Thanks.

--
tejun

2015-08-24 21:58:56

by Paul Turner

[permalink] [raw]
Subject: Re: [PATCH 3/3] sched: Implement interface for cgroup unified hierarchy

On Mon, Aug 24, 2015 at 2:36 PM, Tejun Heo <[email protected]> wrote:
> Hello, Paul.
>
> On Mon, Aug 24, 2015 at 01:52:01PM -0700, Paul Turner wrote:
>> We typically share our machines between many jobs, these jobs can have
>> cores that are "private" (and not shared with other jobs) and cores
>> that are "shared" (general purpose cores accessible to all jobs on the
>> same machine).
>>
>> The pool of cpus in the "shared" pool is dynamic as jobs entering and
>> leaving the machine take or release their associated "private" cores.
>>
>> By creating the appropriate sub-containers within the cpuset group we
>> allow jobs to pin specific threads to run on their (typically) private
>> cores. This also allows the management daemons additional flexibility
>> as it's possible to update which cores we place as private, without
>> synchronization with the application. Note that sched_setaffinity()
>> is a non-starter here.
>
> Why isn't it? Because the programs themselves might try to override
> it?

The major reasons are:

1) Isolation. Doing everything with sched_setaffinity means that
programs can use arbitrary resources if they desire.
1a) These restrictions need to also apply to threads created by
library code. Which may be 3rd party.
2) Interaction between cpusets and sched_setaffinity. For necessary
reasons, a cpuset update always overwrites all extant
sched_setaffinity values (see the sketch after this list). ...And we
need some cpusets for (1). ...And we need periodic updates for access
to shared cores.
3) Virtualization of CPU ids. (Multiple applications all binding to
core 1 is a bad thing.)
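
A minimal sketch illustrating (2): run it inside some cpuset, then have
another shell write a new mask into that cpuset's cpuset.cpus and watch
the printed count jump back up even though the thread asked for a single
CPU (the CPU number is illustrative):

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
        cpu_set_t set;

        /* Restrict this thread to CPU 1 via the per-thread interface. */
        CPU_ZERO(&set);
        CPU_SET(1, &set);
        sched_setaffinity(0, sizeof(set), &set);

        /* An external write to the enclosing cpuset's cpuset.cpus resets
         * the task's allowed mask to the full cpuset, discarding the
         * restriction made above. */
        for (;;) {
                sched_getaffinity(0, sizeof(set), &set);
                printf("allowed cpus: %d\n", CPU_COUNT(&set));
                sleep(1);
        }
}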

>
>> Let me try to restate:
>> I think that we can specify the usage is specifically niche that it
>> will *typically* be used by higher level management daemons which
>
> I really don't think that's the case.
>

Can you provide examples of non-exceptional usage in this fashion?

>> prefer a more technical and specific interface. This does not
>> preclude use by threads, it just makes it less convenient; I think
>> that we should be optimizing for flexibility over ease-of-use for a
>> very small number of cases here.
>
> It's more like there are two niche sets of use cases. If a
> programmable interface or cgroups has to be picked as an exclusive
> alternative, it's pretty clear that programmable interface is the way
> to go.

I strongly disagree here:
The *major obvious use* is partitioning of a system, which must act
on groups of processes. Cgroups is the only interface we have which
satisfies this today.

>
>> > It's not contained in the process at all. What if an external entity
>> > decides to migrate the process into another cgroup inbetween?
>> >
>>
>> If we have 'atomic' moves and a way to access our sub-containers from
>> the process in a consistent fashion (e.g. relative paths) then this is
>> not an issue.
>
> But it gets so twisted. Relative paths aren't enough. It actually
> has to proxy accesses to already open files. At that point, why would
> we even keep it as a file-system based interface?

Well no, this can just be reversed and we can have the relative paths
be the actual files which the hierarchy points back at.

Ultimately, they could potentially not even be exposed in the regular
hierarchy. At this point we could avoid exposing, within a process's
hierarchy, anything that does not support sub-process splits, and we're
at a more reasonable state of affairs.

There is real value in being able to duplicate interface between
process and sub-process level control.

>
>> I am not endorsing the world we are in today, only describing how it
>> can be somewhat sanely managed. Some of these lessons could be
>> formalized in imagining the world of tomorrow. E.g. the sub-process
>> mounts could appear within some (non-movable) alternate file-system
>> path.
>
> Ditto. Wouldn't it be better to implement something which resembles a
> conventional programming interface rather than contorting the
> filesystem semantics?
>

Maybe? This is a trade-off, some of which is built on the assumptions
we're now debating.

There is also value, cost-wise, in iterative improvement of what we
have today rather than trying to nuke it from orbit. I do not know
which of these is the right choice, it likely depends strongly on
where we end up for sub-process interfaces. If we do support those
I'm not sure it makes sense for them to have an entirely different API
from process-level coordination, at which point the file-system
overload is a trade-off rather than a cost.

>> >> The harder answer is: How do we handle non-fungible resources such as
>> >> CPU assignments within a hierarchy? This is a big part of why I make
>> >> arguments for certain partitions being management-software only above.
>> >> This is imperfect, but better than where we stand today.
>> >
>> > I'm not following. Why is that different?
>>
>> This is generally any time a change in the external-to-application's
>> cgroup-parent requires changes in the sub-hierarchy. This is most
>> visible with a resource such as a cpu which is uniquely identified,
>> but similarly applies to any limits.
>
> So, except for cpuset, this doesn't matter for controllers. All
> limits are hierarchical and that's it.

Well no, it still matters because I might want to lower the limit
below what children have set.

> For cpuset, it's tricky
> because a nested cgroup might end up with no intersecting execution
> resource. The kernel can't have threads which don't have any
> execution resources and the solution has been assuming the resources
> from higher-ups till there's some. Application control has always
> behaved the same way. If the configured affinity becomes empty, the
> scheduler ignored it.

Actually no, any configuration change that would result in this state
is rejected.

It's not possible to configure an empty cpuset once tasks are in it,
or attach tasks to an empty set.
It's also not possible to create this state using setaffinity; these
restrictions are always overridden by updates, even if they do not
need to be.

>
>> > The transition can already be gradual. Why would you add yet another
>> > transition step?
>>
>> Because what's being proposed today does not offer any replacement for
>> the sub-process control that we depend on today? Why would we embark
>> on merging the new interface before these details are sufficiently
>> resolved?
>
> Because the details on this particular issue can be hashed out in the
> future? There's nothing permanently blocking any direction that we
> might choose in the future and what's working today will keep working.
> Why block the whole thing which can be useful for the majority of use
> cases for this particular corner case?
>

Because I do not think sub-process hierarchies are the corner case
that you're making them out to be for these controllers and that has
real implications for the ultimate direction of this interface.

Also. If we are making disruptive changes here, I would want to
discuss merging cpu, cpuset, and cpuacct. What this merge looks like
depends on the above.

2015-08-24 22:03:38

by Paul Turner

[permalink] [raw]
Subject: Re: [PATCH 3/3] sched: Implement interface for cgroup unified hierarchy

On Mon, Aug 24, 2015 at 2:40 PM, Tejun Heo <[email protected]> wrote:
> On Mon, Aug 24, 2015 at 02:19:29PM -0700, Paul Turner wrote:
>> > Would it be possible for you to give realistic and concrete examples?
>> > I'm not trying to play down the use cases but concrete examples are
>> > usually helpful at putting things in perspective.
>>
>> I don't think there's anything that's not realistic or concrete about
>> the example above. The "suppose" parts were only for qualifying the
>> pool sizes for vcpu and non-vcpu threads above since discussion of
>> implementation using nice is dependent on knowing these counts.
>
> Hmm... I was hoping for an actual configurations and usage scenarios.
> Preferably something people can set up and play with.

This is much easier to set up and play with synthetically. Just
create the 10 threads and 100 threads above, then experiment with
configurations designed to guarantee the set of 100 threads
relatively uniform throughput regardless of how many are active. I
don't think trying to run a VM stack adds anything except complexity
of reproduction here.
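
A sketch of that synthetic setup, assuming the cpu groups (for example
the vcpus/support split sketched earlier in the thread) already exist;
paths and thread counts are whatever you want to experiment with:

#define _GNU_SOURCE
#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Usage: ./spinners <cgroup tasks file> <nthreads>
 * e.g.   ./spinners /sys/fs/cgroup/cpu/vm1/support/tasks 100 */
static void *spin(void *tasks_path)
{
        char tid[16];
        int fd = open((const char *)tasks_path, O_WRONLY);

        snprintf(tid, sizeof(tid), "%ld", syscall(SYS_gettid));
        if (fd >= 0) {
                write(fd, tid, strlen(tid));
                close(fd);
        }
        for (;;)
                ;       /* burn cpu so the group weights are observable */
        return NULL;
}

int main(int argc, char **argv)
{
        pthread_t t;
        int i;

        if (argc < 3)
                return 1;
        for (i = 0; i < atoi(argv[2]); i++)
                pthread_create(&t, NULL, spin, argv[1]);
        pause();
        return 0;
}

Run it once per group (built with -pthread), then compare per-group CPU
time, e.g. via cpuacct.usage, while varying how many of the 100 are
actually spinning.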

> I take that the
> CPU intensive helper threads are usually IO workers? Is the scenario
> where the VM is set up with a lot of IO devices and different ones may
> consume large amount of CPU cycles at any given point?

Yes, generally speaking there are a few major classes of IO (flash,
disk, network) that a guest may invoke. Each of these backends is
separate and chooses its own threading.

2015-08-24 22:19:41

by Tejun Heo

[permalink] [raw]
Subject: Re: [PATCH 3/3] sched: Implement interface for cgroup unified hierarchy

Hey,

On Mon, Aug 24, 2015 at 02:58:23PM -0700, Paul Turner wrote:
> > Why isn't it? Because the programs themselves might try to override
> > it?
>
> The major reasons are:
>
> 1) Isolation. Doing everything with sched_setaffinity means that
> programs can use arbitrary resources if they desire.
> 1a) These restrictions need to also apply to threads created by
> library code. Which may be 3rd party.
> 2) Interaction between cpusets and sched_setaffinity. For necessary
> reasons, a cpuset update always overwrites all extant
> sched_setaffinity values. ...And we need some cpusets for (1)....And
> we need periodic updates for access to shared cores.

This is an erratic behavior on cpuset's part tho. Nothing else
behaves this way and it's borderline buggy.

> 3) Virtualization of CPU ids. (Multiple applications all binding to
> core 1 is a bad thing.)

This is about who's setting the affinity, right? As long as an agent
which knows system details sets it, which mechanism doesn't really
matter.

> >> Let me try to restate:
> >> I think that we can specify the usage is specifically niche that it
> >> will *typically* be used by higher level management daemons which
> >
> > I really don't think that's the case.
> >
>
> Can you provide examples of non-exceptional usage in this fashion?

I heard of two use cases. One is the system-partitioning that you're
talking about and the other is preventing threads of the same process
from stepping on each other's toes. There was a fancy word for the
cacheline cannibalizing behavior which shows up in those scenarios.

> > It's more like there are two niche sets of use cases. If a
> > programmable interface or cgroups has to be picked as an exclusive
> > alternative, it's pretty clear that programmable interface is the way
> > to go.
>
> I strongly disagree here:
> The *major obvious use* is partitioning of a system, which must act

I don't know. Why is that more major obvious use? This is super
duper fringe to begin with. It's like tallying up beans. Sure, some
may be taller than others but they're all still beans and I'm not even
sure there's a big difference between the two use cases here.

> on groups of processes. Cgroups is the only interface we have which
> satisfies this today.

Well, not really. cgroups is more convenient / better at these things
but not the only way to do it. People have been doing isolation to
varying degrees with other mechanisms for ages.

> > Ditto. Wouldn't it be better to implement something which resembles
> > conventional programming interface rather than contorting the
> > filesystem semantics?
>
> Maybe? This is a trade-off, some of which is built on the assumptions
> we're now debating.
>
> There is also value, cost-wise, in iterative improvement of what we
> have today rather than trying to nuke it from orbit. I do not know
> which of these is the right choice, it likely depends strongly on
> where we end up for sub-process interfaces. If we do support those
> I'm not sure it makes sense for them to have an entirely different API
> from process-level coordination, at which point the file-system
> overload is a trade-off rather than a cost.

Yeah, I understand the similarity part but don't buy that the benefit
there is big enough to introduce a kernel API which is expected to be
used by individual programs which is radically different from how
processes / threads are organized and applications interact with the
kernel. These are a lot more grave issues and if we end up paying
some complexity from kernel side internally, so be it.

> > So, except for cpuset, this doesn't matter for controllers. All
> > limits are hierarchical and that's it.
>
> Well no, it still matters because I might want to lower the limit
> below what children have set.

All controllers only get what their ancestors can hand down to them.
That's basic hierarchical behavior.

> > For cpuset, it's tricky
> > because a nested cgroup might end up with no intersecting execution
> > resource. The kernel can't have threads which don't have any
> > execution resources and the solution has been assuming the resources
> > from higher-ups till there's some. Application control has always
> > behaved the same way. If the configured affinity becomes empty, the
> > scheduler ignored it.
>
> Actually no, any configuration change that would result in this state
> is rejected.
>
> It's not possible to configure an empty cpuset once tasks are in it,
> or attach tasks to an empty set.
> It's also not possible to create this state using setaffinity, these
> restrictions are always over-ridden by updates, even if they do not
> need to be.

So, even in traditional hierarchies, this isn't true. You can get to
no-resource config through cpu hot-unplug and cpuset currently ejects
tasks to the closest ancestor with execution resources.

> > Because the details on this particular issue can be hashed out in the
> > future? There's nothing permanently blocking any direction that we
> > might choose in the future and what's working today will keep working.
> > Why block the whole thing which can be useful for the majority of use
> > cases for this particular corner case?
>
> Because I do not think sub-process hierarchies are the corner case
> that you're making them out to be for these controllers and that has
> real implications for the ultimate direction of this interface.

If that's the case and we fail miserably at creating a reasonable
programming interface for that, we can always revive thread
granularity. This is mostly a policy decision after all.

> Also. If we are making disruptive changes here, I would want to
> discuss merging cpu, cpuset, and cpuacct. What this merge looks like
> depends on the above.

So, the proposed patches already merge cpu and cpuacct, at least in
appearance. Given the kitchen-sink nature of cpuset, I don't think it
makes sense to fuse it with cpu.

Thanks.

--
tejun

2015-08-24 22:49:59

by Tejun Heo

[permalink] [raw]
Subject: Re: [PATCH 3/3] sched: Implement interface for cgroup unified hierarchy

Hello,

On Mon, Aug 24, 2015 at 03:03:05PM -0700, Paul Turner wrote:
> > Hmm... I was hoping for actual configurations and usage scenarios.
> > Preferably something people can set up and play with.
>
> This is much easier to set up and play with synthetically. Just
> create the 10 threads and 100 threads above, then experiment with
> configurations designed to guarantee the set of 100 threads
> relatively uniform throughput regardless of how many are active. I
> don't think trying to run a VM stack adds anything except complexity
> of reproduction here.

Well, but that loses most of the details and why such use cases matter to
begin with. We can imagine up stuff to induce an arbitrary set of
requirements.

> > I take that the
> > CPU intensive helper threads are usually IO workers? Is the scenario
> > where the VM is set up with a lot of IO devices and different ones may
> > consume large amount of CPU cycles at any given point?
>
> Yes, generally speaking there are a few major classes of IO (flash,
> disk, network) that a guest may invoke. Each of these backends is
> separate and chooses its own threading.

Hmmm... if that's the case, would limiting iops on those IO devices
(or classes of them) work? qemu already implements IO limit mechanism
after all.

Anyways, a point here is that threads of the same process competing
isn't a new problem. There are many ways to make those threads play
nice as the application itself often has to be involved anyway,
especially for something like qemu which is heavily involved in
provisioning resources.

cgroups can be a nice brute-force add-on which lets sysadmins do wild
things but it's inherently hacky and incomplete for coordinating
threads. For example, what is it gonna do if qemu cloned vcpus and IO
helpers dynamically off of the same parent thread? It requires the
application's cooperation anyway but at the same time is painful for
those applications to actually interact with.

Thanks.

--
tejun

2015-08-24 23:07:15

by Paul Turner

[permalink] [raw]
Subject: Re: [PATCH 3/3] sched: Implement interface for cgroup unified hierarchy

On Mon, Aug 24, 2015 at 3:19 PM, Tejun Heo <[email protected]> wrote:
> Hey,
>
> On Mon, Aug 24, 2015 at 02:58:23PM -0700, Paul Turner wrote:
>> > Why isn't it? Because the programs themselves might try to override
>> > it?
>>
>> The major reasons are:
>>
>> 1) Isolation. Doing everything with sched_setaffinity means that
>> programs can use arbitrary resources if they desire.
>> 1a) These restrictions need to also apply to threads created by
>> library code. Which may be 3rd party.
>> 2) Interaction between cpusets and sched_setaffinity. For necessary
>> reasons, a cpuset update always overwrites all extant
>> sched_setaffinity values. ...And we need some cpusets for (1)....And
>> we need periodic updates for access to shared cores.
>
> This is an erratic behavior on cpuset's part tho. Nothing else
> behaves this way and it's borderline buggy.
>

It's actually the only sane possible interaction here.

If you don't overwrite the masks you can no longer manage cpusets with
a multi-threaded application.
If you partially overwrite the masks you can create a host of
inconsistent behaviors where an application suddenly loses
parallelism.

The *only* consistent way is to clobber all masks uniformly. Then
either arrange for some notification to the application to re-sync, or
use sub-sub-containers within the cpuset hierarchy to advertise
finer-partitions.

(Generally speaking, there is no real way to mate these APIs, which is
part of the reason we use sub-containers here. What's being proposed
will make this worse rather than better.)
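
To make the clobbering concrete, the per-thread side today ends up doing
something like the sketch below; the re-sync policy is application specific
and purely illustrative:

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

/* A worker pins itself to the CPU it was assigned, then keeps re-checking
 * its mask because a cpuset.cpus write on the containing cgroup overwrites
 * every thread's affinity, as described above. */
static void *worker(void *arg)
{
        int want_cpu = *(int *)arg;
        cpu_set_t set;

        CPU_ZERO(&set);
        CPU_SET(want_cpu, &set);
        pthread_setaffinity_np(pthread_self(), sizeof(set), &set);

        for (;;) {
                sleep(1);
                pthread_getaffinity_np(pthread_self(), sizeof(set), &set);
                if (!CPU_ISSET(want_cpu, &set))
                        fprintf(stderr, "affinity clobbered, re-syncing\n");
                /* ... re-apply or renegotiate with the manager here ... */
        }
        return NULL;
}

int main(void)
{
        pthread_t tid;
        int cpu = 1;

        pthread_create(&tid, NULL, worker, &cpu);
        pthread_join(tid, NULL);
        return 0;
}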

>> 3) Virtualization of CPU ids. (Multiple applications all binding to
>> core 1 is a bad thing.)
>
> This is about who's setting the affinity, right? As long as an agent
> which knows system details sets it, which mechanism doesn't really
> matter.

Yes, there are other ways to implement this.

>
>> >> Let me try to restate:
>> >> I think that we can specify the usage is specifically niche that it
>> >> will *typically* be used by higher level management daemons which
>> >
>> > I really don't think that's the case.
>> >
>>
>> Can you provide examples of non-exceptional usage in this fashion?
>
> I heard of two use cases. One is the system-partitioning that you're
> talking about and the other is preventing threads of the same process
> from stepping on each other's toes. There was a fancy word for the
> cacheline cannibalizing behavior which shows up in those scenarios.

So this is a single example, right, since the system-partitioning case
is the one in which it's exclusively used by a higher-level management
daemon.

The case of a process with specifically identified threads in conflict
certainly seems exceptional, given the level of optimization present in
both implementation and analysis. I would expect in this case that
either they are comfortable with the more technical API, or they can
coordinate with an external controller, which is much less overloaded,
both by number of callers and by number of interfaces, than it is in
the cpuset case.

>
>> > It's more like there are two niche sets of use cases. If a
>> > programmable interface or cgroups has to be picked as an exclusive
>> > alternative, it's pretty clear that programmable interface is the way
>> > to go.
>>
>> I strongly disagree here:
>> The *major obvious use* is partitioning of a system, which must act
>
> I don't know. Why is that more major obvious use? This is super
> duper fringe to begin with. It's like tallying up beans. Sure, some
> may be taller than others but they're all still beans and I'm not even
> sure there's a big difference between the two use cases here.

I don't think the case of having a large compute farm with
"unimportant" and "important" work is particularly fringe. Reducing
the impact on the "important" work so that we can scavenge more cycles
for the latency insensitive "unimportant" is very real.

>
>> on groups of processes. Cgroups is the only interface we have which
>> satisfies this today.
>
> Well, not really. cgroups is more convenient / better at these things
> but not the only way to do it. People have been doing isolation to
> varying degrees with other mechanisms for ages.
>

Right, but it's exactly because of _how bad_ those other mechanisms
_are_ that cgroups was originally created. Its growth was not
managed well from there, but let's not step away from the fact that
this interface was created to solve this problem.

>> > Ditto. Wouldn't it be better to implement something which resembles
>> > conventional programming interface rather than contorting the
>> > filesystem semantics?
>>
>> Maybe? This is a trade-off, some of which is built on the assumptions
>> we're now debating.
>>
>> There is also value, cost-wise, in iterative improvement of what we
>> have today rather than trying to nuke it from orbit. I do not know
>> which of these is the right choice, it likely depends strongly on
>> where we end up for sub-process interfaces. If we do support those
>> I'm not sure it makes sense for them to have an entirely different API
>> from process-level coordination, at which point the file-system
>> overload is a trade-off rather than a cost.
>
> Yeah, I understand the similarity part but don't buy that the benefit
> there is big enough to introduce a kernel API which is expected to be
> used by individual programs which is radically different from how
> processes / threads are organized and applications interact with the
> kernel.

Sorry, I don't quite follow, in what way is it radically different?
What is magically different about a process versus a thread in this
sub-division?

> These are a lot more grave issues and if we end up paying
> some complexity from kernel side internally, so be it.
>
>> > So, except for cpuset, this doesn't matter for controllers. All
>> > limits are hierarchical and that's it.
>>
>> Well no, it still matters because I might want to lower the limit
>> below what children have set.
>
> All controllers only get what their ancestors can hand down to them.
> That's basic hierarchical behavior.
>

And many users want non work-conserving systems in which we can add
and remove idle resources. This means that how much bandwidth an
ancestor has is not fixed in stone.

>> > For cpuset, it's tricky
>> > because a nested cgroup might end up with no intersecting execution
>> > resource. The kernel can't have threads which don't have any
>> > execution resources and the solution has been assuming the resources
>> > from higher-ups till there's some. Application control has always
>> > behaved the same way. If the configured affinity becomes empty, the
>> > scheduler ignored it.
>>
>> Actually no, any configuration change that would result in this state
>> is rejected.
>>
>> It's not possible to configure an empty cpuset once tasks are in it,
>> or attach tasks to an empty set.
>> It's also not possible to create this state using setaffinity, these
>> restrictions are always over-ridden by updates, even if they do not
>> need to be.
>
> So, even in traditional hierarchies, this isn't true. You can get to
> no-resource config through cpu hot-unplug and cpuset currently ejects
> tasks to the closest ancestor with execution resources.

This is exactly congruent with what I said. It's not possible to have
tasks attached to an empty cpuset. Ejection is only maintaining this
in the face of a non-failable operation.

>
>> > Because the details on this particular issue can be hashed out in the
>> > future? There's nothing permanently blocking any direction that we
>> > might choose in the future and what's working today will keep working.
>> > Why block the whole thing which can be useful for the majority of use
>> > cases for this particular corner case?
>>
>> Because I do not think sub-process hierarchies are the corner case
>> that you're making them out to be for these controllers and that has
>> real implications for the ultimate direction of this interface.
>
> If that's the case and we fail miserably at creating a reasonable
> programming interface for that, we can always revive thread
> granularity. This is mostly a policy decision after all.

These interfaces should be presented side-by-side. This is not a
reasonable patch-later part of the interface as we depend on it today.

>
>> Also. If we are making disruptive changes here, I would want to
>> discuss merging cpu, cpuset, and cpuacct. What this merge looks like
>> depends on the above.
>
> So, the proposed patches already merge cpu and cpuacct, at least in
> appearance. Given the kitchen-sink nature of cpuset, I don't think it
> makes sense to fuse it with cpu.

Arguments in favor of this:
a) Today the load-balancer has _no_ understanding of group level
cpu-affinity masks.
b) With SCHED_NUMA, we can benefit from also being able to apply (a)
to understand which nodes are usable.

>
> Thanks.
>
> --
> tejun

2015-08-24 23:16:32

by Paul Turner

[permalink] [raw]
Subject: Re: [PATCH 3/3] sched: Implement interface for cgroup unified hierarchy

On Mon, Aug 24, 2015 at 3:49 PM, Tejun Heo <[email protected]> wrote:
> Hello,
>
> On Mon, Aug 24, 2015 at 03:03:05PM -0700, Paul Turner wrote:
>> > Hmm... I was hoping for actual configurations and usage scenarios.
>> > Preferably something people can set up and play with.
>>
>> This is much easier to set up and play with synthetically. Just
>> create the 10 threads and 100 threads above, then experiment with
>> configurations designed to guarantee the set of 100 threads
>> relatively uniform throughput regardless of how many are active. I
>> don't think trying to run a VM stack adds anything except complexity
>> of reproduction here.
>
> Well, but that loses most of the details and why such use cases matter to
> begin with. We can imagine up stuff to induce an arbitrary set of
> requirements.

All that's being proved or disproved here is that it's difficult to
coordinate the consumption of asymmetric thread pools using nice. The
constraints here were drawn from a real-world example.

>
>> > I take that the
>> > CPU intensive helper threads are usually IO workers? Is the scenario
>> > where the VM is set up with a lot of IO devices and different ones may
>> > consume large amount of CPU cycles at any given point?
>>
>> Yes, generally speaking there are a few major classes of IO (flash,
>> disk, network) that a guest may invoke. Each of these backends is
>> separate and chooses its own threading.
>
> Hmmm... if that's the case, would limiting iops on those IO devices
> (or classes of them) work? qemu already implements IO limit mechanism
> after all.

No.

1) They should proceed at the maximum rate they can while still staying
within their provisioning budget.
2) The cost/IO is both inconsistent and changes over time. Attempting
to micro-optimize every backend for this is infeasible, this is
exactly the type of problem that the scheduler can usefully help
arbitrate.
3) Even pretending (2) is fixable, dynamically dividing these
right-to-work tokens between different I/O device backends is
extremely complex.

>
> Anyways, a point here is that threads of the same process competing
> isn't a new problem. There are many ways to make those threads play
> nice as the application itself often has to be involved anyway,
> especially for something like qemu which is heavily involved in
> provisioning resources.

It's certainly not a new problem, but it's a real one, and it's
_hard_. You're proposing removing the best known solution.

>
> cgroups can be a nice brute-force add-on which lets sysadmins do wild
> things but it's inherently hacky and incomplete for coordinating
> threads. For example, what is it gonna do if qemu cloned vcpus and IO
> helpers dynamically off of the same parent thread?

We're talking about sub-process usage here. This is the application
coordinating itself, NOT the sysadmin. Processes are becoming larger
and larger, we need many of the same controls within them that we have
between them.

> It requires the
> application's cooperation anyway but at the same time is painful for
> those applications to actually interact with.

As discussed elsewhere in this thread, this is really not a problem if
you define consistent rules with respect to which parts are managed by
whom. The argument of potential interference is no different to
messing with an application's on-disk configuration behind its back.
Alternate strawmen which greatly improve this from where we are today
have also been proposed.

>
> Thanks.
>
> --
> tejun

2015-08-25 02:37:31

by Kamezawa Hiroyuki

[permalink] [raw]
Subject: Re: [PATCH 3/3] sched: Implement interface for cgroup unified hierarchy

On 2015/08/25 8:15, Paul Turner wrote:
> On Mon, Aug 24, 2015 at 3:49 PM, Tejun Heo <[email protected]> wrote:
>> Hello,
>>
>> On Mon, Aug 24, 2015 at 03:03:05PM -0700, Paul Turner wrote:
>>>> Hmm... I was hoping for actual configurations and usage scenarios.
>>>> Preferably something people can set up and play with.
>>>
>>> This is much easier to set up and play with synthetically. Just
>>> create the 10 threads and 100 threads above, then experiment with
>>> configurations designed to guarantee the set of 100 threads
>>> relatively uniform throughput regardless of how many are active. I
>>> don't think trying to run a VM stack adds anything except complexity
>>> of reproduction here.
>>
>> Well, but that loses most of the details and why such use cases matter to
>> begin with. We can imagine up stuff to induce an arbitrary set of
>> requirements.
>
> All that's being proved or disproved here is that it's difficult to
> coordinate the consumption of asymmetric thread pools using nice. The
> constraints here were drawn from a real-world example.
>
>>
>>>> I take that the
>>>> CPU intensive helper threads are usually IO workers? Is the scenario
>>>> where the VM is set up with a lot of IO devices and different ones may
>>>> consume large amount of CPU cycles at any given point?
>>>
>>> Yes, generally speaking there are a few major classes of IO (flash,
>>> disk, network) that a guest may invoke. Each of these backends is
>>> separate and chooses its own threading.
>>
>> Hmmm... if that's the case, would limiting iops on those IO devices
>> (or classes of them) work? qemu already implements IO limit mechanism
>> after all.
>
> No.
>
> 1) They should proceed at the maximum rate they can while still staying
> within their provisioning budget.
> 2) The cost/IO is both inconsistent and changes over time. Attempting
> to micro-optimize every backend for this is infeasible, this is
> exactly the type of problem that the scheduler can usefully help
> arbitrate.
> 3) Even pretending (2) is fixable, dynamically dividing these
> right-to-work tokens between different I/O device backends is
> extremely complex.
>

I think I should explain my customer's use case of qemu + cpuset/cpu (via libvirt).

(1) Isolating hypervisor threads.
As already discussed, hypervisor threads are isolated by cpuset. Their purpose
is to avoid _latency_ spikes caused by hypervisor behavior, so "nice" cannot be
the solution, as already discussed.

(2) Fixed-rate vcpu service.
Using the cpu controller's quota/period feature, my customer creates vcpu models like
Low (1GHz), Mid (2GHz), and High (3GHz) for an IaaS system.

To do this, each vcpu must be quota-limited independently, which requires per-thread cpu control.
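
For illustration, the per-vcpu setup is roughly the sketch below, which is
what libvirt arranges for us; the paths, the TID and the exact quota:period
numbers are only an example:

#include <stdio.h>

/* Write a single integer to a cgroup control file. */
static int write_val(const char *path, long val)
{
        FILE *f = fopen(path, "w");

        if (!f)
                return -1;
        fprintf(f, "%ld\n", val);
        return fclose(f);
}

int main(void)
{
        /* Hypothetical pre-created per-vcpu group and vcpu thread id. */
        const char *grp = "/sys/fs/cgroup/cpu/vm1/vcpu0";
        long vcpu_tid = 12345;
        char path[256];

        snprintf(path, sizeof(path), "%s/cpu.cfs_period_us", grp);
        write_val(path, 100000);        /* 100ms period */
        snprintf(path, sizeof(path), "%s/cpu.cfs_quota_us", grp);
        write_val(path, 66000);         /* ~2/3 of a core: "2GHz" on a 3GHz host */
        snprintf(path, sizeof(path), "%s/tasks", grp);
        write_val(path, vcpu_tid);      /* per-thread placement (v1 hierarchy) */
        return 0;
}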

In particular, method (1) is used by several enterprise customers to stabilize their systems.

Sub-process control should be provided in some way.

Thanks,
-Kame



>>
>> Anyways, a point here is that threads of the same process competing
>> isn't a new problem. There are many ways to make those threads play
>> nice as the application itself often has to be involved anyway,
>> especially for something like qemu which is heavily involved in
>> provisioning resources.
>
> It's certainly not a new problem, but it's a real one, and it's
> _hard_. You're proposing removing the best known solution.
>
>>
>> cgroups can be a nice brute-force add-on which lets sysadmins do wild
>> things but it's inherently hacky and incomplete for coordinating
>> threads. For example, what is it gonna do if qemu cloned vcpus and IO
>> helpers dynamically off of the same parent thread?
>
> We're talking about sub-process usage here. This is the application
> coordinating itself, NOT the sysadmin. Processes are becoming larger
> and larger, we need many of the same controls within them that we have
> between them.
>
>> It requires the
>> application's cooperation anyway but at the same time is painful for
>> those applications to actually interact with.
>
> As discussed elsewhere in this thread, this is really not a problem if
> you define consistent rules with respect to which parts are managed by
> whom. The argument of potential interference is no different to
> messing with an application's on-disk configuration behind its back.
> Alternate strawmen which greatly improve this from where we are today
> have also been proposed.
>
>>
>> Thanks.
>>
>> --
>> tejun
> --
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to [email protected]
> More majordomo info at http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at http://www.tux.org/lkml/
>

2015-08-25 09:24:51

by Ingo Molnar

[permalink] [raw]
Subject: Re: [PATCH 3/3] sched: Implement interface for cgroup unified hierarchy


* Paul Turner <[email protected]> wrote:

> > Anyways, a point here is that threads of the same process competing
> > isn't a new problem. There are many ways to make those threads play
> > nice as the application itself often has to be involved anyway,
> > especially for something like qemu which is heavily involved in
> > provisioning resources.
>
> It's certainly not a new problem, but it's a real one, and it's
> _hard_. You're proposing removing the best known solution.

Also, just to make sure this is resolved properly, I'm NAK-ing the current
scheduler bits in this series:

NAKed-by: Ingo Molnar <[email protected]>

until all of pjt's API design concerns are resolved. This is conceptual, it is not
a 'we can fix it later' detail.

Tejun, please keep me Cc:-ed to future versions of this series so that I can lift
the NAK if things get resolved.

Thanks,

Ingo

2015-08-25 10:01:11

by Peter Zijlstra

[permalink] [raw]
Subject: Re: [PATCH 3/3] sched: Implement interface for cgroup unified hierarchy

On Tue, Aug 25, 2015 at 11:24:42AM +0200, Ingo Molnar wrote:
>
> * Paul Turner <[email protected]> wrote:
>
> > > Anyways, a point here is that threads of the same process competing
> > > isn't a new problem. There are many ways to make those threads play
> > > nice as the application itself often has to be involved anyway,
> > > especially for something like qemu which is heavily involved in
> > > provisioning resources.
> >
> > It's certainly not a new problem, but it's a real one, and it's
> > _hard_. You're proposing removing the best known solution.
>
> Also, just to make sure this is resolved properly, I'm NAK-ing the current
> scheduler bits in this series:
>
> NAKed-by: Ingo Molnar <[email protected]>
>
> until all of pjt's API design concerns are resolved. This is conceptual, it is not
> a 'we can fix it later' detail.
>
> Tejun, please keep me Cc:-ed to future versions of this series so that I can lift
> the NAK if things get resolved.

You can add:

NAKed-by: Peter Zijlstra <[email protected]>

to that.

There have been at least 3 different groups of people:

- Mike, representing Suse customers
- Kamezawa-san, representing Fujitsu customers
- Paul, representing Google

that claim per-thread control groups are in use and important.

Any replacement _must_ provide for this use case up front; it's not
something that can be cobbled on later.

2015-08-25 19:18:48

by Tejun Heo

[permalink] [raw]
Subject: Re: [PATCH 3/3] sched: Implement interface for cgroup unified hierarchy

Hello, Paul.

On Mon, Aug 24, 2015 at 04:15:59PM -0700, Paul Turner wrote:
> > Hmmm... if that's the case, would limiting iops on those IO devices
> > (or classes of them) work? qemu already implements IO limit mechanism
> > after all.
>
> No.
>
> 1) They should proceed at the maximum rate they can while still staying
> within their provisioning budget.

Ooh, right.

> 2) The cost/IO is both inconsistent and changes over time. Attempting
> to micro-optimize every backend for this is infeasible, this is
> exactly the type of problem that the scheduler can usefully help
> arbitrate.
> 3) Even pretending (2) is fixable, dynamically dividing these
> right-to-work tokens between different I/O device backends is
> extremely complex.
>
> > Anyways, a point here is that threads of the same process competing
> > isn't a new problem. There are many ways to make those threads play
> > nice as the application itself often has to be involved anyway,
> > especially for something like qemu which is heavily involved in
> > provisioning resources.
>
> It's certainly not a new problem, but it's a real one, and it's
> _hard_. You're proposing removing the best known solution.

Well, I'm trying to figure out whether we actually need it and
implement something sane if so. We actually can't do hierarchical
resource distribution inside a process with existing mechanisms, so if
that is something which is beneficial enough, let's go ahead and figure
it out.

> > cgroups can be a nice brute-force add-on which lets sysadmins do wild
> > things but it's inherently hacky and incomplete for coordinating
> > threads. For example, what is it gonna do if qemu cloned vcpus and IO
> > helpers dynamically off of the same parent thread?
>
> We're talking about sub-process usage here. This is the application
> coordinating itself, NOT the sysadmin. Processes are becoming larger
> and larger, we need many of the same controls within them that we have
> between them.
>
> > It requires the
> > application's cooperation anyway but at the same time is painful for
> > those applications to actually interact with.
>
> As discussed elsewhere in this thread, this is really not a problem if
> you define consistent rules with respect to which parts are managed by
> whom. The argument of potential interference is no different to
> messing with an application's on-disk configuration behind its back.
> Alternate strawmen which greatly improve this from where we are today
> have also been proposed.

Let's continue in the other sub-thread, but it's not just about system
management and applications not stepping on each other's toes, although
even just that is extremely painful with the current interface.
cgroup membership is inherently tied to the process tree no matter
who's managing it, which requires coordination from the application
side for sub-process management, and at that point it's really a matter
of putting one and one together.

Thanks.

--
tejun

2015-08-25 21:02:41

by Tejun Heo

[permalink] [raw]
Subject: Re: [PATCH 3/3] sched: Implement interface for cgroup unified hierarchy

Hello,

On Mon, Aug 24, 2015 at 04:06:39PM -0700, Paul Turner wrote:
> > This is an erratic behavior on cpuset's part tho. Nothing else
> > behaves this way and it's borderline buggy.
>
> It's actually the only sane possible interaction here.
>
> If you don't overwrite the masks you can no longer manage cpusets with
> a multi-threaded application.
> If you partially overwrite the masks you can create a host of
> inconsistent behaviors where an application suddenly loses
> parallelism.

It's a layering problem. It'd be fine if cpuset either did "layer
per-thread affinities below w/ config change notification" or "ignore
and/or reject per-thread affinities". What we have now is two layers
manipulating the same field without any mechanism for coordination.

> The *only* consistent way is to clobber all masks uniformly. Then
> either arrange for some notification to the application to re-sync, or
> use sub-sub-containers within the cpuset hierarchy to advertise
> finer-partitions.

I don't get it. How is that the only consistent way? Why is making
irreversible changes even a good way? Just layer the masks and
trigger notification on changes.
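
To make "layer the masks" concrete, the effective affinity would simply be
derived instead of overwritten - something along the lines of the sketch
below (names are made up, this is not code from this series):

#define _GNU_SOURCE
#include <sched.h>
#include <stdbool.h>

/* Keep the cpuset-level mask and the thread's own request separate and
 * derive the effective affinity from their intersection.  Returns false
 * when the intersection is empty, in which case the caller could fall
 * back to the cpuset mask and notify the thread to re-sync. */
static bool effective_affinity(cpu_set_t *cpuset_mask,
                               cpu_set_t *requested_mask,
                               cpu_set_t *effective)
{
        CPU_AND(effective, cpuset_mask, requested_mask);
        return CPU_COUNT(effective) > 0;
}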

> I don't think the case of having a large compute farm with
> "unimportant" and "important" work is particularly fringe. Reducing
> the impact on the "important" work so that we can scavenge more cycles
> for the latency insensitive "unimportant" is very real.

What if optimizing cache allocation across competing threads of a
process can yield, say, a 3% gain across a large compute farm? Is that
fringe?

> Right, but it's exactly because of _how bad_ those other mechanisms
> _are_ that cgroups was originally created. Its growth was not
> managed well from there, but let's not step away from the fact that
> this interface was created to solve this problem.

Sure, at the same time, please don't forget that there are ample
reasons we can't replace more basic mechanisms with cgroups. I'm not
saying this can't be part of cgroup but rather that it's misguided to
plunge into cgroups as the first and only step.

More importantly, I am extremely doubtful that we understand the usage
scenarios and their benefits very well at this point and want to avoid
over-committing to something we'll look back and regret. As it
currently stands, this has a high likelihood of becoming a mismanaged
growth.

For the cache allocation thing, I'd strongly suggest something way
simpler and non-committal - e.g. create a char device node with
simple configuration and basic access control. If this *really* turns
out to be useful and its configuration complex enough to warrant
cgroup integration, let's do it then, and if we actually end up there,
I bet the interface that we'd come up with at that point would be
different from what people are proposing now.

> > Yeah, I understand the similarity part but don't buy that the benefit
> > there is big enough to introduce a kernel API which is expected to be
> > used by individual programs which is radically different from how
> > processes / threads are organized and applications interact with the
> > kernel.
>
> Sorry, I don't quite follow, in what way is it radically different?
> What is magically different about a process versus a thread in this
> sub-division?

I meant cgroupfs as opposed to most other programming interfaces that
we publish to applications. We already have a process / thread
hierarchy which is created through forking/cloning and conventions
built around them for interaction. No sane application programming
interface requires individual applications to open a file somewhere,
echo some values to it and use directory operations to manage its
organization. Will get back to this later.

> > All controllers only get what their ancestors can hand down to them.
> > That's basic hierarchical behavior.
>
> And many users want non work-conserving systems in which we can add
> and remove idle resources. This means that how much bandwidth an
> ancestor has is not fixed in stone.

I'm having a hard time following you on this part of the discussion.
Can you give me an example?

> > If that's the case and we fail miserably at creating a reasonable
> > programming interface for that, we can always revive thread
> > granularity. This is mostly a policy decision after all.
>
> These interfaces should be presented side-by-side. This is not a
> reasonable patch-later part of the interface as we depend on it today.

Revival of thread affinity is trivial and will stay that way for a
long time and the transition is already gradual, so it'll be a lost
opportunity but there is quite a bit of maneuvering room. Anyways, on
with the sub-process interface.

Skipping the description of the problems with the current setup here
as I've repeated it a couple of times in this thread already.

On the other sub-thread, I said that the process/thread tree and
cgroup association are inherently tied. This is because a new child
task is always born into the parent's cgroup, and the only reason
cgroup works for system management use cases is that system management
often controls enough of how processes are created.

The flexible migration that cgroup supports may suggest that an
external agent with enough information can define and manage a
sub-process hierarchy without involving the target application, but
this doesn't necessarily work because such information is often only
available in the application itself, and the internal thread hierarchy
has to agree with the hierarchy that's being imposed upon it - when
threads are dynamically created, different parts of the hierarchy
should be created by different parent threads.

Also, the problem with external and in-application manipulations
stepping on each other's toes is mostly not caused by individual
config settings but by the possibility that they may try to set up
different hierarchies or modify the existing one in a way which is not
expected by the other.

Given that the thread hierarchy already needs to be compatible with
the resource hierarchy, is something unix programs already understand,
and thus lends itself to a much more conventional interface which
doesn't cause organizational conflicts, I think it's logical to use it
for sub-process resource distribution.

So, it comes down to something like the following:

set_resource($TID, $FLAGS, $KEY, $VAL)

- If $TID isn't already a resource group leader, it creates a
sub-cgroup, sets $KEY to $VAL, and moves $TID and all its descendants
to it.

- If $TID is already a resource group leader, set $KEY to $VAL.

- If the process is moved to another cgroup, the sub-hierarchy is
preserved.

The reality is a bit more complex and cgroup core would need to handle
implicit leaf cgroups and duplicating the sub-hierarchy. The biggest
complexity would be extending atomic multi-thread migrations to
accommodate multiple targets, but it already does atomic multi-task
migrations and performing the migrations back-to-back should work.
Controller-side changes wouldn't be much: copying configs to clone the
sub-hierarchy and specifying which are available should be about it.

This should give applications a simple and straightforward interface
to program against while avoiding all the issues with exposing
cgroupfs directly to individual applications.
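
From the application side, the usage would look something like the sketch
below - set_resource() is only proposed here, and the flags and key names
(borrowed from the knob conventions in this series) are purely illustrative:

#include <sys/types.h>

/* Proposed, not implemented - shown only to illustrate the intended
 * usage pattern. */
extern int set_resource(pid_t tid, int flags, const char *key,
                        const char *val);

static void setup_worker_group(pid_t worker_tid)
{
        /* First call on a thread that isn't a resource group leader yet:
         * the kernel creates an implicit sub-cgroup, applies the setting
         * and moves the thread and its descendants into it. */
        set_resource(worker_tid, 0, "cpu.weight", "200");

        /* Further calls on the same leader just update the keys. */
        set_resource(worker_tid, 0, "cpu.max", "66000 100000");
}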

> > So, the proposed patches already merge cpu and cpuacct, at least in
> > appearance. Given the kitchen-sink nature of cpuset, I don't think it
> > makes sense to fuse it with cpu.
>
> Arguments in favor of this:
> a) Today the load-balancer has _no_ understanding of group level
> cpu-affinity masks.
> b) With SCHED_NUMA, we can benefit from also being able to apply (a)
> to understand which nodes are usable.

Controllers can cooperate with each other on the unified hierarchy -
cpu can just query the matching cpuset css about the relevant
attributes and the results will always be properly hierarchical for
cpu too. There's no reason to merge the two controllers for that.

Thanks.

--
tejun

2015-08-25 21:13:47

by Tejun Heo

[permalink] [raw]
Subject: Re: [PATCH 3/3] sched: Implement interface for cgroup unified hierarchy

Hello, Kame.

On Tue, Aug 25, 2015 at 11:36:25AM +0900, Kamezawa Hiroyuki wrote:
> I think I should explain my customer's use case of qemu + cpuset/cpu (via libvirt).
>
> (1) Isolating hypervisor threads.
> As already discussed, hypervisor threads are isolated by cpuset. Their purpose
> is to avoid _latency_ spikes caused by hypervisor behavior, so "nice" cannot be
> the solution, as already discussed.
>
> (2) Fixed-rate vcpu service.
> Using the cpu controller's quota/period feature, my customer creates vcpu models like
> Low (1GHz), Mid (2GHz), and High (3GHz) for an IaaS system.
>
> To do this, each vcpu must be quota-limited independently, which requires per-thread cpu control.
>
> In particular, method (1) is used by several enterprise customers to stabilize their systems.
>
> Sub-process control should be provided in some way.

Can you please take a look at the proposal in my reply to Paul's
email? AFAICS, both of the above cases should be fine with that.

Thanks.

--
tejun