2019-01-29 18:27:01

by Chris Down

Subject: [PATCH] mm: Make memory.emin the baseline for utilisation determination

Roman points out that when we do the low reclaim pass, we scale the
reclaim pressure relative to the position between 0 and the maximum
protection threshold.

However, if the maximum protection is based on memory.elow, and
memory.emin is above zero, this means we still may get binary behaviour
on second-pass low reclaim. This is because we scale starting at 0, not
starting at memory.emin, and since we don't scan at all below emin, we
end up with cliff behaviour.

This should be a fairly uncommon case since usually we don't go into the
second pass, but it makes sense to scale our low reclaim pressure
starting at emin.

You can test this by catting two large sparse files, one in a cgroup
with emin set to some moderate size compared to physical RAM, and
another cgroup without any emin. In both cgroups, set an elow larger
than 50% of physical RAM. The one with emin will have less page
scanning, as reclaim pressure is lower.

Signed-off-by: Chris Down <[email protected]>
Suggested-by: Roman Gushchin <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Tejun Heo <[email protected]>
Cc: Roman Gushchin <[email protected]>
Cc: Dennis Zhou <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
---
include/linux/memcontrol.h | 6 +++--
mm/vmscan.c | 55 +++++++++++++++++++++++---------------
2 files changed, 37 insertions(+), 24 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 290cfbfd60cd..89e460f9612f 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -333,9 +333,11 @@ static inline bool mem_cgroup_disabled(void)
return !cgroup_subsys_enabled(memory_cgrp_subsys);
}

-static inline unsigned long mem_cgroup_protection(struct mem_cgroup *memcg)
+static inline void mem_cgroup_protection(struct mem_cgroup *memcg,
+ unsigned long *min, unsigned long *low)
{
- return max(READ_ONCE(memcg->memory.emin), READ_ONCE(memcg->memory.elow));
+ *min = READ_ONCE(memcg->memory.emin);
+ *low = READ_ONCE(memcg->memory.elow);
}

enum mem_cgroup_protection mem_cgroup_protected(struct mem_cgroup *root,
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 549251818605..f7c4ab39d5d0 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2447,12 +2447,12 @@ static void get_scan_count(struct lruvec *lruvec, struct mem_cgroup *memcg,
int file = is_file_lru(lru);
unsigned long lruvec_size;
unsigned long scan;
- unsigned long protection;
+ unsigned long min, low;

lruvec_size = lruvec_lru_size(lruvec, lru, sc->reclaim_idx);
- protection = mem_cgroup_protection(memcg);
+ mem_cgroup_protection(memcg, &min, &low);

- if (protection > 0) {
+ if (min || low) {
/*
* Scale a cgroup's reclaim pressure by proportioning
* its current usage to its memory.low or memory.min
@@ -2467,28 +2467,38 @@ static void get_scan_count(struct lruvec *lruvec, struct mem_cgroup *memcg,
* set it too low, which is not ideal.
*/
unsigned long cgroup_size = mem_cgroup_size(memcg);
- unsigned long baseline = 0;

/*
- * During the reclaim first pass, we only consider
- * cgroups in excess of their protection setting, but if
- * that doesn't produce free pages, we come back for a
- * second pass where we reclaim from all groups.
+ * If there is any protection in place, we adjust scan
+ * pressure in proportion to how much a group's current
+ * usage exceeds that, in percent.
*
- * To maintain fairness in both cases, the first pass
- * targets groups in proportion to their overage, and
- * the second pass targets groups in proportion to their
- * protection utilization.
- *
- * So on the first pass, a group whose size is 130% of
- * its protection will be targeted at 30% of its size.
- * On the second pass, a group whose size is at 40% of
- * its protection will be
- * targeted at 40% of its size.
+ * There is one special case: in the first reclaim pass,
+ * we skip over all groups that are within their low
+ * protection. If that fails to reclaim enough pages to
+ * satisfy the reclaim goal, we come back and override
+ * the best-effort low protection. However, we still
+ * ideally want to honor how well-behaved groups are in
+ * that case instead of simply punishing them all
+ * equally. As such, we reclaim them based on how much
+ * of their best-effort protection they are using. Usage
+ * below memory.min is excluded from consideration when
+ * calculating utilisation, as it isn't ever
+ * reclaimable, so it might as well not exist for our
+ * purposes.
*/
- if (!sc->memcg_low_reclaim)
- baseline = lruvec_size;
- scan = lruvec_size * cgroup_size / protection - baseline;
+ if (sc->memcg_low_reclaim && low > min) {
+ /*
+ * Reclaim according to utilisation between min
+ * and low
+ */
+ scan = lruvec_size * (cgroup_size - min) /
+ (low - min);
+ } else {
+ /* Reclaim according to protection overage */
+ scan = lruvec_size * cgroup_size /
+ max(min, low) - lruvec_size;
+ }

/*
* Don't allow the scan target to exceed the lruvec
@@ -2504,7 +2514,8 @@ static void get_scan_count(struct lruvec *lruvec, struct mem_cgroup *memcg,
* some cases in the case of large overages.
*
* Also, minimally target SWAP_CLUSTER_MAX pages to keep
- * reclaim moving forwards.
+ * reclaim moving forwards, avoiding decrementing
+ * sc->priority further than desirable.
*/
scan = clamp(scan, SWAP_CLUSTER_MAX, lruvec_size);
} else {
--
2.20.1



2019-01-29 19:05:15

by Chris Down

Subject: [PATCH v2] mm: Make memory.emin the baseline for utilisation determination

Roman points out that when we do the low reclaim pass, we scale the
reclaim pressure relative to the position between 0 and the maximum
protection threshold.

However, if the maximum protection is based on memory.elow, and
memory.emin is above zero, this means we still may get binary behaviour
on second-pass low reclaim. This is because we scale starting at 0, not
starting at memory.emin, and since we don't scan at all below emin, we
end up with cliff behaviour.

This should be a fairly uncommon case since usually we don't go into the
second pass, but it makes sense to scale our low reclaim pressure
starting at emin.

You can test this by catting two large sparse files, one in a cgroup
with emin set to some moderate size compared to physical RAM, and
another cgroup without any emin. In both cgroups, set an elow larger
than 50% of physical RAM. The one with emin will have less page
scanning, as reclaim pressure is lower.

Signed-off-by: Chris Down <[email protected]>
Suggested-by: Roman Gushchin <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Tejun Heo <[email protected]>
Cc: Roman Gushchin <[email protected]>
Cc: Dennis Zhou <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
---
include/linux/memcontrol.h | 12 ++++++---
mm/vmscan.c | 55 +++++++++++++++++++++++---------------
2 files changed, 41 insertions(+), 26 deletions(-)

I'm an idiot and forgot to also update the !CONFIG_MEMCG definition for
mem_cgroup_protection. This is the fixed version.

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 290cfbfd60cd..ba99d25d2a98 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -333,9 +333,11 @@ static inline bool mem_cgroup_disabled(void)
return !cgroup_subsys_enabled(memory_cgrp_subsys);
}

-static inline unsigned long mem_cgroup_protection(struct mem_cgroup *memcg)
+static inline void mem_cgroup_protection(struct mem_cgroup *memcg,
+ unsigned long *min, unsigned long *low)
{
- return max(READ_ONCE(memcg->memory.emin), READ_ONCE(memcg->memory.elow));
+ *min = READ_ONCE(memcg->memory.emin);
+ *low = READ_ONCE(memcg->memory.elow);
}

enum mem_cgroup_protection mem_cgroup_protected(struct mem_cgroup *root,
@@ -826,9 +828,11 @@ static inline void memcg_memory_event_mm(struct mm_struct *mm,
{
}

-static inline unsigned long mem_cgroup_protection(struct mem_cgroup *memcg)
+static inline void mem_cgroup_protection(struct mem_cgroup *memcg,
+ unsigned long *min, unsigned long *low)
{
- return 0;
+ *min = NULL;
+ *low = NULL;
}

static inline enum mem_cgroup_protection mem_cgroup_protected(
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 549251818605..f7c4ab39d5d0 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2447,12 +2447,12 @@ static void get_scan_count(struct lruvec *lruvec, struct mem_cgroup *memcg,
int file = is_file_lru(lru);
unsigned long lruvec_size;
unsigned long scan;
- unsigned long protection;
+ unsigned long min, low;

lruvec_size = lruvec_lru_size(lruvec, lru, sc->reclaim_idx);
- protection = mem_cgroup_protection(memcg);
+ mem_cgroup_protection(memcg, &min, &low);

- if (protection > 0) {
+ if (min || low) {
/*
* Scale a cgroup's reclaim pressure by proportioning
* its current usage to its memory.low or memory.min
@@ -2467,28 +2467,38 @@ static void get_scan_count(struct lruvec *lruvec, struct mem_cgroup *memcg,
* set it too low, which is not ideal.
*/
unsigned long cgroup_size = mem_cgroup_size(memcg);
- unsigned long baseline = 0;

/*
- * During the reclaim first pass, we only consider
- * cgroups in excess of their protection setting, but if
- * that doesn't produce free pages, we come back for a
- * second pass where we reclaim from all groups.
+ * If there is any protection in place, we adjust scan
+ * pressure in proportion to how much a group's current
+ * usage exceeds that, in percent.
*
- * To maintain fairness in both cases, the first pass
- * targets groups in proportion to their overage, and
- * the second pass targets groups in proportion to their
- * protection utilization.
- *
- * So on the first pass, a group whose size is 130% of
- * its protection will be targeted at 30% of its size.
- * On the second pass, a group whose size is at 40% of
- * its protection will be
- * targeted at 40% of its size.
+ * There is one special case: in the first reclaim pass,
+ * we skip over all groups that are within their low
+ * protection. If that fails to reclaim enough pages to
+ * satisfy the reclaim goal, we come back and override
+ * the best-effort low protection. However, we still
+ * ideally want to honor how well-behaved groups are in
+ * that case instead of simply punishing them all
+ * equally. As such, we reclaim them based on how much
+ * of their best-effort protection they are using. Usage
+ * below memory.min is excluded from consideration when
+ * calculating utilisation, as it isn't ever
+ * reclaimable, so it might as well not exist for our
+ * purposes.
*/
- if (!sc->memcg_low_reclaim)
- baseline = lruvec_size;
- scan = lruvec_size * cgroup_size / protection - baseline;
+ if (sc->memcg_low_reclaim && low > min) {
+ /*
+ * Reclaim according to utilisation between min
+ * and low
+ */
+ scan = lruvec_size * (cgroup_size - min) /
+ (low - min);
+ } else {
+ /* Reclaim according to protection overage */
+ scan = lruvec_size * cgroup_size /
+ max(min, low) - lruvec_size;
+ }

/*
* Don't allow the scan target to exceed the lruvec
@@ -2504,7 +2514,8 @@ static void get_scan_count(struct lruvec *lruvec, struct mem_cgroup *memcg,
* some cases in the case of large overages.
*
* Also, minimally target SWAP_CLUSTER_MAX pages to keep
- * reclaim moving forwards.
+ * reclaim moving forwards, avoiding decrementing
+ * sc->priority further than desirable.
*/
scan = clamp(scan, SWAP_CLUSTER_MAX, lruvec_size);
} else {
--
2.20.1


2019-01-29 19:15:52

by Chris Down

Subject: Re: [PATCH v3] mm: Make memory.emin the baseline for utilisation determination

Roman points out that when we do the low reclaim pass, we scale the
reclaim pressure relative to the position between 0 and the maximum
protection threshold.

However, if the maximum protection is based on memory.elow, and
memory.emin is above zero, this means we still may get binary behaviour
on second-pass low reclaim. This is because we scale starting at 0, not
starting at memory.emin, and since we don't scan at all below emin, we
end up with cliff behaviour.

This should be a fairly uncommon case since usually we don't go into the
second pass, but it makes sense to scale our low reclaim pressure
starting at emin.

You can test this by catting two large sparse files, one in a cgroup
with emin set to some moderate size compared to physical RAM, and
another cgroup without any emin. In both cgroups, set an elow larger
than 50% of physical RAM. The one with emin will have less page
scanning, as reclaim pressure is lower.

Signed-off-by: Chris Down <[email protected]>
Suggested-by: Roman Gushchin <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Tejun Heo <[email protected]>
Cc: Roman Gushchin <[email protected]>
Cc: Dennis Zhou <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
---
include/linux/memcontrol.h | 12 ++++++---
mm/vmscan.c | 55 +++++++++++++++++++++++---------------
2 files changed, 41 insertions(+), 26 deletions(-)

...well, I sent it with NULL for !CONFIG_MEMCG when I wanted 0. This is the
fixed fix(tm) patch.

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 290cfbfd60cd..6f7e0e1b581d 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -333,9 +333,11 @@ static inline bool mem_cgroup_disabled(void)
return !cgroup_subsys_enabled(memory_cgrp_subsys);
}

-static inline unsigned long mem_cgroup_protection(struct mem_cgroup *memcg)
+static inline void mem_cgroup_protection(struct mem_cgroup *memcg,
+ unsigned long *min, unsigned long *low)
{
- return max(READ_ONCE(memcg->memory.emin), READ_ONCE(memcg->memory.elow));
+ *min = READ_ONCE(memcg->memory.emin);
+ *low = READ_ONCE(memcg->memory.elow);
}

enum mem_cgroup_protection mem_cgroup_protected(struct mem_cgroup *root,
@@ -826,9 +828,11 @@ static inline void memcg_memory_event_mm(struct mm_struct *mm,
{
}

-static inline unsigned long mem_cgroup_protection(struct mem_cgroup *memcg)
+static inline void mem_cgroup_protection(struct mem_cgroup *memcg,
+ unsigned long *min, unsigned long *low)
{
- return 0;
+ *min = 0;
+ *low = 0;
}

static inline enum mem_cgroup_protection mem_cgroup_protected(
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 549251818605..f7c4ab39d5d0 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2447,12 +2447,12 @@ static void get_scan_count(struct lruvec *lruvec, struct mem_cgroup *memcg,
int file = is_file_lru(lru);
unsigned long lruvec_size;
unsigned long scan;
- unsigned long protection;
+ unsigned long min, low;

lruvec_size = lruvec_lru_size(lruvec, lru, sc->reclaim_idx);
- protection = mem_cgroup_protection(memcg);
+ mem_cgroup_protection(memcg, &min, &low);

- if (protection > 0) {
+ if (min || low) {
/*
* Scale a cgroup's reclaim pressure by proportioning
* its current usage to its memory.low or memory.min
@@ -2467,28 +2467,38 @@ static void get_scan_count(struct lruvec *lruvec, struct mem_cgroup *memcg,
* set it too low, which is not ideal.
*/
unsigned long cgroup_size = mem_cgroup_size(memcg);
- unsigned long baseline = 0;

/*
- * During the reclaim first pass, we only consider
- * cgroups in excess of their protection setting, but if
- * that doesn't produce free pages, we come back for a
- * second pass where we reclaim from all groups.
+ * If there is any protection in place, we adjust scan
+ * pressure in proportion to how much a group's current
+ * usage exceeds that, in percent.
*
- * To maintain fairness in both cases, the first pass
- * targets groups in proportion to their overage, and
- * the second pass targets groups in proportion to their
- * protection utilization.
- *
- * So on the first pass, a group whose size is 130% of
- * its protection will be targeted at 30% of its size.
- * On the second pass, a group whose size is at 40% of
- * its protection will be
- * targeted at 40% of its size.
+ * There is one special case: in the first reclaim pass,
+ * we skip over all groups that are within their low
+ * protection. If that fails to reclaim enough pages to
+ * satisfy the reclaim goal, we come back and override
+ * the best-effort low protection. However, we still
+ * ideally want to honor how well-behaved groups are in
+ * that case instead of simply punishing them all
+ * equally. As such, we reclaim them based on how much
+ * of their best-effort protection they are using. Usage
+ * below memory.min is excluded from consideration when
+ * calculating utilisation, as it isn't ever
+ * reclaimable, so it might as well not exist for our
+ * purposes.
*/
- if (!sc->memcg_low_reclaim)
- baseline = lruvec_size;
- scan = lruvec_size * cgroup_size / protection - baseline;
+ if (sc->memcg_low_reclaim && low > min) {
+ /*
+ * Reclaim according to utilisation between min
+ * and low
+ */
+ scan = lruvec_size * (cgroup_size - min) /
+ (low - min);
+ } else {
+ /* Reclaim according to protection overage */
+ scan = lruvec_size * cgroup_size /
+ max(min, low) - lruvec_size;
+ }

/*
* Don't allow the scan target to exceed the lruvec
@@ -2504,7 +2514,8 @@ static void get_scan_count(struct lruvec *lruvec, struct mem_cgroup *memcg,
* some cases in the case of large overages.
*
* Also, minimally target SWAP_CLUSTER_MAX pages to keep
- * reclaim moving forwards.
+ * reclaim moving forwards, avoiding decrementing
+ * sc->priority further than desirable.
*/
scan = clamp(scan, SWAP_CLUSTER_MAX, lruvec_size);
} else {
--
2.20.1


2019-01-30 23:02:08

by Roman Gushchin

Subject: Re: [PATCH v3] mm: Make memory.emin the baseline for utilisation determination

On Tue, Jan 29, 2019 at 02:15:25PM -0500, Chris Down wrote:
> Roman points out that when we do the low reclaim pass, we scale the
> reclaim pressure relative to the position between 0 and the maximum
> protection threshold.
>
> However, if the maximum protection is based on memory.elow, and
> memory.emin is above zero, this means we still may get binary behaviour
> on second-pass low reclaim. This is because we scale starting at 0, not
> starting at memory.emin, and since we don't scan at all below emin, we
> end up with cliff behaviour.
>
> This should be a fairly uncommon case since usually we don't go into the
> second pass, but it makes sense to scale our low reclaim pressure
> starting at emin.
>
> You can test this by catting two large sparse files, one in a cgroup
> with emin set to some moderate size compared to physical RAM, and
> another cgroup without any emin. In both cgroups, set an elow larger
> than 50% of physical RAM. The one with emin will have less page
> scanning, as reclaim pressure is lower.
>
> Signed-off-by: Chris Down <[email protected]>
> Suggested-by: Roman Gushchin <[email protected]>
> Cc: Johannes Weiner <[email protected]>
> Cc: Andrew Morton <[email protected]>
> Cc: Michal Hocko <[email protected]>
> Cc: Tejun Heo <[email protected]>
> Cc: Roman Gushchin <[email protected]>
> Cc: Dennis Zhou <[email protected]>
> Cc: [email protected]
> Cc: [email protected]
> Cc: [email protected]
> Cc: [email protected]

Acked-by: Roman Gushchin <[email protected]>

Thanks!

2019-02-01 05:28:36

by Chris Down

Subject: [PATCH v4] mm: Make memory.emin the baseline for utilisation determination

Roman points out that when we do the low reclaim pass, we scale the
reclaim pressure relative to the position between 0 and the maximum
protection threshold.

However, if the maximum protection is based on memory.elow, and
memory.emin is above zero, this means we still may get binary behaviour
on second-pass low reclaim. This is because we scale starting at 0, not
starting at memory.emin, and since we don't scan at all below emin, we
end up with cliff behaviour.

This should be a fairly uncommon case since usually we don't go into the
second pass, but it makes sense to scale our low reclaim pressure
starting at emin.

You can test this by catting two large sparse files, one in a cgroup
with emin set to some moderate size compared to physical RAM, and
another cgroup without any emin. In both cgroups, set an elow larger
than 50% of physical RAM. The one with emin will have less page
scanning, as reclaim pressure is lower.

Signed-off-by: Chris Down <[email protected]>
Suggested-by: Roman Gushchin <[email protected]>
Cc: Johannes Weiner <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Tejun Heo <[email protected]>
Cc: Roman Gushchin <[email protected]>
Cc: Dennis Zhou <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
---
include/linux/memcontrol.h | 19 ++++++++-----
mm/vmscan.c | 55 +++++++++++++++++++++++---------------
2 files changed, 46 insertions(+), 28 deletions(-)

Rebased on top of, and applying the same idea as, the fix that handles
cgroup_disable=memory properly for the original proportional patch
([email protected], "mm, memcg: Handle
cgroup_disable=memory when getting memcg protection").

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 49742489aa56..0fcbea7ad0c8 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -333,12 +333,17 @@ static inline bool mem_cgroup_disabled(void)
return !cgroup_subsys_enabled(memory_cgrp_subsys);
}

-static inline unsigned long mem_cgroup_protection(struct mem_cgroup *memcg)
+static inline void mem_cgroup_protection(struct mem_cgroup *memcg,
+ unsigned long *min, unsigned long *low)
{
- if (mem_cgroup_disabled())
- return 0;
+ if (mem_cgroup_disabled()) {
+ *min = 0;
+ *low = 0;
+ return;
+ }

- return max(READ_ONCE(memcg->memory.emin), READ_ONCE(memcg->memory.elow));
+ *min = READ_ONCE(memcg->memory.emin);
+ *low = READ_ONCE(memcg->memory.elow);
}

enum mem_cgroup_protection mem_cgroup_protected(struct mem_cgroup *root,
@@ -829,9 +834,11 @@ static inline void memcg_memory_event_mm(struct mm_struct *mm,
{
}

-static inline unsigned long mem_cgroup_protection(struct mem_cgroup *memcg)
+static inline void mem_cgroup_protection(struct mem_cgroup *memcg,
+ unsigned long *min, unsigned long *low)
{
- return 0;
+ *min = 0;
+ *low = 0;
}

static inline enum mem_cgroup_protection mem_cgroup_protected(
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 549251818605..f7c4ab39d5d0 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2447,12 +2447,12 @@ static void get_scan_count(struct lruvec *lruvec, struct mem_cgroup *memcg,
int file = is_file_lru(lru);
unsigned long lruvec_size;
unsigned long scan;
- unsigned long protection;
+ unsigned long min, low;

lruvec_size = lruvec_lru_size(lruvec, lru, sc->reclaim_idx);
- protection = mem_cgroup_protection(memcg);
+ mem_cgroup_protection(memcg, &min, &low);

- if (protection > 0) {
+ if (min || low) {
/*
* Scale a cgroup's reclaim pressure by proportioning
* its current usage to its memory.low or memory.min
@@ -2467,28 +2467,38 @@ static void get_scan_count(struct lruvec *lruvec, struct mem_cgroup *memcg,
* set it too low, which is not ideal.
*/
unsigned long cgroup_size = mem_cgroup_size(memcg);
- unsigned long baseline = 0;

/*
- * During the reclaim first pass, we only consider
- * cgroups in excess of their protection setting, but if
- * that doesn't produce free pages, we come back for a
- * second pass where we reclaim from all groups.
+ * If there is any protection in place, we adjust scan
+ * pressure in proportion to how much a group's current
+ * usage exceeds that, in percent.
*
- * To maintain fairness in both cases, the first pass
- * targets groups in proportion to their overage, and
- * the second pass targets groups in proportion to their
- * protection utilization.
- *
- * So on the first pass, a group whose size is 130% of
- * its protection will be targeted at 30% of its size.
- * On the second pass, a group whose size is at 40% of
- * its protection will be
- * targeted at 40% of its size.
+ * There is one special case: in the first reclaim pass,
+ * we skip over all groups that are within their low
+ * protection. If that fails to reclaim enough pages to
+ * satisfy the reclaim goal, we come back and override
+ * the best-effort low protection. However, we still
+ * ideally want to honor how well-behaved groups are in
+ * that case instead of simply punishing them all
+ * equally. As such, we reclaim them based on how much
+ * of their best-effort protection they are using. Usage
+ * below memory.min is excluded from consideration when
+ * calculating utilisation, as it isn't ever
+ * reclaimable, so it might as well not exist for our
+ * purposes.
*/
- if (!sc->memcg_low_reclaim)
- baseline = lruvec_size;
- scan = lruvec_size * cgroup_size / protection - baseline;
+ if (sc->memcg_low_reclaim && low > min) {
+ /*
+ * Reclaim according to utilisation between min
+ * and low
+ */
+ scan = lruvec_size * (cgroup_size - min) /
+ (low - min);
+ } else {
+ /* Reclaim according to protection overage */
+ scan = lruvec_size * cgroup_size /
+ max(min, low) - lruvec_size;
+ }

/*
* Don't allow the scan target to exceed the lruvec
@@ -2504,7 +2514,8 @@ static void get_scan_count(struct lruvec *lruvec, struct mem_cgroup *memcg,
* some cases in the case of large overages.
*
* Also, minimally target SWAP_CLUSTER_MAX pages to keep
- * reclaim moving forwards.
+ * reclaim moving forwards, avoiding decrementing
+ * sc->priority further than desirable.
*/
scan = clamp(scan, SWAP_CLUSTER_MAX, lruvec_size);
} else {
--
2.20.1


2019-02-06 15:16:54

by Johannes Weiner

Subject: Re: [PATCH v4] mm: Make memory.emin the baseline for utilisation determination

On Fri, Feb 01, 2019 at 12:18:10AM -0500, Chris Down wrote:
> Roman points out that when we do the low reclaim pass, we scale the
> reclaim pressure relative to the position between 0 and the maximum
> protection threshold.
>
> However, if the maximum protection is based on memory.elow, and
> memory.emin is above zero, this means we still may get binary behaviour
> on second-pass low reclaim. This is because we scale starting at 0, not
> starting at memory.emin, and since we don't scan at all below emin, we
> end up with cliff behaviour.
>
> This should be a fairly uncommon case since usually we don't go into the
> second pass, but it makes sense to scale our low reclaim pressure
> starting at emin.
>
> You can test this by catting two large sparse files, one in a cgroup
> with emin set to some moderate size compared to physical RAM, and
> another cgroup without any emin. In both cgroups, set an elow larger
> than 50% of physical RAM. The one with emin will have less page
> scanning, as reclaim pressure is lower.
>
> Signed-off-by: Chris Down <[email protected]>
> Suggested-by: Roman Gushchin <[email protected]>
> Cc: Johannes Weiner <[email protected]>
> Cc: Andrew Morton <[email protected]>
> Cc: Michal Hocko <[email protected]>
> Cc: Tejun Heo <[email protected]>
> Cc: Roman Gushchin <[email protected]>
> Cc: Dennis Zhou <[email protected]>
> Cc: [email protected]
> Cc: [email protected]
> Cc: [email protected]
> Cc: [email protected]

Acked-by: Johannes Weiner <[email protected]>