New series based on the discussion in the previous thread around
getting lock_page_memcg() out of rmap.
I beat on this with concurrent high-frequency moving of tasks that
partially share a swapped-out shmem file. I didn't spot anything
problematic. That said, it is quite subtle, and Hugh, I'd feel better
if you could also subject it to your torture suite ;)
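For reference, the stress test was roughly along the lines of the
sketch below. This is a hand-waved illustration rather than the exact
test: the cgroup paths, child count and file size are made up, and
inducing the swap-out (e.g. via a tight memory.limit_in_bytes on the
two groups) is left out.

	/*
	 * Hand-waved reproducer sketch, not the exact test that was run.
	 * Assumes cgroup1 memory mounted at /sys/fs/cgroup/memory with two
	 * pre-made children A and B, both with move_charge_at_immigrate=3.
	 */
	#include <fcntl.h>
	#include <stdio.h>
	#include <string.h>
	#include <sys/mman.h>
	#include <sys/types.h>
	#include <unistd.h>

	#define NCHILD	4
	#define FSIZE	(64UL << 20)

	static void move_to(const char *cg, pid_t pid)
	{
		char path[128], buf[32];
		int fd;

		snprintf(path, sizeof(path),
			 "/sys/fs/cgroup/memory/%s/cgroup.procs", cg);
		fd = open(path, O_WRONLY);
		if (fd < 0)
			return;
		snprintf(buf, sizeof(buf), "%d", (int)pid);
		write(fd, buf, strlen(buf));
		close(fd);
	}

	int main(void)
	{
		int fd = open("/dev/shm/repro", O_RDWR | O_CREAT, 0600);
		pid_t pids[NCHILD];
		int i;

		ftruncate(fd, FSIZE);

		for (i = 0; i < NCHILD; i++) {
			pids[i] = fork();
			if (!pids[i]) {
				/* each child keeps faulting an overlapping half */
				char *map = mmap(NULL, FSIZE, PROT_READ | PROT_WRITE,
						 MAP_SHARED, fd, 0);
				size_t start = i * (FSIZE / 8), off;

				for (;;)
					for (off = start; off < start + FSIZE / 2;
					     off += 4096)
						map[off]++;
			}
		}

		/* bounce the children between the two memcgs as fast as possible */
		for (;;)
			for (i = 0; i < NCHILD; i++) {
				move_to("A", pids[i]);
				move_to("B", pids[i]);
			}
	}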
Thanks!
Against yesterday's mm-unstable.
Documentation/admin-guide/cgroup-v1/memory.rst | 11 ++++-
mm/memcontrol.c | 56 ++++++++++++++++++------
mm/rmap.c | 26 ++++-------
3 files changed, 60 insertions(+), 33 deletions(-)
The previous patch made sure charge moving only touches pages for
which page_mapped() is stable. lock_page_memcg() is no longer needed.
Signed-off-by: Johannes Weiner <[email protected]>
---
mm/rmap.c | 26 ++++++++------------------
1 file changed, 8 insertions(+), 18 deletions(-)
diff --git a/mm/rmap.c b/mm/rmap.c
index b616870a09be..32e48b1c5847 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1222,9 +1222,6 @@ void page_add_anon_rmap(struct page *page,
bool compound = flags & RMAP_COMPOUND;
bool first = true;
- if (unlikely(PageKsm(page)))
- lock_page_memcg(page);
-
/* Is page being mapped by PTE? Is this its first map to be added? */
if (likely(!compound)) {
first = atomic_inc_and_test(&page->_mapcount);
@@ -1262,15 +1259,14 @@ void page_add_anon_rmap(struct page *page,
if (nr)
__mod_lruvec_page_state(page, NR_ANON_MAPPED, nr);
- if (unlikely(PageKsm(page)))
- unlock_page_memcg(page);
-
- /* address might be in next vma when migration races vma_adjust */
- else if (first)
- __page_set_anon_rmap(page, vma, address,
- !!(flags & RMAP_EXCLUSIVE));
- else
- __page_check_anon_rmap(page, vma, address);
+ if (likely(!PageKsm(page))) {
+ /* address might be in next vma when migration races vma_adjust */
+ if (first)
+ __page_set_anon_rmap(page, vma, address,
+ !!(flags & RMAP_EXCLUSIVE));
+ else
+ __page_check_anon_rmap(page, vma, address);
+ }
mlock_vma_page(page, vma, compound);
}
@@ -1329,7 +1325,6 @@ void page_add_file_rmap(struct page *page,
bool first;
VM_BUG_ON_PAGE(compound && !PageTransHuge(page), page);
- lock_page_memcg(page);
/* Is page being mapped by PTE? Is this its first map to be added? */
if (likely(!compound)) {
@@ -1365,7 +1360,6 @@ void page_add_file_rmap(struct page *page,
NR_SHMEM_PMDMAPPED : NR_FILE_PMDMAPPED, nr_pmdmapped);
if (nr)
__mod_lruvec_page_state(page, NR_FILE_MAPPED, nr);
- unlock_page_memcg(page);
mlock_vma_page(page, vma, compound);
}
@@ -1394,8 +1388,6 @@ void page_remove_rmap(struct page *page,
return;
}
- lock_page_memcg(page);
-
/* Is page being unmapped by PTE? Is this its last map to be removed? */
if (likely(!compound)) {
last = atomic_add_negative(-1, &page->_mapcount);
@@ -1451,8 +1443,6 @@ void page_remove_rmap(struct page *page,
* and remember that it's only reliable while mapped.
*/
- unlock_page_memcg(page);
-
munlock_vma_page(page, vma, compound);
}
--
2.38.1
Charge moving mode in cgroup1 allows memory to follow tasks as they
migrate between cgroups. This is, and always has been, a questionable
thing to do - for several reasons.
First, it's expensive. Pages need to be identified, locked and
isolated from various MM operations, and reassigned, one by one.
Second, it's unreliable. Once pages are charged to a cgroup, there
isn't always a clear owner task anymore. Cache isn't moved at all, for
example. Mapped memory is moved - but if trylocking or isolating a
page fails, it's arbitrarily left behind. Frequent moving between
domains may leave a task's memory scattered all over the place.
Third, it isn't really needed. Launcher tasks can kick off workload
tasks directly in their target cgroup. Using dedicated per-workload
groups allows fine-grained policy adjustments - no need to move tasks
and their physical pages between control domains. The feature was
never forward-ported to cgroup2, and it hasn't been missed.
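As a hand-waved illustration of that alternative, a launcher can enter
the target group before exec'ing the workload instead of moving
charges after the fact. The cgroup path below is made up and error
handling is omitted:

	/* minimal sketch: join the workload's cgroup, then exec into it */
	#include <fcntl.h>
	#include <stdio.h>
	#include <unistd.h>

	int main(int argc, char *argv[])
	{
		int fd = open("/sys/fs/cgroup/memory/workload/cgroup.procs",
			      O_WRONLY);

		dprintf(fd, "%d\n", getpid());
		close(fd);
		return execv(argv[1], &argv[1]);
	}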
Despite it being a niche usecase, the maintenance overhead of
supporting it is enormous. Because pages are moved while they are live
and subject to various MM operations, the synchronization rules are
complicated. There are lock_page_memcg() calls in MM and FS code that
non-cgroup people don't understand. In some cases we've been able to
shift code and cgroup API calls around such that we can rely on native
locking as much as possible. But that's fragile, and sometimes we need
to hold MM locks for longer than we otherwise would (the pte lock, for
example).
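To illustrate, the pattern that ends up sprinkled into otherwise
cgroup-agnostic paths looks roughly like this (paraphrased from the
rmap side, with the surrounding code elided):

	lock_page_memcg(page);		/* keep page_memcg() stable... */
	first = atomic_inc_and_test(&page->_mapcount);
	...
	__mod_lruvec_page_state(page, NR_FILE_MAPPED, nr);
	unlock_page_memcg(page);	/* ...only needed for charge moving */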
Mark the feature deprecated. Hopefully we can remove it soon.
Signed-off-by: Johannes Weiner <[email protected]>
---
Documentation/admin-guide/cgroup-v1/memory.rst | 11 ++++++++++-
mm/memcontrol.c | 4 ++++
2 files changed, 14 insertions(+), 1 deletion(-)
diff --git a/Documentation/admin-guide/cgroup-v1/memory.rst b/Documentation/admin-guide/cgroup-v1/memory.rst
index 60370f2c67b9..87d7877b98ec 100644
--- a/Documentation/admin-guide/cgroup-v1/memory.rst
+++ b/Documentation/admin-guide/cgroup-v1/memory.rst
@@ -86,6 +86,8 @@ Brief summary of control files.
memory.swappiness set/show swappiness parameter of vmscan
(See sysctl's vm.swappiness)
memory.move_charge_at_immigrate set/show controls of moving charges
+ This knob is deprecated and shouldn't be
+ used.
memory.oom_control set/show oom controls.
memory.numa_stat show the number of memory usage per numa
node
@@ -717,9 +719,16 @@ Soft limits can be setup by using the following commands (in this example we
It is recommended to set the soft limit always below the hard limit,
otherwise the hard limit will take precedence.
-8. Move charges at task migration
+8. Move charges at task migration (DEPRECATED!)
=================================
+THIS IS DEPRECATED!
+
+It's expensive and unreliable! It's better practice to launch workload
+tasks directly from inside their target cgroup. Use dedicated workload
+cgroups to allow fine-grained policy adjustments without having to
+move physical pages between control domains.
+
Users can move charges associated with a task along with task migration, that
is, uncharge task's pages from the old cgroup and charge them to the new cgroup.
This feature is not supported in !CONFIG_MMU environments because of lack of
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index b696354c1b21..e650a38d9a90 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -3919,6 +3919,10 @@ static int mem_cgroup_move_charge_write(struct cgroup_subsys_state *css,
{
struct mem_cgroup *memcg = mem_cgroup_from_css(css);
+ pr_warn_once("Cgroup memory moving is deprecated. "
+ "Please report your usecase to [email protected] if you "
+ "depend on this functionality.\n");
+
if (val & ~MOVE_MASK)
return -EINVAL;
--
2.38.1
On Tue, Dec 6, 2022 at 9:14 AM Johannes Weiner <[email protected]> wrote:
>
> Charge moving mode in cgroup1 allows memory to follow tasks as they
> migrate between cgroups. This is, and always has been, a questionable
> thing to do - for several reasons.
>
> First, it's expensive. Pages need to be identified, locked and
> isolated from various MM operations, and reassigned, one by one.
>
> Second, it's unreliable. Once pages are charged to a cgroup, there
> isn't always a clear owner task anymore. Cache isn't moved at all, for
> example. Mapped memory is moved - but if trylocking or isolating a
> page fails, it's arbitrarily left behind. Frequent moving between
> domains may leave a task's memory scattered all over the place.
>
> Third, it isn't really needed. Launcher tasks can kick off workload
> tasks directly in their target cgroup. Using dedicated per-workload
> groups allows fine-grained policy adjustments - no need to move tasks
> and their physical pages between control domains. The feature was
> never forward-ported to cgroup2, and it hasn't been missed.
>
> Despite it being a niche usecase, the maintenance overhead of
> supporting it is enormous. Because pages are moved while they are live
> and subject to various MM operations, the synchronization rules are
> complicated. There are lock_page_memcg() in MM and FS code, which
> non-cgroup people don't understand. In some cases we've been able to
> shift code and cgroup API calls around such that we can rely on native
> locking as much as possible. But that's fragile, and sometimes we need
> to hold MM locks for longer than we otherwise would (pte lock e.g.).
>
> Mark the feature deprecated. Hopefully we can remove it soon.
>
> Signed-off-by: Johannes Weiner <[email protected]>
Acked-by: Shakeel Butt <[email protected]>
I would request that this patch be backported to stable kernels as
well, so that users who only update to newer kernels much later still
get an early warning.
On Tue, 6 Dec 2022, Johannes Weiner wrote:
> The previous patch made sure charge moving only touches pages for
> which page_mapped() is stable. lock_page_memcg() is no longer needed.
>
> Signed-off-by: Johannes Weiner <[email protected]>
Acked-by: Hugh Dickins <[email protected]>
> ---
> mm/rmap.c | 26 ++++++++------------------
> 1 file changed, 8 insertions(+), 18 deletions(-)
>
> diff --git a/mm/rmap.c b/mm/rmap.c
> index b616870a09be..32e48b1c5847 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -1222,9 +1222,6 @@ void page_add_anon_rmap(struct page *page,
> bool compound = flags & RMAP_COMPOUND;
> bool first = true;
>
> - if (unlikely(PageKsm(page)))
> - lock_page_memcg(page);
> -
> /* Is page being mapped by PTE? Is this its first map to be added? */
> if (likely(!compound)) {
> first = atomic_inc_and_test(&page->_mapcount);
> @@ -1262,15 +1259,14 @@ void page_add_anon_rmap(struct page *page,
> if (nr)
> __mod_lruvec_page_state(page, NR_ANON_MAPPED, nr);
>
> - if (unlikely(PageKsm(page)))
> - unlock_page_memcg(page);
> -
> - /* address might be in next vma when migration races vma_adjust */
> - else if (first)
> - __page_set_anon_rmap(page, vma, address,
> - !!(flags & RMAP_EXCLUSIVE));
> - else
> - __page_check_anon_rmap(page, vma, address);
> + if (likely(!PageKsm(page))) {
> + /* address might be in next vma when migration races vma_adjust */
> + if (first)
> + __page_set_anon_rmap(page, vma, address,
> + !!(flags & RMAP_EXCLUSIVE));
> + else
> + __page_check_anon_rmap(page, vma, address);
> + }
>
> mlock_vma_page(page, vma, compound);
> }
> @@ -1329,7 +1325,6 @@ void page_add_file_rmap(struct page *page,
> bool first;
>
> VM_BUG_ON_PAGE(compound && !PageTransHuge(page), page);
> - lock_page_memcg(page);
>
> /* Is page being mapped by PTE? Is this its first map to be added? */
> if (likely(!compound)) {
> @@ -1365,7 +1360,6 @@ void page_add_file_rmap(struct page *page,
> NR_SHMEM_PMDMAPPED : NR_FILE_PMDMAPPED, nr_pmdmapped);
> if (nr)
> __mod_lruvec_page_state(page, NR_FILE_MAPPED, nr);
> - unlock_page_memcg(page);
>
> mlock_vma_page(page, vma, compound);
> }
> @@ -1394,8 +1388,6 @@ void page_remove_rmap(struct page *page,
> return;
> }
>
> - lock_page_memcg(page);
> -
> /* Is page being unmapped by PTE? Is this its last map to be removed? */
> if (likely(!compound)) {
> last = atomic_add_negative(-1, &page->_mapcount);
> @@ -1451,8 +1443,6 @@ void page_remove_rmap(struct page *page,
> * and remember that it's only reliable while mapped.
> */
>
> - unlock_page_memcg(page);
> -
> munlock_vma_page(page, vma, compound);
> }
>
> --
> 2.38.1
>
>
On Tue, 6 Dec 2022, Johannes Weiner wrote:
> Charge moving mode in cgroup1 allows memory to follow tasks as they
> migrate between cgroups. This is, and always has been, a questionable
> thing to do - for several reasons.
>
> First, it's expensive. Pages need to be identified, locked and
> isolated from various MM operations, and reassigned, one by one.
>
> Second, it's unreliable. Once pages are charged to a cgroup, there
> isn't always a clear owner task anymore. Cache isn't moved at all, for
> example. Mapped memory is moved - but if trylocking or isolating a
> page fails, it's arbitrarily left behind. Frequent moving between
> domains may leave a task's memory scattered all over the place.
>
> Third, it isn't really needed. Launcher tasks can kick off workload
> tasks directly in their target cgroup. Using dedicated per-workload
> groups allows fine-grained policy adjustments - no need to move tasks
> and their physical pages between control domains. The feature was
> never forward-ported to cgroup2, and it hasn't been missed.
>
> Despite it being a niche usecase, the maintenance overhead of
> supporting it is enormous. Because pages are moved while they are live
> and subject to various MM operations, the synchronization rules are
> complicated. There are lock_page_memcg() in MM and FS code, which
> non-cgroup people don't understand. In some cases we've been able to
> shift code and cgroup API calls around such that we can rely on native
> locking as much as possible. But that's fragile, and sometimes we need
> to hold MM locks for longer than we otherwise would (pte lock e.g.).
>
> Mark the feature deprecated. Hopefully we can remove it soon.
>
> Signed-off-by: Johannes Weiner <[email protected]>
Acked-by: Hugh Dickins <[email protected]>
but I wonder if it would be helpful to mention move_charge_at_immigrate
in the deprecation message: maybe the first line should be
"Cgroup memory moving (move_charge_at_immigrate) is deprecated.\n"
> ---
> Documentation/admin-guide/cgroup-v1/memory.rst | 11 ++++++++++-
> mm/memcontrol.c | 4 ++++
> 2 files changed, 14 insertions(+), 1 deletion(-)
>
> diff --git a/Documentation/admin-guide/cgroup-v1/memory.rst b/Documentation/admin-guide/cgroup-v1/memory.rst
> index 60370f2c67b9..87d7877b98ec 100644
> --- a/Documentation/admin-guide/cgroup-v1/memory.rst
> +++ b/Documentation/admin-guide/cgroup-v1/memory.rst
> @@ -86,6 +86,8 @@ Brief summary of control files.
> memory.swappiness set/show swappiness parameter of vmscan
> (See sysctl's vm.swappiness)
> memory.move_charge_at_immigrate set/show controls of moving charges
> + This knob is deprecated and shouldn't be
> + used.
> memory.oom_control set/show oom controls.
> memory.numa_stat show the number of memory usage per numa
> node
> @@ -717,9 +719,16 @@ Soft limits can be setup by using the following commands (in this example we
> It is recommended to set the soft limit always below the hard limit,
> otherwise the hard limit will take precedence.
>
> -8. Move charges at task migration
> +8. Move charges at task migration (DEPRECATED!)
> =================================
>
> +THIS IS DEPRECATED!
> +
> +It's expensive and unreliable! It's better practice to launch workload
> +tasks directly from inside their target cgroup. Use dedicated workload
> +cgroups to allow fine-grained policy adjustments without having to
> +move physical pages between control domains.
> +
> Users can move charges associated with a task along with task migration, that
> is, uncharge task's pages from the old cgroup and charge them to the new cgroup.
> This feature is not supported in !CONFIG_MMU environments because of lack of
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index b696354c1b21..e650a38d9a90 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -3919,6 +3919,10 @@ static int mem_cgroup_move_charge_write(struct cgroup_subsys_state *css,
> {
> struct mem_cgroup *memcg = mem_cgroup_from_css(css);
>
> + pr_warn_once("Cgroup memory moving is deprecated. "
> + "Please report your usecase to [email protected] if you "
> + "depend on this functionality.\n");
> +
> if (val & ~MOVE_MASK)
> return -EINVAL;
>
> --
> 2.38.1
>
>
On Tue, Dec 06, 2022 at 05:58:14PM -0800, Hugh Dickins wrote:
> On Tue, 6 Dec 2022, Johannes Weiner wrote:
>
> > Charge moving mode in cgroup1 allows memory to follow tasks as they
> > migrate between cgroups. This is, and always has been, a questionable
> > thing to do - for several reasons.
> >
> > First, it's expensive. Pages need to be identified, locked and
> > isolated from various MM operations, and reassigned, one by one.
> >
> > Second, it's unreliable. Once pages are charged to a cgroup, there
> > isn't always a clear owner task anymore. Cache isn't moved at all, for
> > example. Mapped memory is moved - but if trylocking or isolating a
> > page fails, it's arbitrarily left behind. Frequent moving between
> > domains may leave a task's memory scattered all over the place.
> >
> > Third, it isn't really needed. Launcher tasks can kick off workload
> > tasks directly in their target cgroup. Using dedicated per-workload
> > groups allows fine-grained policy adjustments - no need to move tasks
> > and their physical pages between control domains. The feature was
> > never forward-ported to cgroup2, and it hasn't been missed.
> >
> > Despite it being a niche usecase, the maintenance overhead of
> > supporting it is enormous. Because pages are moved while they are live
> > and subject to various MM operations, the synchronization rules are
> > complicated. There are lock_page_memcg() in MM and FS code, which
> > non-cgroup people don't understand. In some cases we've been able to
> > shift code and cgroup API calls around such that we can rely on native
> > locking as much as possible. But that's fragile, and sometimes we need
> > to hold MM locks for longer than we otherwise would (pte lock e.g.).
> >
> > Mark the feature deprecated. Hopefully we can remove it soon.
> >
> > Signed-off-by: Johannes Weiner <[email protected]>
>
> Acked-by: Hugh Dickins <[email protected]>
Thanks
> but I wonder if it would be helpful to mention move_charge_at_immigrate
> in the deprecation message: maybe the first line should be
> "Cgroup memory moving (move_charge_at_immigrate) is deprecated.\n"
Fair enough! Here is the updated patch.
---
From 0e791e6ab8ba2f75dd4205684c06bcc7308d9867 Mon Sep 17 00:00:00 2001
From: Johannes Weiner <[email protected]>
Date: Mon, 5 Dec 2022 19:57:06 +0100
Subject: [PATCH] mm: memcontrol: deprecate charge moving
Charge moving mode in cgroup1 allows memory to follow tasks as they
migrate between cgroups. This is, and always has been, a questionable
thing to do - for several reasons.
First, it's expensive. Pages need to be identified, locked and
isolated from various MM operations, and reassigned, one by one.
Second, it's unreliable. Once pages are charged to a cgroup, there
isn't always a clear owner task anymore. Cache isn't moved at all, for
example. Mapped memory is moved - but if trylocking or isolating a
page fails, it's arbitrarily left behind. Frequent moving between
domains may leave a task's memory scattered all over the place.
Third, it isn't really needed. Launcher tasks can kick off workload
tasks directly in their target cgroup. Using dedicated per-workload
groups allows fine-grained policy adjustments - no need to move tasks
and their physical pages between control domains. The feature was
never forward-ported to cgroup2, and it hasn't been missed.
Despite it being a niche usecase, the maintenance overhead of
supporting it is enormous. Because pages are moved while they are live
and subject to various MM operations, the synchronization rules are
complicated. There are lock_page_memcg() calls in MM and FS code that
non-cgroup people don't understand. In some cases we've been able to
shift code and cgroup API calls around such that we can rely on native
locking as much as possible. But that's fragile, and sometimes we need
to hold MM locks for longer than we otherwise would (the pte lock, for
example).
Mark the feature deprecated. Hopefully we can remove it soon.
Signed-off-by: Johannes Weiner <[email protected]>
Acked-by: Shakeel Butt <[email protected]>
Acked-by: Hugh Dickins <[email protected]>
Cc: [email protected]
---
Documentation/admin-guide/cgroup-v1/memory.rst | 11 ++++++++++-
mm/memcontrol.c | 4 ++++
2 files changed, 14 insertions(+), 1 deletion(-)
diff --git a/Documentation/admin-guide/cgroup-v1/memory.rst b/Documentation/admin-guide/cgroup-v1/memory.rst
index 60370f2c67b9..87d7877b98ec 100644
--- a/Documentation/admin-guide/cgroup-v1/memory.rst
+++ b/Documentation/admin-guide/cgroup-v1/memory.rst
@@ -86,6 +86,8 @@ Brief summary of control files.
memory.swappiness set/show swappiness parameter of vmscan
(See sysctl's vm.swappiness)
memory.move_charge_at_immigrate set/show controls of moving charges
+ This knob is deprecated and shouldn't be
+ used.
memory.oom_control set/show oom controls.
memory.numa_stat show the number of memory usage per numa
node
@@ -717,9 +719,16 @@ Soft limits can be setup by using the following commands (in this example we
It is recommended to set the soft limit always below the hard limit,
otherwise the hard limit will take precedence.
-8. Move charges at task migration
+8. Move charges at task migration (DEPRECATED!)
=================================
+THIS IS DEPRECATED!
+
+It's expensive and unreliable! It's better practice to launch workload
+tasks directly from inside their target cgroup. Use dedicated workload
+cgroups to allow fine-grained policy adjustments without having to
+move physical pages between control domains.
+
Users can move charges associated with a task along with task migration, that
is, uncharge task's pages from the old cgroup and charge them to the new cgroup.
This feature is not supported in !CONFIG_MMU environments because of lack of
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index b696354c1b21..9c9a42153b76 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -3919,6 +3919,10 @@ static int mem_cgroup_move_charge_write(struct cgroup_subsys_state *css,
{
struct mem_cgroup *memcg = mem_cgroup_from_css(css);
+ pr_warn_once("Cgroup memory moving (move_charge_at_immigrate) is deprecated. "
+ "Please report your usecase to [email protected] if you "
+ "depend on this functionality.\n");
+
if (val & ~MOVE_MASK)
return -EINVAL;
--
2.38.1
On Tue 06-12-22 18:13:38, Johannes Weiner wrote:
> New series based on the discussion in the previous thread around
> getting lock_page_memcg() out of rmap.
>
> I beat on this with concurrent high-frequency moving of tasks that
> partially share a swapped out shmem file. I didn't spot anything
> problematic. That said, it is quite subtle, and Hugh, I'd feel better
> if you could also subject it to your torture suite ;)
For the whole series:
Acked-by: Michal Hocko <[email protected]>
Thanks!
>
> Thanks!
>
> Against yesterday's mm-unstable.
>
> Documentation/admin-guide/cgroup-v1/memory.rst | 11 ++++-
> mm/memcontrol.c | 56 ++++++++++++++++++------
> mm/rmap.c | 26 ++++-------
> 3 files changed, 60 insertions(+), 33 deletions(-)
>
--
Michal Hocko
SUSE Labs
On Tue, 6 Dec 2022 16:03:54 -0800 Shakeel Butt <[email protected]> wrote:
> On Tue, Dec 6, 2022 at 9:14 AM Johannes Weiner <[email protected]> wrote:
> >
> > Charge moving mode in cgroup1 allows memory to follow tasks as they
> > migrate between cgroups. This is, and always has been, a questionable
> > thing to do - for several reasons.
> >
> > First, it's expensive. Pages need to be identified, locked and
> > isolated from various MM operations, and reassigned, one by one.
> >
> > Second, it's unreliable. Once pages are charged to a cgroup, there
> > isn't always a clear owner task anymore. Cache isn't moved at all, for
> > example. Mapped memory is moved - but if trylocking or isolating a
> > page fails, it's arbitrarily left behind. Frequent moving between
> > domains may leave a task's memory scattered all over the place.
> >
> > Third, it isn't really needed. Launcher tasks can kick off workload
> > tasks directly in their target cgroup. Using dedicated per-workload
> > groups allows fine-grained policy adjustments - no need to move tasks
> > and their physical pages between control domains. The feature was
> > never forward-ported to cgroup2, and it hasn't been missed.
> >
> > Despite it being a niche usecase, the maintenance overhead of
> > supporting it is enormous. Because pages are moved while they are live
> > and subject to various MM operations, the synchronization rules are
> > complicated. There are lock_page_memcg() in MM and FS code, which
> > non-cgroup people don't understand. In some cases we've been able to
> > shift code and cgroup API calls around such that we can rely on native
> > locking as much as possible. But that's fragile, and sometimes we need
> > to hold MM locks for longer than we otherwise would (pte lock e.g.).
> >
> > Mark the feature deprecated. Hopefully we can remove it soon.
> >
> > Signed-off-by: Johannes Weiner <[email protected]>
>
> Acked-by: Shakeel Butt <[email protected]>
>
> I would request this patch to be backported to stable kernels as well
> for early warnings to users which update to newer kernels very late.
Sounds reasonable, but the changelog should have a few words in it
explaining why we're requesting the backport. I guess I can type those
in.
We're at -rc8 and I'm not planning on merging these up until after
6.2-rc1 is out. Please feel free to argue with me on that score.
On Wed, Dec 7, 2022 at 1:51 PM Andrew Morton <[email protected]> wrote:
>
> On Tue, 6 Dec 2022 16:03:54 -0800 Shakeel Butt <[email protected]> wrote:
>
> > On Tue, Dec 6, 2022 at 9:14 AM Johannes Weiner <[email protected]> wrote:
> > >
> > > Charge moving mode in cgroup1 allows memory to follow tasks as they
> > > migrate between cgroups. This is, and always has been, a questionable
> > > thing to do - for several reasons.
> > >
> > > First, it's expensive. Pages need to be identified, locked and
> > > isolated from various MM operations, and reassigned, one by one.
> > >
> > > Second, it's unreliable. Once pages are charged to a cgroup, there
> > > isn't always a clear owner task anymore. Cache isn't moved at all, for
> > > example. Mapped memory is moved - but if trylocking or isolating a
> > > page fails, it's arbitrarily left behind. Frequent moving between
> > > domains may leave a task's memory scattered all over the place.
> > >
> > > Third, it isn't really needed. Launcher tasks can kick off workload
> > > tasks directly in their target cgroup. Using dedicated per-workload
> > > groups allows fine-grained policy adjustments - no need to move tasks
> > > and their physical pages between control domains. The feature was
> > > never forward-ported to cgroup2, and it hasn't been missed.
> > >
> > > Despite it being a niche usecase, the maintenance overhead of
> > > supporting it is enormous. Because pages are moved while they are live
> > > and subject to various MM operations, the synchronization rules are
> > > complicated. There are lock_page_memcg() in MM and FS code, which
> > > non-cgroup people don't understand. In some cases we've been able to
> > > shift code and cgroup API calls around such that we can rely on native
> > > locking as much as possible. But that's fragile, and sometimes we need
> > > to hold MM locks for longer than we otherwise would (pte lock e.g.).
> > >
> > > Mark the feature deprecated. Hopefully we can remove it soon.
> > >
> > > Signed-off-by: Johannes Weiner <[email protected]>
> >
> > Acked-by: Shakeel Butt <[email protected]>
> >
> > I would request this patch to be backported to stable kernels as well
> > for early warnings to users which update to newer kernels very late.
>
> Sounds reasonable, but the changelog should have a few words in it
> explaining why we're requesting the backport. I guess I can type those
> in.
Thanks a lot.
>
> We're at -rc8 and I'm not planning on merging these up until after
> 6.2-rc1 is out. Please feel free to argue with me on that score.
No, I totally agree with you. There is no urgency in merging these,
and a couple of weeks' delay is totally fine.
On Tue, Dec 6, 2022 at 9:14 AM Johannes Weiner <[email protected]> wrote:
>
> The previous patch made sure charge moving only touches pages for
> which page_mapped() is stable. lock_page_memcg() is no longer needed.
>
> Signed-off-by: Johannes Weiner <[email protected]>
Acked-by: Shakeel Butt <[email protected]>