The kernel currently doesn't provide any way to read the overall
system's recorded peak memory usage. Only each slice's own recorded
peak, excluding the cgroup root, is exposed through its memory.peak
file.
Each slice may reach its peak memory usage at a different time, and that
value is recorded in its own memory.peak. The sum of every memory.peak
therefore does not give the system's recorded peak memory usage: the
largest total can occur at a point in time where no individual slice is
at its own peak.
time | slice1 | slice2 | sum
=======================================
t1 | 50 | 200 | 250
---------------------------------------
t2 | 150 | 150 | 300
---------------------------------------
t3 | 180 | 20 | 200
---------------------------------------
t4 | 80 | 20 | 100
The memory.peak value of slice1 is 180 and that of slice2 is 200. Only
this per-slice information is provided through each memory.peak; the
overall system's peak memory usage is not. The sum of the two values is
380, but that doesn't represent the real peak memory usage of the
overall system. The peak we actually want is the 300 reached at t2,
even though neither slice records its individual maximum there.
Therefore a proper way to expose the system's overall recorded peak
memory usage needs to be provided.
Hence, expose memory.peak in the cgroup root in order to allow this.
Co-developed-by: Christopher Wong <[email protected]>
Signed-off-by: Christopher Wong <[email protected]>
Signed-off-by: Matthew Chae <[email protected]>
---
mm/memcontrol.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 73afff8062f9..974fc044a7e7 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -6646,7 +6646,6 @@ static struct cftype memory_files[] = {
},
{
.name = "peak",
- .flags = CFTYPE_NOT_ON_ROOT,
.read_u64 = memory_peak_read,
},
{
--
2.20.1
On Tue 21-02-23 15:34:20, Matthew Chae wrote:
> The kernel currently doesn't provide any way to read the overall
> system's recorded peak memory usage. Only each slice's own recorded
> peak, excluding the cgroup root, is exposed through its memory.peak
> file.
>
> Each slice may reach its peak memory usage at a different time, and that
> value is recorded in its own memory.peak. The sum of every memory.peak
> therefore does not give the system's recorded peak memory usage: the
> largest total can occur at a point in time where no individual slice is
> at its own peak.
>
> time | slice1 | slice2 | sum
> =======================================
> t1 | 50 | 200 | 250
> ---------------------------------------
> t2 | 150 | 150 | 300
> ---------------------------------------
> t3 | 180 | 20 | 200
> ---------------------------------------
> t4 | 80 | 20 | 100
>
> The memory.peak value of slice1 is 180 and that of slice2 is 200. Only
> this per-slice information is provided through each memory.peak; the
> overall system's peak memory usage is not. The sum of the two values is
> 380, but that doesn't represent the real peak memory usage of the
> overall system. The peak we actually want is the 300 reached at t2,
> even though neither slice records its individual maximum there.
> Therefore a proper way to expose the system's overall recorded peak
> memory usage needs to be provided.
The problem I can see is that the root's peak value doesn't really
represent the system peak memory usage because it only reflects memcg
accounted memory. So there is plenty of memory consumption which is not
covered. On top of that a lot of memory contributed to the root memcg is
not accounted at all (see try_charge and its callers) so the cumulative
hierarchical value is incomplete and I believe misleading as well.
--
Michal Hocko
SUSE Labs
Hello Matthew.
On Tue, Feb 21, 2023 at 03:34:20PM +0100, Matthew Chae <[email protected]> wrote:
> The kernel currently doesn't provide any way to read the overall
> system's recorded peak memory usage. Only each slice's own recorded
> peak, excluding the cgroup root, is exposed through its memory.peak
> file.
The memory.peak value is useful as a calibration insight when you want to
configure a memcg limit.
But there is no global (memcg) limit on memory. So what would this
(not clearly) defined value be good for? Or better than userspace
sampling of a chosen available metric?
Thanks,
Michal
On Thu, Feb 23, 2023 at 04:22:33PM +0000, Matthew Chae wrote:
> Hi Michal,
>
> First off, thank you for sharing your opinion.
> I'd like to monitor the recorded peak memory usage of the overall system, or at least of cgroup-accounted memory, through memory.peak.
> But it looks like this is not relevant to what I wanted.
> It would be good to have some proper way of checking the system's recorded peak memory usage.
I guess you might want to do the opposite: instead of tracking the peak usage,
you can record the bottom of available free memory.
Thanks!
On Thu, Feb 23, 2023 at 07:00:57PM +0000, Matthew Chae wrote:
> Hi Roman,
>
> I'd like to get the peak memory usage recorded over the whole uptime, rather than at a certain point in time.
> Plus, I expect that a systematic, in-kernel approach might perform better than userspace sampling.
I'm not necessarily saying to do this in userspace; you can try to add a
new system-wide counter (a new /proc/vmstat entry). Obviously, it might
be easier to do this in userspace.
My point is to do it at the system level rather than the cgroup level,
and to record the bottom of free memory rather than the peak of used
memory.
> If I understand correctly, recording the bottom of available free memory might not be helpful for this.
> Am I missing something?
Why?
On Thu 23-02-23 19:00:57, Matthew Chae wrote:
> Hi Roman,
>
> I'd like to get the peak memory usage recorded over the whole uptime, rather than at a certain point in time.
Sampling /proc/vmstat should have minimal overhead and you will get not
only a single value but also a breakdown into broad category users (LRU,
slab, page tables etc.). Unfortunately this doesn't cover all users
(e.g. direct users of the page allocator are not accounted to any
specific counter) but it should give you a reasonable idea of how memory
is utilized. The specific metrics really depend on what you are
interested in.
Another approach that might give you a different angle on memory
consumption is to watch the PSI metrics. These will not tell you the
peak memory usage, but they give you a useful cost model for it. Being
low on free memory is not in itself a bad thing: you are paying for the
whole amount of memory, so it would be rather sub-optimal not to use all
of it, right? If the memory can be reclaimed easily (e.g. by reclaiming
idle caches) then the overhead of high memory utilization should be
reasonably low, so the overall price of the reclaim is worth it. On the
other hand, an over-utilized system with a working set larger than the
available memory would spend a lot of time reclaiming, so performance
would drop.
All that being said the primary question is what is your usecase.
--
Michal Hocko
SUSE Labs
On Fri 24-02-23 15:18:49, Matthew Chae wrote:
> Hi Michal
>
> Thank you for helping me gain full insight.
> It looks like there is no proper way to get the recorded peak memory
> usage without adding some overhead to the system for all users. But I
> fully understand what you kindly explained. Basically, having little
> free memory left doesn't mean the system is in a bad situation, so
> checking the peak memory doesn't mean a lot and is not necessary.
You might find https://www.pdl.cmu.edu/ftp/NVM/tmo_asplos22.pdf
interesting and helpful.
--
Michal Hocko
SUSE Labs