ChangeLog
Since v1
- Dropped "[5/5] add NR_ANON_PAGES to OOM log" patch
- Instead, introduce "[5/5] add shmem vmstat" patch
- Fixed unit bug (Thanks Minchan)
- Separated isolated vmstat into two fields (Thanks Minchan and Wu)
- Fixed isolated page and lumpy reclaim issue (Thanks Minchan)
- Rewrote some patch description (Thanks Christoph)
The current OOM log doesn't provide sufficient memory usage information, which
causes confusion among the LKML MM developers.
This patch series adds some memory usage information to the OOM log.
Subject: [PATCH] add per-zone statistics to show_free_areas()
show_free_areas() displays only a limited amount of zone counters. This
patch includes additional counters in the display to allow easier
debugging. This may be especially useful if an OOM is due to running out
of DMA memory.
Signed-off-by: KOSAKI Motohiro <[email protected]>
Reviewed-by: Christoph Lameter <[email protected]>
Acked-by: Wu Fengguang <[email protected]>
---
mm/page_alloc.c | 20 ++++++++++++++++++++
1 file changed, 20 insertions(+)
Index: b/mm/page_alloc.c
===================================================================
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2151,6 +2151,16 @@ void show_free_areas(void)
" inactive_file:%lukB"
" unevictable:%lukB"
" present:%lukB"
+ " mlocked:%lukB"
+ " dirty:%lukB"
+ " writeback:%lukB"
+ " mapped:%lukB"
+ " slab_reclaimable:%lukB"
+ " slab_unreclaimable:%lukB"
+ " pagetables:%lukB"
+ " unstable:%lukB"
+ " bounce:%lukB"
+ " writeback_tmp:%lukB"
" pages_scanned:%lu"
" all_unreclaimable? %s"
"\n",
@@ -2165,6 +2175,16 @@ void show_free_areas(void)
K(zone_page_state(zone, NR_INACTIVE_FILE)),
K(zone_page_state(zone, NR_UNEVICTABLE)),
K(zone->present_pages),
+ K(zone_page_state(zone, NR_MLOCK)),
+ K(zone_page_state(zone, NR_FILE_DIRTY)),
+ K(zone_page_state(zone, NR_WRITEBACK)),
+ K(zone_page_state(zone, NR_FILE_MAPPED)),
+ K(zone_page_state(zone, NR_SLAB_RECLAIMABLE)),
+ K(zone_page_state(zone, NR_SLAB_UNRECLAIMABLE)),
+ K(zone_page_state(zone, NR_PAGETABLE)),
+ K(zone_page_state(zone, NR_UNSTABLE_NFS)),
+ K(zone_page_state(zone, NR_BOUNCE)),
+ K(zone_page_state(zone, NR_WRITEBACK_TEMP)),
zone->pages_scanned,
(zone_is_all_unreclaimable(zone) ? "yes" : "no")
);
ChangeLog
Since v2
- Changed display order; the "buffer" field is now displayed right after "unstable"
Since v1
- Fixed showing the number with kilobyte unit issue
================
Subject: [PATCH] add buffer cache information to show_free_areas()
When an administrator analyzes a memory shortage from the OOM log, they
often need to know the remaining number of cache-like pages.
Therefore show_free_areas() should display not only the page cache but
also the buffer cache.
Signed-off-by: KOSAKI Motohiro <[email protected]>
Acked-by: Wu Fengguang <[email protected]>
---
mm/page_alloc.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
Index: b/mm/page_alloc.c
===================================================================
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2118,7 +2118,7 @@ void show_free_areas(void)
printk("Active_anon:%lu active_file:%lu inactive_anon:%lu\n"
" inactive_file:%lu"
" unevictable:%lu"
- " dirty:%lu writeback:%lu unstable:%lu\n"
+ " dirty:%lu writeback:%lu unstable:%lu buffer:%lu\n"
" free:%lu slab_reclaimable:%lu slab_unreclaimable:%lu\n"
" mapped:%lu pagetables:%lu bounce:%lu\n",
global_page_state(NR_ACTIVE_ANON),
@@ -2129,6 +2129,7 @@ void show_free_areas(void)
global_page_state(NR_FILE_DIRTY),
global_page_state(NR_WRITEBACK),
global_page_state(NR_UNSTABLE_NFS),
+ nr_blockdev_pages(),
global_page_state(NR_FREE_PAGES),
global_page_state(NR_SLAB_RECLAIMABLE),
global_page_state(NR_SLAB_UNRECLAIMABLE),
Subject: [PATCH] Show kernel stack usage to /proc/meminfo and OOM log
The amount of memory allocated to kernel stacks can become significant and
cause OOM conditions. However, we do not display the amount of memory
consumed by stacks.
Add code to display the amount of memory used for stacks in /proc/meminfo.
Signed-off-by: KOSAKI Motohiro <[email protected]>
Reviewed-by: <[email protected]>
---
drivers/base/node.c | 3 +++
fs/proc/meminfo.c | 2 ++
include/linux/mmzone.h | 3 ++-
kernel/fork.c | 11 +++++++++++
mm/page_alloc.c | 3 +++
mm/vmstat.c | 1 +
6 files changed, 22 insertions(+), 1 deletion(-)
Index: b/fs/proc/meminfo.c
===================================================================
--- a/fs/proc/meminfo.c
+++ b/fs/proc/meminfo.c
@@ -84,6 +84,7 @@ static int meminfo_proc_show(struct seq_
"Slab: %8lu kB\n"
"SReclaimable: %8lu kB\n"
"SUnreclaim: %8lu kB\n"
+ "KernelStack: %8lu kB\n"
"PageTables: %8lu kB\n"
#ifdef CONFIG_QUICKLIST
"Quicklists: %8lu kB\n"
@@ -128,6 +129,7 @@ static int meminfo_proc_show(struct seq_
global_page_state(NR_SLAB_UNRECLAIMABLE)),
K(global_page_state(NR_SLAB_RECLAIMABLE)),
K(global_page_state(NR_SLAB_UNRECLAIMABLE)),
+ global_page_state(NR_KERNEL_STACK) * THREAD_SIZE / 1024,
K(global_page_state(NR_PAGETABLE)),
#ifdef CONFIG_QUICKLIST
K(quicklist_total_size()),
Index: b/include/linux/mmzone.h
===================================================================
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -94,10 +94,11 @@ enum zone_stat_item {
NR_SLAB_RECLAIMABLE,
NR_SLAB_UNRECLAIMABLE,
NR_PAGETABLE, /* used for pagetables */
+ NR_KERNEL_STACK,
+ /* Second 128 byte cacheline */
NR_UNSTABLE_NFS, /* NFS unstable pages */
NR_BOUNCE,
NR_VMSCAN_WRITE,
- /* Second 128 byte cacheline */
NR_WRITEBACK_TEMP, /* Writeback using temporary buffers */
#ifdef CONFIG_NUMA
NUMA_HIT, /* allocated in intended node */
Index: b/kernel/fork.c
===================================================================
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -137,9 +137,17 @@ struct kmem_cache *vm_area_cachep;
/* SLAB cache for mm_struct structures (tsk->mm) */
static struct kmem_cache *mm_cachep;
+static void account_kernel_stack(struct thread_info *ti, int account)
+{
+ struct zone *zone = page_zone(virt_to_page(ti));
+
+ mod_zone_page_state(zone, NR_KERNEL_STACK, account);
+}
+
void free_task(struct task_struct *tsk)
{
prop_local_destroy_single(&tsk->dirties);
+ account_kernel_stack(tsk->stack, -1);
free_thread_info(tsk->stack);
rt_mutex_debug_task_free(tsk);
ftrace_graph_exit_task(tsk);
@@ -255,6 +263,9 @@ static struct task_struct *dup_task_stru
tsk->btrace_seq = 0;
#endif
tsk->splice_pipe = NULL;
+
+ account_kernel_stack(ti, 1);
+
return tsk;
out:
Index: b/mm/page_alloc.c
===================================================================
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2158,6 +2158,7 @@ void show_free_areas(void)
" mapped:%lukB"
" slab_reclaimable:%lukB"
" slab_unreclaimable:%lukB"
+ " kernel_stack:%lukB"
" pagetables:%lukB"
" unstable:%lukB"
" bounce:%lukB"
@@ -2182,6 +2183,8 @@ void show_free_areas(void)
K(zone_page_state(zone, NR_FILE_MAPPED)),
K(zone_page_state(zone, NR_SLAB_RECLAIMABLE)),
K(zone_page_state(zone, NR_SLAB_UNRECLAIMABLE)),
+ zone_page_state(zone, NR_KERNEL_STACK) *
+ THREAD_SIZE / 1024,
K(zone_page_state(zone, NR_PAGETABLE)),
K(zone_page_state(zone, NR_UNSTABLE_NFS)),
K(zone_page_state(zone, NR_BOUNCE)),
Index: b/mm/vmstat.c
===================================================================
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -639,6 +639,7 @@ static const char * const vmstat_text[]
"nr_slab_reclaimable",
"nr_slab_unreclaimable",
"nr_page_table_pages",
+ "nr_kernel_stack",
"nr_unstable",
"nr_bounce",
"nr_vmscan_write",
Index: b/drivers/base/node.c
===================================================================
--- a/drivers/base/node.c
+++ b/drivers/base/node.c
@@ -85,6 +85,7 @@ static ssize_t node_read_meminfo(struct
"Node %d FilePages: %8lu kB\n"
"Node %d Mapped: %8lu kB\n"
"Node %d AnonPages: %8lu kB\n"
+ "Node %d KernelStack: %8lu kB\n"
"Node %d PageTables: %8lu kB\n"
"Node %d NFS_Unstable: %8lu kB\n"
"Node %d Bounce: %8lu kB\n"
@@ -116,6 +117,8 @@ static ssize_t node_read_meminfo(struct
nid, K(node_page_state(nid, NR_FILE_PAGES)),
nid, K(node_page_state(nid, NR_FILE_MAPPED)),
nid, K(node_page_state(nid, NR_ANON_PAGES)),
+ nid, node_page_state(nid, NR_KERNEL_STACK) *
+ THREAD_SIZE / 1024,
nid, K(node_page_state(nid, NR_PAGETABLE)),
nid, K(node_page_state(nid, NR_UNSTABLE_NFS)),
nid, K(node_page_state(nid, NR_BOUNCE)),
ChangeLog
Since v4
- Changed the display order in show_free_areas() (as Wu suggested)
Since v3
- Fixed a page misaccounting bug when lumpy reclaim occurs
Since v2
- Separated IsolateLRU field to Isolated(anon) and Isolated(file)
Since v1
- Renamed IsolatePages to IsolatedLRU
==================================
Subject: [PATCH] add isolate pages vmstat
If the system runs many threads or processes, concurrent reclaim can
isolate a large number of pages.
Unfortunately, the current /proc/meminfo and OOM log can't show it.
This patch provides a way of showing this information.
How to reproduce
-----------------------
% ./hackbench 140 process 1000
=> causes OOM
active_anon:146 inactive_anon:0 isolated_anon:49245
active_file:41 inactive_file:0 isolated_file:113
unevictable:0
dirty:0 writeback:0 buffer:49 unstable:0
free:184 slab_reclaimable:276 slab_unreclaimable:5492
mapped:87 pagetables:28239 bounce:0
Signed-off-by: KOSAKI Motohiro <[email protected]>
Acked-by: Rik van Riel <[email protected]>
Acked-by: Wu Fengguang <[email protected]>
---
drivers/base/node.c | 4 ++++
fs/proc/meminfo.c | 4 ++++
include/linux/mmzone.h | 2 ++
mm/page_alloc.c | 14 ++++++++++----
mm/vmscan.c | 13 +++++++++++++
mm/vmstat.c | 3 ++-
6 files changed, 35 insertions(+), 5 deletions(-)
Index: b/fs/proc/meminfo.c
===================================================================
--- a/fs/proc/meminfo.c
+++ b/fs/proc/meminfo.c
@@ -65,6 +65,8 @@ static int meminfo_proc_show(struct seq_
"Active(file): %8lu kB\n"
"Inactive(file): %8lu kB\n"
"Unevictable: %8lu kB\n"
+ "Isolated(anon): %8lu kB\n"
+ "Isolated(file): %8lu kB\n"
"Mlocked: %8lu kB\n"
#ifdef CONFIG_HIGHMEM
"HighTotal: %8lu kB\n"
@@ -109,6 +111,8 @@ static int meminfo_proc_show(struct seq_
K(pages[LRU_ACTIVE_FILE]),
K(pages[LRU_INACTIVE_FILE]),
K(pages[LRU_UNEVICTABLE]),
+ K(global_page_state(NR_ISOLATED_ANON)),
+ K(global_page_state(NR_ISOLATED_FILE)),
K(global_page_state(NR_MLOCK)),
#ifdef CONFIG_HIGHMEM
K(i.totalhigh),
Index: b/include/linux/mmzone.h
===================================================================
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -100,6 +100,8 @@ enum zone_stat_item {
NR_BOUNCE,
NR_VMSCAN_WRITE,
NR_WRITEBACK_TEMP, /* Writeback using temporary buffers */
+ NR_ISOLATED_ANON, /* Temporary isolated pages from anon lru */
+ NR_ISOLATED_FILE, /* Temporary isolated pages from file lru */
#ifdef CONFIG_NUMA
NUMA_HIT, /* allocated in intended node */
NUMA_MISS, /* allocated in non intended node */
Index: b/mm/page_alloc.c
===================================================================
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2115,16 +2115,18 @@ void show_free_areas(void)
}
}
- printk("Active_anon:%lu active_file:%lu inactive_anon:%lu\n"
- " inactive_file:%lu"
- " unevictable:%lu"
+ printk("active_anon:%lu inactive_anon:%lu isolated_anon:%lu\n"
+ " active_file:%lu inactive_file:%lu isolated_file:%lu\n"
+ " unevictable:%lu\n"
" dirty:%lu writeback:%lu unstable:%lu buffer:%lu\n"
" free:%lu slab_reclaimable:%lu slab_unreclaimable:%lu\n"
" mapped:%lu pagetables:%lu bounce:%lu\n",
global_page_state(NR_ACTIVE_ANON),
- global_page_state(NR_ACTIVE_FILE),
global_page_state(NR_INACTIVE_ANON),
+ global_page_state(NR_ISOLATED_ANON),
+ global_page_state(NR_ACTIVE_FILE),
global_page_state(NR_INACTIVE_FILE),
+ global_page_state(NR_ISOLATED_FILE),
global_page_state(NR_UNEVICTABLE),
global_page_state(NR_FILE_DIRTY),
global_page_state(NR_WRITEBACK),
@@ -2151,6 +2153,8 @@ void show_free_areas(void)
" active_file:%lukB"
" inactive_file:%lukB"
" unevictable:%lukB"
+ " isolated(anon):%lukB"
+ " isolated(file):%lukB"
" present:%lukB"
" mlocked:%lukB"
" dirty:%lukB"
@@ -2176,6 +2180,8 @@ void show_free_areas(void)
K(zone_page_state(zone, NR_ACTIVE_FILE)),
K(zone_page_state(zone, NR_INACTIVE_FILE)),
K(zone_page_state(zone, NR_UNEVICTABLE)),
+ K(zone_page_state(zone, NR_ISOLATED_ANON)),
+ K(zone_page_state(zone, NR_ISOLATED_FILE)),
K(zone->present_pages),
K(zone_page_state(zone, NR_MLOCK)),
K(zone_page_state(zone, NR_FILE_DIRTY)),
Index: b/mm/vmscan.c
===================================================================
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1067,6 +1067,8 @@ static unsigned long shrink_inactive_lis
unsigned long nr_active;
unsigned int count[NR_LRU_LISTS] = { 0, };
int mode = lumpy_reclaim ? ISOLATE_BOTH : ISOLATE_INACTIVE;
+ unsigned long nr_anon;
+ unsigned long nr_file;
nr_taken = sc->isolate_pages(sc->swap_cluster_max,
&page_list, &nr_scan, sc->order, mode,
@@ -1083,6 +1085,12 @@ static unsigned long shrink_inactive_lis
__mod_zone_page_state(zone, NR_INACTIVE_ANON,
-count[LRU_INACTIVE_ANON]);
+ nr_anon = count[LRU_ACTIVE_ANON] + count[LRU_INACTIVE_ANON];
+ nr_file = count[LRU_ACTIVE_FILE] + count[LRU_INACTIVE_FILE];
+
+ __mod_zone_page_state(zone, NR_ISOLATED_ANON, nr_anon);
+ __mod_zone_page_state(zone, NR_ISOLATED_FILE, nr_file);
+
if (scanning_global_lru(sc))
zone->pages_scanned += nr_scan;
@@ -1131,6 +1139,8 @@ static unsigned long shrink_inactive_lis
goto done;
spin_lock(&zone->lru_lock);
+ __mod_zone_page_state(zone, NR_ISOLATED_ANON, -nr_anon);
+ __mod_zone_page_state(zone, NR_ISOLATED_FILE, -nr_file);
/*
* Put back any unfreeable pages.
*/
@@ -1205,6 +1215,7 @@ static void move_active_pages_to_lru(str
unsigned long pgmoved = 0;
struct pagevec pvec;
struct page *page;
+ int file = is_file_lru(lru);
pagevec_init(&pvec, 1);
@@ -1232,6 +1243,7 @@ static void move_active_pages_to_lru(str
}
}
__mod_zone_page_state(zone, NR_LRU_BASE + lru, pgmoved);
+ __mod_zone_page_state(zone, NR_ISOLATED_ANON + file, -pgmoved);
if (!is_active_lru(lru))
__count_vm_events(PGDEACTIVATE, pgmoved);
}
@@ -1267,6 +1279,7 @@ static void shrink_active_list(unsigned
__mod_zone_page_state(zone, NR_ACTIVE_FILE, -pgmoved);
else
__mod_zone_page_state(zone, NR_ACTIVE_ANON, -pgmoved);
+ __mod_zone_page_state(zone, NR_ISOLATED_ANON + file, pgmoved);
spin_unlock_irq(&zone->lru_lock);
pgmoved = 0; /* count referenced (mapping) mapped pages */
Index: b/mm/vmstat.c
===================================================================
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -644,7 +644,8 @@ static const char * const vmstat_text[]
"nr_bounce",
"nr_vmscan_write",
"nr_writeback_temp",
-
+ "nr_isolated_anon",
+ "nr_isolated_file",
#ifdef CONFIG_NUMA
"numa_hit",
"numa_miss",
Index: b/drivers/base/node.c
===================================================================
--- a/drivers/base/node.c
+++ b/drivers/base/node.c
@@ -73,6 +73,8 @@ static ssize_t node_read_meminfo(struct
"Node %d Active(file): %8lu kB\n"
"Node %d Inactive(file): %8lu kB\n"
"Node %d Unevictable: %8lu kB\n"
+ "Node %d Isolated(anon): %8lu kB\n"
+ "Node %d Isolated(file): %8lu kB\n"
"Node %d Mlocked: %8lu kB\n"
#ifdef CONFIG_HIGHMEM
"Node %d HighTotal: %8lu kB\n"
@@ -105,6 +107,8 @@ static ssize_t node_read_meminfo(struct
nid, K(node_page_state(nid, NR_ACTIVE_FILE)),
nid, K(node_page_state(nid, NR_INACTIVE_FILE)),
nid, K(node_page_state(nid, NR_UNEVICTABLE)),
+ nid, K(node_page_state(nid, NR_ISOLATED_ANON)),
+ nid, K(node_page_state(nid, NR_ISOLATED_FILE)),
nid, K(node_page_state(nid, NR_MLOCK)),
#ifdef CONFIG_HIGHMEM
nid, K(i.totalhigh),
ChangeLog
Since v1
- Fixed misaccounting bug on page migration
========================
Subject: [PATCH] add shmem vmstat
Recently, we faced several OOM problems caused by a large GEM cache. In
general, a large amount of Shmem/Tmpfs pages potentially creates memory
shortage problems.
We often use the following calculation to determine the number of shmem pages:
shmem = NR_ACTIVE_ANON + NR_INACTIVE_ANON - NR_ANON_PAGES
but it is a wrong expression: it doesn't consider isolated pages and
mlocked pages.
This patch therefore adds explicit Shmem/Tmpfs vm-stat accounting.
Signed-off-by: KOSAKI Motohiro <[email protected]>
---
drivers/base/node.c | 2 ++
fs/proc/meminfo.c | 2 ++
include/linux/mmzone.h | 1 +
mm/filemap.c | 4 ++++
mm/migrate.c | 4 ++++
mm/page_alloc.c | 5 ++++-
mm/vmstat.c | 1 +
7 files changed, 18 insertions(+), 1 deletion(-)
Index: b/drivers/base/node.c
===================================================================
--- a/drivers/base/node.c
+++ b/drivers/base/node.c
@@ -87,6 +87,7 @@ static ssize_t node_read_meminfo(struct
"Node %d FilePages: %8lu kB\n"
"Node %d Mapped: %8lu kB\n"
"Node %d AnonPages: %8lu kB\n"
+ "Node %d Shmem: %8lu kB\n"
"Node %d KernelStack: %8lu kB\n"
"Node %d PageTables: %8lu kB\n"
"Node %d NFS_Unstable: %8lu kB\n"
@@ -121,6 +122,7 @@ static ssize_t node_read_meminfo(struct
nid, K(node_page_state(nid, NR_FILE_PAGES)),
nid, K(node_page_state(nid, NR_FILE_MAPPED)),
nid, K(node_page_state(nid, NR_ANON_PAGES)),
+ nid, K(node_page_state(nid, NR_SHMEM)),
nid, node_page_state(nid, NR_KERNEL_STACK) *
THREAD_SIZE / 1024,
nid, K(node_page_state(nid, NR_PAGETABLE)),
Index: b/fs/proc/meminfo.c
===================================================================
--- a/fs/proc/meminfo.c
+++ b/fs/proc/meminfo.c
@@ -83,6 +83,7 @@ static int meminfo_proc_show(struct seq_
"Writeback: %8lu kB\n"
"AnonPages: %8lu kB\n"
"Mapped: %8lu kB\n"
+ "Shmem: %8lu kB\n"
"Slab: %8lu kB\n"
"SReclaimable: %8lu kB\n"
"SUnreclaim: %8lu kB\n"
@@ -129,6 +130,7 @@ static int meminfo_proc_show(struct seq_
K(global_page_state(NR_WRITEBACK)),
K(global_page_state(NR_ANON_PAGES)),
K(global_page_state(NR_FILE_MAPPED)),
+ K(global_page_state(NR_SHMEM)),
K(global_page_state(NR_SLAB_RECLAIMABLE) +
global_page_state(NR_SLAB_UNRECLAIMABLE)),
K(global_page_state(NR_SLAB_RECLAIMABLE)),
Index: b/include/linux/mmzone.h
===================================================================
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -102,6 +102,7 @@ enum zone_stat_item {
NR_WRITEBACK_TEMP, /* Writeback using temporary buffers */
NR_ISOLATED_ANON, /* Temporary isolated pages from anon lru */
NR_ISOLATED_FILE, /* Temporary isolated pages from file lru */
+ NR_SHMEM, /* shmem pages (included tmpfs/GEM pages) */
#ifdef CONFIG_NUMA
NUMA_HIT, /* allocated in intended node */
NUMA_MISS, /* allocated in non intended node */
Index: b/mm/filemap.c
===================================================================
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -120,6 +120,8 @@ void __remove_from_page_cache(struct pag
page->mapping = NULL;
mapping->nrpages--;
__dec_zone_page_state(page, NR_FILE_PAGES);
+ if (PageSwapBacked(page))
+ __dec_zone_page_state(page, NR_SHMEM);
BUG_ON(page_mapped(page));
/*
@@ -476,6 +478,8 @@ int add_to_page_cache_locked(struct page
if (likely(!error)) {
mapping->nrpages++;
__inc_zone_page_state(page, NR_FILE_PAGES);
+ if (PageSwapBacked(page))
+ __inc_zone_page_state(page, NR_SHMEM);
spin_unlock_irq(&mapping->tree_lock);
} else {
page->mapping = NULL;
Index: b/mm/vmstat.c
===================================================================
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -646,6 +646,7 @@ static const char * const vmstat_text[]
"nr_writeback_temp",
"nr_isolated_anon",
"nr_isolated_file",
+ "nr_shmem",
#ifdef CONFIG_NUMA
"numa_hit",
"numa_miss",
Index: b/mm/page_alloc.c
===================================================================
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2120,7 +2120,7 @@ void show_free_areas(void)
" unevictable:%lu\n"
" dirty:%lu writeback:%lu unstable:%lu buffer:%lu\n"
" free:%lu slab_reclaimable:%lu slab_unreclaimable:%lu\n"
- " mapped:%lu pagetables:%lu bounce:%lu\n",
+ " mapped:%lu shmem:%lu pagetables:%lu bounce:%lu\n",
global_page_state(NR_ACTIVE_ANON),
global_page_state(NR_INACTIVE_ANON),
global_page_state(NR_ISOLATED_ANON),
@@ -2136,6 +2136,7 @@ void show_free_areas(void)
global_page_state(NR_SLAB_RECLAIMABLE),
global_page_state(NR_SLAB_UNRECLAIMABLE),
global_page_state(NR_FILE_MAPPED),
+ global_page_state(NR_SHMEM),
global_page_state(NR_PAGETABLE),
global_page_state(NR_BOUNCE));
@@ -2160,6 +2161,7 @@ void show_free_areas(void)
" dirty:%lukB"
" writeback:%lukB"
" mapped:%lukB"
+ " shmem:%lukB"
" slab_reclaimable:%lukB"
" slab_unreclaimable:%lukB"
" kernel_stack:%lukB"
@@ -2187,6 +2189,7 @@ void show_free_areas(void)
K(zone_page_state(zone, NR_FILE_DIRTY)),
K(zone_page_state(zone, NR_WRITEBACK)),
K(zone_page_state(zone, NR_FILE_MAPPED)),
+ K(zone_page_state(zone, NR_SHMEM)),
K(zone_page_state(zone, NR_SLAB_RECLAIMABLE)),
K(zone_page_state(zone, NR_SLAB_UNRECLAIMABLE)),
zone_page_state(zone, NR_KERNEL_STACK) *
Index: b/mm/migrate.c
===================================================================
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -312,7 +312,11 @@ static int migrate_page_move_mapping(str
*/
__dec_zone_page_state(page, NR_FILE_PAGES);
__inc_zone_page_state(newpage, NR_FILE_PAGES);
+ if (PageSwapBacked(page)) {
+ __dec_zone_page_state(page, NR_SHMEM);
+ __inc_zone_page_state(newpage, NR_SHMEM);
+ }
spin_unlock_irq(&mapping->tree_lock);
return 0;
On Thu, Jul 9, 2009 at 5:06 PM, KOSAKI Motohiro
<[email protected]> wrote:
> Subject: [PATCH] add per-zone statistics to show_free_areas()
>
> show_free_areas() displays only a limited amount of zone counters. This
> patch includes additional counters in the display to allow easier
> debugging. This may be especially useful if an OOM is due to running out
> of DMA memory.
>
>
> Signed-off-by: KOSAKI Motohiro <[email protected]>
> Reviewed-by: Christoph Lameter <[email protected]>
> Acked-by: Wu Fengguang <[email protected]>
Reviewed-by: Minchan Kim <[email protected]>
--
Kind regards,
Minchan Kim
On Thu, Jul 9, 2009 at 5:12 PM, KOSAKI Motohiro
<[email protected]> wrote:
> Subject: [PATCH] Show kernel stack usage to /proc/meminfo and OOM log
>
> The amount of memory allocated to kernel stacks can become significant and
> cause OOM conditions. However, we do not display the amount of memory
> consumed by stacks.
>
> Add code to display the amount of memory used for stacks in /proc/meminfo.
>
>
> Signed-off-by: KOSAKI Motohiro <[email protected]>
> Reviewed-by: <[email protected]>
Reviewed-by: Minchan Kim <[email protected]>
--
Kind regards,
Minchan Kim
On Thu, Jul 9, 2009 at 5:14 PM, KOSAKI Motohiro
<[email protected]> wrote:
> ChangeLog
> Since v4
> - Changed the display order in show_free_areas() (as Wu suggested)
> Since v3
> - Fixed a page misaccounting bug when lumpy reclaim occurs
> Since v2
> - Separated IsolateLRU field to Isolated(anon) and Isolated(file)
> Since v1
> - Renamed IsolatePages to IsolatedLRU
>
> ==================================
> Subject: [PATCH] add isolate pages vmstat
>
> If the system have plenty threads or processes, concurrent reclaim can
> isolate very much pages.
> Unfortunately, current /proc/meminfo and OOM log can't show it.
>
> This patch provide the way of showing this information.
>
>
> reproduce way
> -----------------------
> % ./hackbench 140 process 1000
> => causes OOM
>
> active_anon:146 inactive_anon:0 isolated_anon:49245
> active_file:41 inactive_file:0 isolated_file:113
> unevictable:0
> dirty:0 writeback:0 buffer:49 unstable:0
> free:184 slab_reclaimable:276 slab_unreclaimable:5492
> mapped:87 pagetables:28239 bounce:0
>
>
> Signed-off-by: KOSAKI Motohiro <[email protected]>
> Acked-by: Rik van Riel <[email protected]>
> Acked-by: Wu Fengguang <[email protected]>
Reviewed-by: Minchan Kim <[email protected]>
Kind regards,
Minchan Kim
Thanks for your effort :)
On Thu, Jul 9, 2009 at 5:05 PM, KOSAKI Motohiro
<[email protected]> wrote:
>
> ChangeLog
> Since v1
> - Dropped "[5/5] add NR_ANON_PAGES to OOM log" patch
> - Instead, introduce "[5/5] add shmem vmstat" patch
> - Fixed unit bug (Thanks Minchan)
> - Separated isolated vmstat into two fields (Thanks Minchan and Wu)
> - Fixed isolated page and lumpy reclaim issue (Thanks Minchan)
> - Rewrote some patch description (Thanks Christoph)
>
>
> The current OOM log doesn't provide sufficient memory usage information, which
> causes confusion among the LKML MM developers.
>
> This patch series adds some memory usage information to the OOM log.
>
>
>
>
--
Kind regards,
Minchan Kim
KOSAKI Motohiro wrote:
> Subject: [PATCH] add per-zone statistics to show_free_areas()
>
> show_free_areas() displays only a limited amount of zone counters. This
> patch includes additional counters in the display to allow easier
> debugging. This may be especially useful if an OOM is due to running out
> of DMA memory.
>
>
> Signed-off-by: KOSAKI Motohiro <[email protected]>
> Reviewed-by: Christoph Lameter <[email protected]>
> Acked-by: Wu Fengguang <[email protected]>
Reviewed-by: Rik van Riel <[email protected]>
--
All rights reversed.
KOSAKI Motohiro wrote:
> ChangeLog
> Since v2
> - Changed display order; the "buffer" field is now displayed right after "unstable"
>
> Since v1
> - Fixed showing the number with kilobyte unit issue
>
> ================
> Subject: [PATCH] add buffer cache information to show_free_areas()
>
> When administrator analysis memory shortage reason from OOM log, They
> often need to know rest number of cache like pages.
>
> Then, show_free_areas() shouldn't only display page cache, but also it
> should display buffer cache.
>
>
> Signed-off-by: KOSAKI Motohiro <[email protected]>
> Acked-by: Wu Fengguang <[email protected]>
Reviewed-by: Rik van Riel <[email protected]>
--
All rights reversed.
KOSAKI Motohiro wrote:
> Subject: [PATCH] Show kernel stack usage to /proc/meminfo and OOM log
>
> The amount of memory allocated to kernel stacks can become significant and
> cause OOM conditions. However, we do not display the amount of memory
> consumed by stacks.
>
> Add code to display the amount of memory used for stacks in /proc/meminfo.
>
>
> Signed-off-by: KOSAKI Motohiro <[email protected]>
> Reviewed-by: <[email protected]>
Reviewed-by: Rik van Riel <[email protected]>
--
All rights reversed.
KOSAKI Motohiro wrote:
> ChangeLog
> Since v1
> - Fixed misaccounting bug on page migration
>
> ========================
> Subject: [PATCH] add shmem vmstat
>
> Recently, We faced several OOM problem by plenty GEM cache. and generally,
> plenty Shmem/Tmpfs potentially makes memory shortage problem.
>
> We often use following calculation to know shmem pages,
> shmem = NR_ACTIVE_ANON + NR_INACTIVE_ANON - NR_ANON_PAGES
> but it is wrong expression. it doesn't consider isolated page and
> mlocked page.
>
> Then, This patch make explicit Shmem/Tmpfs vm-stat accounting.
>
>
> Signed-off-by: KOSAKI Motohiro <[email protected]>
Acked-by: Rik van Riel <[email protected]>
--
All rights reversed.
On Thu, 9 Jul 2009, KOSAKI Motohiro wrote:
> Subject: [PATCH] add buffer cache information to show_free_areas()
>
> When administrator analysis memory shortage reason from OOM log, They
> often need to know rest number of cache like pages.
Maybe:
"
It is often useful to know the statistics for all pages that are handled
like page cache pages when looking at OOM log output.
Therefore show_free_areas() should also display buffer cache statistics.
"
On Thu, 9 Jul 2009, KOSAKI Motohiro wrote:
> Subject: [PATCH] Show kernel stack usage to /proc/meminfo and OOM log
Subject: Show kernel stack usage in /proc/meminfo and OOM log output
On Thu, 9 Jul 2009, KOSAKI Motohiro wrote:
> Subject: [PATCH] add isolate pages vmstat
>
> If the system have plenty threads or processes, concurrent reclaim can
> isolate very much pages.
> Unfortunately, current /proc/meminfo and OOM log can't show it.
"
If the system is running a heavy load of processes then concurrent reclaim
can isolate a large number of pages from the LRU. /proc/meminfo and the
output generated for an OOM do not show how many pages were isolated.
"
> This patch provide the way of showing this information.
"
This patch shows the information about isolated pages.
"
Page migration can also isolate a large number of pages from the LRU. But
the new counters are not used there.
On Thu, 9 Jul 2009, KOSAKI Motohiro wrote:
> Recently, We faced several OOM problem by plenty GEM cache. and generally,
> plenty Shmem/Tmpfs potentially makes memory shortage problem.
"
Recently we encountered OOM problems due to memory use of the GEM cache.
Generally a large amount of Shmem/Tmpfs pages tends to create a memory
shortage problem.
"
> We often use following calculation to know shmem pages,
> shmem = NR_ACTIVE_ANON + NR_INACTIVE_ANON - NR_ANON_PAGES
> but it is wrong expression. it doesn't consider isolated page and
> mlocked page.
"
We often use the following calculation to determine the amount of shmem
pages:
shmem = NR_ACTIVE_ANON + NR_INACTIVE_ANON - NR_ANON_PAGES
however the expression does not consider isolated and mlocked pages.
"
> Then, This patch make explicit Shmem/Tmpfs vm-stat accounting.
"
This patch adds explicit accounting for pages used by shmem and tmpfs.
"
Reviewed-by: Christoph Lameter <[email protected]>
> On Thu, 9 Jul 2009, KOSAKI Motohiro wrote:
>
> > Subject: [PATCH] add buffer cache information to show_free_areas()
> >
> > When administrator analysis memory shortage reason from OOM log, They
> > often need to know rest number of cache like pages.
>
> Maybe:
>
> "
> It is often useful to know the statistics for all pages that are handled
> like page cache pages when looking at OOM log output.
>
> Therefore show_free_areas() should also display buffer cache statistics.
> "
Thanks, that's a good description. Will fix.
On Thu, Jul 09, 2009 at 04:18:01PM +0800, KOSAKI Motohiro wrote:
> ChangeLog
> Since v1
> - Fixed misaccounting bug on page migration
>
> ========================
> Subject: [PATCH] add shmem vmstat
>
> Recently, We faced several OOM problem by plenty GEM cache. and generally,
> plenty Shmem/Tmpfs potentially makes memory shortage problem.
>
> We often use following calculation to know shmem pages,
> shmem = NR_ACTIVE_ANON + NR_INACTIVE_ANON - NR_ANON_PAGES
> but it is wrong expression. it doesn't consider isolated page and
> mlocked page.
>
> Then, This patch make explicit Shmem/Tmpfs vm-stat accounting.
>
>
> Signed-off-by: KOSAKI Motohiro <[email protected]>
Acked-by: Wu Fengguang <[email protected]>
Thanks for the nice work!
> On Thu, 9 Jul 2009, KOSAKI Motohiro wrote:
>
> > Subject: [PATCH] add isolate pages vmstat
> >
> > If the system have plenty threads or processes, concurrent reclaim can
> > isolate very much pages.
> > Unfortunately, current /proc/meminfo and OOM log can't show it.
>
> "
> If the system is running a heavy load of processes then concurrent reclaim
> can isolate a large number of pages from the LRU. /proc/meminfo and the
> output generated for an OOM do not show how many pages were isolated.
> "
>
> > This patch provide the way of showing this information.
>
> "
> This patch shows the information about isolated pages.
> "
>
>
> Page migration can also isolate a large number of pages from the LRU. But
> the new counters are not used there.
Correct. Will fix.
Plus, the current reclaim logic depends on the system having enough pages on the LRU.
Maybe we need to limit not only the number of reclaimers but also the number of migrators.
I think we can use similar logic.
On Fri, 10 Jul 2009, KOSAKI Motohiro wrote:
> Plus, the current reclaim logic depends on the system having enough pages on the LRU.
> Maybe we need to limit not only the number of reclaimers but also the number of migrators.
> I think we can use similar logic.
I think your isolate pages counters can be used in both locations.
> On Fri, 10 Jul 2009, KOSAKI Motohiro wrote:
>
> > Plus, the current reclaim logic depends on the system having enough pages on the LRU.
> > Maybe we need to limit not only the number of reclaimers but also the number of migrators.
> > I think we can use similar logic.
>
> I think your isolate pages counters can be used in both locations.
>
I totally agree with this :)