Date: Mon, 18 Sep 2017 09:34:34 -0700
From: Johannes Weiner
To: Taras Kondratiuk
Cc: Michal Hocko, linux-mm@kvack.org, xe-linux-external@cisco.com, Ruslan Ruslichenko, linux-kernel@vger.kernel.org
Subject: Re: Detecting page cache trashing state
Message-ID: <20170918163434.GA11236@cmpxchg.org>
References: <150543458765.3781.10192373650821598320@takondra-t460s> <20170915143619.2ifgex2jxck2xt5u@dhcp22.suse.cz> <150549651001.4512.15084374619358055097@takondra-t460s>
In-Reply-To: <150549651001.4512.15084374619358055097@takondra-t460s>
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="xHFwDpU9dbj6ez1V"
Content-Disposition: inline
User-Agent: Mutt/1.8.3 (2017-05-23)

--xHFwDpU9dbj6ez1V
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline

Hi Taras,

On Fri, Sep 15, 2017 at 10:28:30AM -0700, Taras Kondratiuk wrote:
> Quoting Michal Hocko (2017-09-15 07:36:19)
> > On Thu 14-09-17 17:16:27, Taras Kondratiuk wrote:
> > > Has somebody faced a similar issue? How are you solving it?
> >
> > Yes, this has been a pain point for a _long_ time, and we still do not
> > have a good answer upstream. Johannes has been playing in this area [1].
> > The main problem is that our OOM detection logic is based on the ability
> > to reclaim memory in order to allocate new memory, and that remains
> > largely true for the page cache even while it is thrashing. So we do not
> > notice that basically all of the time is being spent refaulting memory
> > back and forth. We do have some refault stats for the page cache, but
> > they are not integrated into the OOM detection logic, because doing that
> > without triggering premature OOM killer invocations is a non-trivial
> > problem.
> >
> > [1] http://lkml.kernel.org/r/20170727153010.23347-1-hannes@cmpxchg.org
>
> Thanks Michal. memdelay looks promising. We will check it.

Great, I'm obviously interested in more users of it :)

Please find attached the latest version of the patch series, based on
v4.13. It needs a bit more refactoring in the scheduler bits before
resubmission, but it already contains a couple of fixes and improvements
over the first version I sent out.

Let me know if you need help rebasing to a different kernel version.

--xHFwDpU9dbj6ez1V
Content-Type: text/x-diff; charset=us-ascii
Content-Disposition: attachment; filename="0001-sched-loadavg-consolidate-LOAD_INT-LOAD_FRAC-macros.patch"

>From d5ffeb4d9d65fcff1b7e50dbde8264b4c32824a5 Mon Sep 17 00:00:00 2001
From: Johannes Weiner
Date: Wed, 14 Jun 2017 11:12:05 -0400
Subject: [PATCH 1/3] sched/loadavg: consolidate LOAD_INT, LOAD_FRAC macros

There are several identical definitions of those macros in places that
deal with fixed-point load averages. Provide one official version.
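
(Illustration added for clarity, not part of the patch: a minimal,
self-contained userspace sketch of how these fixed-point macros render a
load average for display. FSHIFT and FIXED_1 below mirror the kernel's
defaults from <linux/sched/loadavg.h>; the sample value is made up.)

#include <stdio.h>

/* Local copies of the kernel's fixed-point load-average helpers. */
#define FSHIFT          11                      /* bits of precision */
#define FIXED_1         (1 << FSHIFT)           /* 1.0 in fixed-point */
#define LOAD_INT(x)     ((x) >> FSHIFT)
#define LOAD_FRAC(x)    LOAD_INT(((x) & (FIXED_1 - 1)) * 100)

int main(void)
{
        /* Made-up avenrun sample: 0.41 in fixed-point, i.e. ~0.41 * 2048. */
        unsigned long load = 841;

        /* Prints "load avg 0.41" -- the same formatting /proc/loadavg uses. */
        printf("load avg %lu.%02lu\n", LOAD_INT(load), LOAD_FRAC(load));
        return 0;
}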
Signed-off-by: Johannes Weiner --- arch/powerpc/platforms/cell/spufs/sched.c | 3 --- arch/s390/appldata/appldata_os.c | 4 ---- drivers/cpuidle/governors/menu.c | 4 ---- fs/proc/loadavg.c | 3 --- include/linux/sched/loadavg.h | 3 +++ kernel/debug/kdb/kdb_main.c | 7 +------ 6 files changed, 4 insertions(+), 20 deletions(-) diff --git a/arch/powerpc/platforms/cell/spufs/sched.c b/arch/powerpc/platforms/cell/spufs/sched.c index 1fbb5da17dd2..de544070def3 100644 --- a/arch/powerpc/platforms/cell/spufs/sched.c +++ b/arch/powerpc/platforms/cell/spufs/sched.c @@ -1071,9 +1071,6 @@ void spuctx_switch_state(struct spu_context *ctx, } } -#define LOAD_INT(x) ((x) >> FSHIFT) -#define LOAD_FRAC(x) LOAD_INT(((x) & (FIXED_1-1)) * 100) - static int show_spu_loadavg(struct seq_file *s, void *private) { int a, b, c; diff --git a/arch/s390/appldata/appldata_os.c b/arch/s390/appldata/appldata_os.c index 45b3178200ab..a8aac17e1e82 100644 --- a/arch/s390/appldata/appldata_os.c +++ b/arch/s390/appldata/appldata_os.c @@ -24,10 +24,6 @@ #include "appldata.h" - -#define LOAD_INT(x) ((x) >> FSHIFT) -#define LOAD_FRAC(x) LOAD_INT(((x) & (FIXED_1-1)) * 100) - /* * OS data * diff --git a/drivers/cpuidle/governors/menu.c b/drivers/cpuidle/governors/menu.c index 61b64c2b2cb8..e215a2c10a61 100644 --- a/drivers/cpuidle/governors/menu.c +++ b/drivers/cpuidle/governors/menu.c @@ -132,10 +132,6 @@ struct menu_device { int interval_ptr; }; - -#define LOAD_INT(x) ((x) >> FSHIFT) -#define LOAD_FRAC(x) LOAD_INT(((x) & (FIXED_1-1)) * 100) - static inline int get_loadavg(unsigned long load) { return LOAD_INT(load) * 10 + LOAD_FRAC(load) / 10; diff --git a/fs/proc/loadavg.c b/fs/proc/loadavg.c index 983fce5c2418..111a25e4b088 100644 --- a/fs/proc/loadavg.c +++ b/fs/proc/loadavg.c @@ -9,9 +9,6 @@ #include #include -#define LOAD_INT(x) ((x) >> FSHIFT) -#define LOAD_FRAC(x) LOAD_INT(((x) & (FIXED_1-1)) * 100) - static int loadavg_proc_show(struct seq_file *m, void *v) { unsigned long avnrun[3]; diff --git a/include/linux/sched/loadavg.h b/include/linux/sched/loadavg.h index 4264bc6b2c27..745483bb5cca 100644 --- a/include/linux/sched/loadavg.h +++ b/include/linux/sched/loadavg.h @@ -26,6 +26,9 @@ extern void get_avenrun(unsigned long *loads, unsigned long offset, int shift); load += n*(FIXED_1-exp); \ load >>= FSHIFT; +#define LOAD_INT(x) ((x) >> FSHIFT) +#define LOAD_FRAC(x) LOAD_INT(((x) & (FIXED_1-1)) * 100) + extern void calc_global_load(unsigned long ticks); #endif /* _LINUX_SCHED_LOADAVG_H */ diff --git a/kernel/debug/kdb/kdb_main.c b/kernel/debug/kdb/kdb_main.c index c8146d53ca67..2dddd25ccd7a 100644 --- a/kernel/debug/kdb/kdb_main.c +++ b/kernel/debug/kdb/kdb_main.c @@ -2571,16 +2571,11 @@ static int kdb_summary(int argc, const char **argv) } kdb_printf("%02ld:%02ld\n", val.uptime/(60*60), (val.uptime/60)%60); - /* lifted from fs/proc/proc_misc.c::loadavg_read_proc() */ - -#define LOAD_INT(x) ((x) >> FSHIFT) -#define LOAD_FRAC(x) LOAD_INT(((x) & (FIXED_1-1)) * 100) kdb_printf("load avg %ld.%02ld %ld.%02ld %ld.%02ld\n", LOAD_INT(val.loads[0]), LOAD_FRAC(val.loads[0]), LOAD_INT(val.loads[1]), LOAD_FRAC(val.loads[1]), LOAD_INT(val.loads[2]), LOAD_FRAC(val.loads[2])); -#undef LOAD_INT -#undef LOAD_FRAC + /* Display in kilobytes */ #define K(x) ((x) << (PAGE_SHIFT - 10)) kdb_printf("\nMemTotal: %8lu kB\nMemFree: %8lu kB\n" -- 2.14.1 --xHFwDpU9dbj6ez1V Content-Type: text/x-diff; charset=us-ascii Content-Disposition: attachment; filename="0002-mm-workingset-tell-cache-transitions-from-workingset.patch" >From 
4ccc6444efbdcc30680eff6b8f345511c306f3d7 Mon Sep 17 00:00:00 2001 From: Johannes Weiner Date: Thu, 2 Mar 2017 09:58:03 -0500 Subject: [PATCH 2/3] mm: workingset: tell cache transitions from workingset thrashing Refaults happen during transitions between workingsets as well as in-place thrashing. Knowing the difference between the two has a range of applications, including measuring the impact of memory shortage on the system performance, as well as the ability to smarter balance pressure between the filesystem cache and the swap-backed workingset. During workingset transitions, inactive cache refaults and pushes out established active cache. When that active cache isn't stale, however, and also ends up refaulting, that's bonafide thrashing. Introduce a new page flag that tells on eviction whether the page has been active or not in its lifetime. This bit is then stored in the shadow entry, to classify refaults as transitioning or thrashing. Signed-off-by: Johannes Weiner --- include/linux/mmzone.h | 1 + include/linux/page-flags.h | 5 ++- include/linux/swap.h | 2 +- include/trace/events/mmflags.h | 1 + mm/filemap.c | 9 ++-- mm/huge_memory.c | 1 + mm/memcontrol.c | 2 + mm/migrate.c | 2 + mm/swap_state.c | 1 + mm/vmscan.c | 1 + mm/vmstat.c | 1 + mm/workingset.c | 96 +++++++++++++++++++++++++++--------------- 12 files changed, 79 insertions(+), 43 deletions(-) diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h index fc14b8b3f6ce..b8726b501166 100644 --- a/include/linux/mmzone.h +++ b/include/linux/mmzone.h @@ -156,6 +156,7 @@ enum node_stat_item { NR_ISOLATED_FILE, /* Temporary isolated pages from file lru */ WORKINGSET_REFAULT, WORKINGSET_ACTIVATE, + WORKINGSET_RESTORE, WORKINGSET_NODERECLAIM, NR_ANON_MAPPED, /* Mapped anonymous pages */ NR_FILE_MAPPED, /* pagecache pages mapped into pagetables. diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h index d33e3280c8ad..f889af1a6aed 100644 --- a/include/linux/page-flags.h +++ b/include/linux/page-flags.h @@ -73,13 +73,14 @@ */ enum pageflags { PG_locked, /* Page is locked. Don't touch. */ - PG_error, PG_referenced, PG_uptodate, PG_dirty, PG_lru, PG_active, + PG_workingset, PG_waiters, /* Page has waiters, check its waitqueue. Must be bit #7 and in the same byte as "PG_locked" */ + PG_error, PG_slab, PG_owner_priv_1, /* Owner use. 
If pagecache, fs may use*/ PG_arch_1, @@ -272,6 +273,8 @@ PAGEFLAG(Dirty, dirty, PF_HEAD) TESTSCFLAG(Dirty, dirty, PF_HEAD) PAGEFLAG(LRU, lru, PF_HEAD) __CLEARPAGEFLAG(LRU, lru, PF_HEAD) PAGEFLAG(Active, active, PF_HEAD) __CLEARPAGEFLAG(Active, active, PF_HEAD) TESTCLEARFLAG(Active, active, PF_HEAD) +PAGEFLAG(Workingset, workingset, PF_HEAD) + TESTCLEARFLAG(Workingset, workingset, PF_HEAD) __PAGEFLAG(Slab, slab, PF_NO_TAIL) __PAGEFLAG(SlobFree, slob_free, PF_NO_TAIL) PAGEFLAG(Checked, checked, PF_NO_COMPOUND) /* Used by some filesystems */ diff --git a/include/linux/swap.h b/include/linux/swap.h index d83d28e53e62..914a173beee1 100644 --- a/include/linux/swap.h +++ b/include/linux/swap.h @@ -252,7 +252,7 @@ struct swap_info_struct { /* linux/mm/workingset.c */ void *workingset_eviction(struct address_space *mapping, struct page *page); -bool workingset_refault(void *shadow); +void workingset_refault(struct page *page, void *shadow); void workingset_activation(struct page *page); void workingset_update_node(struct radix_tree_node *node, void *private); diff --git a/include/trace/events/mmflags.h b/include/trace/events/mmflags.h index 8e50d01c645f..aac9eb272754 100644 --- a/include/trace/events/mmflags.h +++ b/include/trace/events/mmflags.h @@ -90,6 +90,7 @@ {1UL << PG_dirty, "dirty" }, \ {1UL << PG_lru, "lru" }, \ {1UL << PG_active, "active" }, \ + {1UL << PG_workingset, "workingset" }, \ {1UL << PG_slab, "slab" }, \ {1UL << PG_owner_priv_1, "owner_priv_1" }, \ {1UL << PG_arch_1, "arch_1" }, \ diff --git a/mm/filemap.c b/mm/filemap.c index 65b4b6e7f7bd..da55a5693da9 100644 --- a/mm/filemap.c +++ b/mm/filemap.c @@ -823,12 +823,9 @@ int add_to_page_cache_lru(struct page *page, struct address_space *mapping, * data from the working set, only to cache data that will * get overwritten with something else, is a waste of memory. 
*/ - if (!(gfp_mask & __GFP_WRITE) && - shadow && workingset_refault(shadow)) { - SetPageActive(page); - workingset_activation(page); - } else - ClearPageActive(page); + WARN_ON_ONCE(PageActive(page)); + if (!(gfp_mask & __GFP_WRITE) && shadow) + workingset_refault(page, shadow); lru_cache_add(page); } return ret; diff --git a/mm/huge_memory.c b/mm/huge_memory.c index 90731e3b7e58..b18ac8084c2a 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -2239,6 +2239,7 @@ static void __split_huge_page_tail(struct page *head, int tail, (1L << PG_mlocked) | (1L << PG_uptodate) | (1L << PG_active) | + (1L << PG_workingset) | (1L << PG_locked) | (1L << PG_unevictable) | (1L << PG_dirty))); diff --git a/mm/memcontrol.c b/mm/memcontrol.c index e09741af816f..93b2eb063afd 100644 --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -5274,6 +5274,8 @@ static int memory_stat_show(struct seq_file *m, void *v) stat[WORKINGSET_REFAULT]); seq_printf(m, "workingset_activate %lu\n", stat[WORKINGSET_ACTIVATE]); + seq_printf(m, "workingset_restore %lu\n", + stat[WORKINGSET_RESTORE]); seq_printf(m, "workingset_nodereclaim %lu\n", stat[WORKINGSET_NODERECLAIM]); diff --git a/mm/migrate.c b/mm/migrate.c index e84eeb4e4356..48f4a79869ce 100644 --- a/mm/migrate.c +++ b/mm/migrate.c @@ -624,6 +624,8 @@ void migrate_page_copy(struct page *newpage, struct page *page) SetPageActive(newpage); } else if (TestClearPageUnevictable(page)) SetPageUnevictable(newpage); + if (PageWorkingset(page)) + SetPageWorkingset(newpage); if (PageChecked(page)) SetPageChecked(newpage); if (PageMappedToDisk(page)) diff --git a/mm/swap_state.c b/mm/swap_state.c index b68c93014f50..b39b3969be07 100644 --- a/mm/swap_state.c +++ b/mm/swap_state.c @@ -387,6 +387,7 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask, /* * Initiate read into locked page and return. */ + SetPageWorkingset(new_page); lru_cache_add_anon(new_page); *new_page_allocated = true; return new_page; diff --git a/mm/vmscan.c b/mm/vmscan.c index a1af041930a6..60357cd84c67 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -2022,6 +2022,7 @@ static void shrink_active_list(unsigned long nr_to_scan, } ClearPageActive(page); /* we are de-activating */ + SetPageWorkingset(page); list_add(&page->lru, &l_inactive); } diff --git a/mm/vmstat.c b/mm/vmstat.c index 9a4441bbeef2..87ce53498828 100644 --- a/mm/vmstat.c +++ b/mm/vmstat.c @@ -956,6 +956,7 @@ const char * const vmstat_text[] = { "nr_isolated_file", "workingset_refault", "workingset_activate", + "workingset_restore", "workingset_nodereclaim", "nr_anon_pages", "nr_mapped", diff --git a/mm/workingset.c b/mm/workingset.c index 7119cd745ace..264f0498f2bc 100644 --- a/mm/workingset.c +++ b/mm/workingset.c @@ -120,7 +120,7 @@ * the only thing eating into inactive list space is active pages. * * - * Activating refaulting pages + * Refaulting inactive pages * * All that is known about the active list is that the pages have been * accessed more than once in the past. This means that at any given @@ -133,6 +133,10 @@ * used less frequently than the refaulting page - or even not used at * all anymore. * + * That means if inactive cache is refaulting with a suitable refault + * distance, we assume the cache workingset is transitioning and put + * pressure on the current active list. + * * If this is wrong and demotion kicks in, the pages which are truly * used more frequently will be reactivated while the less frequently * used once will be evicted from memory. 
@@ -140,6 +144,14 @@ * But if this is right, the stale pages will be pushed out of memory * and the used pages get to stay in cache. * + * Refaulting active pages + * + * If on the other hand the refaulting pages have recently been + * deactivated, it means that the active list is no longer protecting + * actively used cache from reclaim. The cache is NOT transitioning to + * a different workingset; the existing workingset is thrashing in the + * space allocated to the page cache. + * * * Implementation * @@ -155,8 +167,7 @@ */ #define EVICTION_SHIFT (RADIX_TREE_EXCEPTIONAL_ENTRY + \ - NODES_SHIFT + \ - MEM_CGROUP_ID_SHIFT) + 1 + NODES_SHIFT + MEM_CGROUP_ID_SHIFT) #define EVICTION_MASK (~0UL >> EVICTION_SHIFT) /* @@ -169,23 +180,28 @@ */ static unsigned int bucket_order __read_mostly; -static void *pack_shadow(int memcgid, pg_data_t *pgdat, unsigned long eviction) +static void *pack_shadow(int memcgid, pg_data_t *pgdat, unsigned long eviction, + bool workingset) { eviction >>= bucket_order; eviction = (eviction << MEM_CGROUP_ID_SHIFT) | memcgid; eviction = (eviction << NODES_SHIFT) | pgdat->node_id; + eviction = (eviction << 1) | workingset; eviction = (eviction << RADIX_TREE_EXCEPTIONAL_SHIFT); return (void *)(eviction | RADIX_TREE_EXCEPTIONAL_ENTRY); } static void unpack_shadow(void *shadow, int *memcgidp, pg_data_t **pgdat, - unsigned long *evictionp) + unsigned long *evictionp, bool *workingsetp) { unsigned long entry = (unsigned long)shadow; int memcgid, nid; + bool workingset; entry >>= RADIX_TREE_EXCEPTIONAL_SHIFT; + workingset = entry & 1; + entry >>= 1; nid = entry & ((1UL << NODES_SHIFT) - 1); entry >>= NODES_SHIFT; memcgid = entry & ((1UL << MEM_CGROUP_ID_SHIFT) - 1); @@ -194,6 +210,7 @@ static void unpack_shadow(void *shadow, int *memcgidp, pg_data_t **pgdat, *memcgidp = memcgid; *pgdat = NODE_DATA(nid); *evictionp = entry << bucket_order; + *workingsetp = workingset; } /** @@ -206,8 +223,8 @@ static void unpack_shadow(void *shadow, int *memcgidp, pg_data_t **pgdat, */ void *workingset_eviction(struct address_space *mapping, struct page *page) { - struct mem_cgroup *memcg = page_memcg(page); struct pglist_data *pgdat = page_pgdat(page); + struct mem_cgroup *memcg = page_memcg(page); int memcgid = mem_cgroup_id(memcg); unsigned long eviction; struct lruvec *lruvec; @@ -219,30 +236,30 @@ void *workingset_eviction(struct address_space *mapping, struct page *page) lruvec = mem_cgroup_lruvec(pgdat, memcg); eviction = atomic_long_inc_return(&lruvec->inactive_age); - return pack_shadow(memcgid, pgdat, eviction); + return pack_shadow(memcgid, pgdat, eviction, PageWorkingset(page)); } /** * workingset_refault - evaluate the refault of a previously evicted page + * @page: the freshly allocated replacement page * @shadow: shadow entry of the evicted page * * Calculates and evaluates the refault distance of the previously * evicted page in the context of the node it was allocated in. - * - * Returns %true if the page should be activated, %false otherwise. 
*/ -bool workingset_refault(void *shadow) +void workingset_refault(struct page *page, void *shadow) { unsigned long refault_distance; + struct pglist_data *pgdat; unsigned long active_file; struct mem_cgroup *memcg; unsigned long eviction; struct lruvec *lruvec; unsigned long refault; - struct pglist_data *pgdat; + bool workingset; int memcgid; - unpack_shadow(shadow, &memcgid, &pgdat, &eviction); + unpack_shadow(shadow, &memcgid, &pgdat, &eviction, &workingset); rcu_read_lock(); /* @@ -262,41 +279,50 @@ bool workingset_refault(void *shadow) * configurations instead. */ memcg = mem_cgroup_from_id(memcgid); - if (!mem_cgroup_disabled() && !memcg) { - rcu_read_unlock(); - return false; - } + if (!mem_cgroup_disabled() && !memcg) + goto out; lruvec = mem_cgroup_lruvec(pgdat, memcg); refault = atomic_long_read(&lruvec->inactive_age); active_file = lruvec_lru_size(lruvec, LRU_ACTIVE_FILE, MAX_NR_ZONES); /* - * The unsigned subtraction here gives an accurate distance - * across inactive_age overflows in most cases. + * Calculate the refault distance * - * There is a special case: usually, shadow entries have a - * short lifetime and are either refaulted or reclaimed along - * with the inode before they get too old. But it is not - * impossible for the inactive_age to lap a shadow entry in - * the field, which can then can result in a false small - * refault distance, leading to a false activation should this - * old entry actually refault again. However, earlier kernels - * used to deactivate unconditionally with *every* reclaim - * invocation for the longest time, so the occasional - * inappropriate activation leading to pressure on the active - * list is not a problem. + * The unsigned subtraction here gives an accurate distance + * across inactive_age overflows in most cases. There is a + * special case: usually, shadow entries have a short lifetime + * and are either refaulted or reclaimed along with the inode + * before they get too old. But it is not impossible for the + * inactive_age to lap a shadow entry in the field, which can + * then can result in a false small refault distance, leading + * to a false activation should this old entry actually + * refault again. However, earlier kernels used to deactivate + * unconditionally with *every* reclaim invocation for the + * longest time, so the occasional inappropriate activation + * leading to pressure on the active list is not a problem. */ refault_distance = (refault - eviction) & EVICTION_MASK; inc_lruvec_state(lruvec, WORKINGSET_REFAULT); - if (refault_distance <= active_file) { - inc_lruvec_state(lruvec, WORKINGSET_ACTIVATE); - rcu_read_unlock(); - return true; - } + /* + * Compare the distance to the existing workingset size. We + * don't act on pages that couldn't stay resident even if all + * the memory was available to the page cache. 
+ */ + if (refault_distance > active_file) + goto out; + + SetPageActive(page); + SetPageWorkingset(page); + atomic_long_inc(&lruvec->inactive_age); + inc_lruvec_state(lruvec, WORKINGSET_ACTIVATE); + + /* Page was active prior to eviction */ + if (workingset) + inc_lruvec_state(lruvec, WORKINGSET_RESTORE); +out: rcu_read_unlock(); - return false; } /** -- 2.14.1 --xHFwDpU9dbj6ez1V Content-Type: text/x-diff; charset=us-ascii Content-Disposition: attachment; filename="0003-mm-sched-memdelay-memory-health-interface-for-system.patch" >From c3e97f5daf99bcd54383eaab466c477dbb743dd9 Mon Sep 17 00:00:00 2001 From: Johannes Weiner Date: Mon, 5 Jun 2017 16:07:22 -0400 Subject: [PATCH 3/3] mm/sched: memdelay: memory health interface for systems and workloads Linux doesn't have a useful metric to describe the memory health of a system, a cgroup container, or individual tasks. When workloads are bigger than available memory, they spend a certain amount of their time inside page reclaim, waiting on thrashing cache, and swapping in. This has impact on latency, and depending on the CPU capacity in the system can also translate to a decrease in throughput. While Linux exports some stats and counters for these events, it does not quantify the true impact they have on throughput and latency. How much of the execution time is spent unproductively? This is important to know when sizing workloads to systems and containers. It also comes in handy when evaluating the effectiveness and efficiency of the kernel's memory management policies and heuristics. This patch implements a metric that quantifies memory pressure in a unit that matters most to applications and does not rely on hardware aspects to be meaningful: wallclock time lost while waiting on memory. Whenever a task is blocked on refaults, swapins, or direct reclaim, the time it spends is accounted on the task level and aggregated into a domain state along with other tasks on the system and cgroup level. Each task has a /proc//memdelay file that lists the microseconds the task has been delayed since it's been forked. That file can be sampled periodically for recent delays, or before and after certain operations to measure their memory-related latencies. On the system and cgroup-level, there are /proc/memdelay and memory.memdelay, respectively, and their format is as such: $ cat /proc/memdelay 2489084 41.61 47.28 29.66 0.00 0.00 0.00 The first line shows the cumulative delay times of all tasks in the domain - in this case, all tasks in the system cumulatively lost 2.49 seconds due to memory delays. The second and third line show percentages spent in aggregate states for the domain - system or cgroup - in a load average type format as decaying averages over the last 1m, 5m, and 15m: The second line indicates the share of wall-time the domain spends in a state where SOME tasks are delayed by memory while others are still productive (runnable or iowait). This indicates a latency problem for individual tasks, but since the CPU/IO capacity is still used, adding more memory might not necessarily improve the domain's throughput. The third line indicates the share of wall-time the domain spends in a state where ALL non-idle tasks are delayed by memory. In this state, the domain is entirely unproductive due to a lack of memory. v2: - fix active-delay condition when only other runnables, no iowait - drop private lock from sched path, we can use the rq lock - fix refault vs. 
simple lockwait detection - drop ktime, we can use cpu_clock() XXX: - eliminate redundant cgroup hierarchy walks in the scheduler Signed-off-by: Johannes Weiner --- fs/proc/array.c | 8 ++ fs/proc/base.c | 2 + fs/proc/internal.h | 2 + include/linux/memcontrol.h | 14 +++ include/linux/memdelay.h | 182 +++++++++++++++++++++++++++++ include/linux/sched.h | 8 ++ kernel/cgroup/cgroup.c | 3 +- kernel/fork.c | 4 + kernel/sched/Makefile | 2 +- kernel/sched/core.c | 27 +++++ kernel/sched/memdelay.c | 118 +++++++++++++++++++ mm/Makefile | 2 +- mm/compaction.c | 4 + mm/filemap.c | 11 ++ mm/memcontrol.c | 25 ++++ mm/memdelay.c | 285 +++++++++++++++++++++++++++++++++++++++++++++ mm/page_alloc.c | 11 +- mm/vmscan.c | 9 ++ 18 files changed, 712 insertions(+), 5 deletions(-) create mode 100644 include/linux/memdelay.h create mode 100644 kernel/sched/memdelay.c create mode 100644 mm/memdelay.c diff --git a/fs/proc/array.c b/fs/proc/array.c index 88c355574aa0..00e0e9aa3e70 100644 --- a/fs/proc/array.c +++ b/fs/proc/array.c @@ -611,6 +611,14 @@ int proc_pid_statm(struct seq_file *m, struct pid_namespace *ns, return 0; } +int proc_pid_memdelay(struct seq_file *m, struct pid_namespace *ns, + struct pid *pid, struct task_struct *task) +{ + seq_put_decimal_ull(m, "", task->memdelay_total); + seq_putc(m, '\n'); + return 0; +} + #ifdef CONFIG_PROC_CHILDREN static struct pid * get_children_pid(struct inode *inode, struct pid *pid_prev, loff_t pos) diff --git a/fs/proc/base.c b/fs/proc/base.c index 719c2e943ea1..19f194940c80 100644 --- a/fs/proc/base.c +++ b/fs/proc/base.c @@ -2916,6 +2916,7 @@ static const struct pid_entry tgid_base_stuff[] = { REG("cmdline", S_IRUGO, proc_pid_cmdline_ops), ONE("stat", S_IRUGO, proc_tgid_stat), ONE("statm", S_IRUGO, proc_pid_statm), + ONE("memdelay", S_IRUGO, proc_pid_memdelay), REG("maps", S_IRUGO, proc_pid_maps_operations), #ifdef CONFIG_NUMA REG("numa_maps", S_IRUGO, proc_pid_numa_maps_operations), @@ -3307,6 +3308,7 @@ static const struct pid_entry tid_base_stuff[] = { REG("cmdline", S_IRUGO, proc_pid_cmdline_ops), ONE("stat", S_IRUGO, proc_tid_stat), ONE("statm", S_IRUGO, proc_pid_statm), + ONE("memdelay", S_IRUGO, proc_pid_memdelay), REG("maps", S_IRUGO, proc_tid_maps_operations), #ifdef CONFIG_PROC_CHILDREN REG("children", S_IRUGO, proc_tid_children_operations), diff --git a/fs/proc/internal.h b/fs/proc/internal.h index aa2b89071630..7ab706c316b8 100644 --- a/fs/proc/internal.h +++ b/fs/proc/internal.h @@ -146,6 +146,8 @@ extern int proc_pid_status(struct seq_file *, struct pid_namespace *, struct pid *, struct task_struct *); extern int proc_pid_statm(struct seq_file *, struct pid_namespace *, struct pid *, struct task_struct *); +extern int proc_pid_memdelay(struct seq_file *, struct pid_namespace *, + struct pid *, struct task_struct *); /* * base.c diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h index 9b15a4bcfa77..1f720d3090f7 100644 --- a/include/linux/memcontrol.h +++ b/include/linux/memcontrol.h @@ -30,6 +30,7 @@ #include #include #include +#include struct mem_cgroup; struct page; @@ -183,6 +184,9 @@ struct mem_cgroup { unsigned long soft_limit; + /* Memory delay measurement domain */ + struct memdelay_domain *memdelay_domain; + /* vmpressure notifications */ struct vmpressure vmpressure; @@ -728,6 +732,11 @@ static inline struct lruvec *mem_cgroup_page_lruvec(struct page *page, return &pgdat->lruvec; } +static inline struct mem_cgroup *parent_mem_cgroup(struct mem_cgroup *memcg) +{ + return NULL; +} + static inline bool mm_match_cgroup(struct 
mm_struct *mm, struct mem_cgroup *memcg) { @@ -740,6 +749,11 @@ static inline bool task_in_mem_cgroup(struct task_struct *task, return true; } +static inline struct mem_cgroup *mem_cgroup_from_task(struct task_struct *task) +{ + return NULL; +} + static inline struct mem_cgroup * mem_cgroup_iter(struct mem_cgroup *root, struct mem_cgroup *prev, diff --git a/include/linux/memdelay.h b/include/linux/memdelay.h new file mode 100644 index 000000000000..08ed4e4baedf --- /dev/null +++ b/include/linux/memdelay.h @@ -0,0 +1,182 @@ +#ifndef _LINUX_MEMDELAY_H +#define _LINUX_MEMDELAY_H + +#include +#include + +struct seq_file; +struct css_set; + +/* + * Task productivity states tracked by the scheduler + */ +enum memdelay_task_state { + MTS_NONE, /* Idle/unqueued/untracked */ + MTS_IOWAIT, /* Waiting for IO, not memory delayed */ + MTS_RUNNABLE, /* On the runqueue, not memory delayed */ + MTS_DELAYED, /* Memory delayed, not running */ + MTS_DELAYED_ACTIVE, /* Memory delayed, actively running */ + NR_MEMDELAY_TASK_STATES, +}; + +/* + * System/cgroup delay state tracked by the VM, composed of the + * productivity states of all tasks inside the domain. + */ +enum memdelay_domain_state { + MDS_NONE, /* No delayed tasks */ + MDS_SOME, /* Delayed tasks, working tasks */ + MDS_FULL, /* Delayed tasks, no working tasks */ + NR_MEMDELAY_DOMAIN_STATES, +}; + +struct memdelay_domain_cpu { + /* Task states of the domain on this CPU */ + int tasks[NR_MEMDELAY_TASK_STATES]; + + /* Delay state of the domain on this CPU */ + enum memdelay_domain_state state; + + /* Time of last state change */ + u64 state_start; +}; + +struct memdelay_domain { + /* Aggregate delayed time of all domain tasks */ + unsigned long aggregate; + + /* Per-CPU delay states in the domain */ + struct memdelay_domain_cpu __percpu *mdcs; + + /* Cumulative state times from all CPUs */ + unsigned long times[NR_MEMDELAY_DOMAIN_STATES]; + + /* Decaying state time averages over 1m, 5m, 15m */ + unsigned long period_expires; + unsigned long avg_full[3]; + unsigned long avg_some[3]; +}; + +/* mm/memdelay.c */ +extern struct memdelay_domain memdelay_global_domain; +void memdelay_init(void); +void memdelay_task_change(struct task_struct *task, + enum memdelay_task_state old, + enum memdelay_task_state new); +struct memdelay_domain *memdelay_domain_alloc(void); +void memdelay_domain_free(struct memdelay_domain *md); +int memdelay_domain_show(struct seq_file *s, struct memdelay_domain *md); + +/* kernel/sched/memdelay.c */ +void memdelay_enter(unsigned long *flags); +void memdelay_leave(unsigned long *flags); + +/** + * memdelay_schedule - note a context switch + * @prev: task scheduling out + * @next: task scheduling in + * + * A task switch doesn't affect the balance between delayed and + * productive tasks, but we have to update whether the delay is + * actively using the CPU or not. + */ +static inline void memdelay_schedule(struct task_struct *prev, + struct task_struct *next) +{ + if (prev->flags & PF_MEMDELAY) + memdelay_task_change(prev, MTS_DELAYED_ACTIVE, MTS_DELAYED); + + if (next->flags & PF_MEMDELAY) + memdelay_task_change(next, MTS_DELAYED, MTS_DELAYED_ACTIVE); +} + +/** + * memdelay_wakeup - note a task waking up + * @task: the task + * + * Notes an idle task becoming productive. Delayed tasks remain + * delayed even when they become runnable. 
+ */ +static inline void memdelay_wakeup(struct task_struct *task) +{ + if (task->flags & PF_MEMDELAY) + return; + + if (task->in_iowait) + memdelay_task_change(task, MTS_IOWAIT, MTS_RUNNABLE); + else + memdelay_task_change(task, MTS_NONE, MTS_RUNNABLE); +} + +/** + * memdelay_wakeup - note a task going to sleep + * @task: the task + * + * Notes a working tasks becoming unproductive. Delayed tasks remain + * delayed. + */ +static inline void memdelay_sleep(struct task_struct *task) +{ + if (task->flags & PF_MEMDELAY) + return; + + if (task->in_iowait) + memdelay_task_change(task, MTS_RUNNABLE, MTS_IOWAIT); + else + memdelay_task_change(task, MTS_RUNNABLE, MTS_NONE); +} + +/** + * memdelay_del_add - track task movement between runqueues + * @task: the task + * @runnable: a runnable task is moved if %true, unqueued otherwise + * @add: task is being added if %true, removed otherwise + * + * Update the memdelay domain per-cpu states as tasks are being moved + * around the runqueues. + */ +static inline void memdelay_del_add(struct task_struct *task, + bool runnable, bool add) +{ + int state; + + if (task->flags & PF_MEMDELAY) + state = MTS_DELAYED; + else if (runnable) + state = MTS_RUNNABLE; + else if (task->in_iowait) + state = MTS_IOWAIT; + else + return; /* already MTS_NONE */ + + if (add) + memdelay_task_change(task, MTS_NONE, state); + else + memdelay_task_change(task, state, MTS_NONE); +} + +static inline void memdelay_del_runnable(struct task_struct *task) +{ + memdelay_del_add(task, true, false); +} + +static inline void memdelay_add_runnable(struct task_struct *task) +{ + memdelay_del_add(task, true, true); +} + +static inline void memdelay_del_sleeping(struct task_struct *task) +{ + memdelay_del_add(task, false, false); +} + +static inline void memdelay_add_sleeping(struct task_struct *task) +{ + memdelay_del_add(task, false, true); +} + +#ifdef CONFIG_CGROUPS +void cgroup_move_task(struct task_struct *task, struct css_set *to); +#endif + +#endif /* _LINUX_MEMDELAY_H */ diff --git a/include/linux/sched.h b/include/linux/sched.h index c05ac5f5aa03..de15e3c8c43a 100644 --- a/include/linux/sched.h +++ b/include/linux/sched.h @@ -651,6 +651,7 @@ struct task_struct { /* disallow userland-initiated cgroup migration */ unsigned no_cgroup_migration:1; #endif + unsigned memdelay_migrate_enqueue:1; unsigned long atomic_flags; /* Flags requiring atomic access. 
*/ @@ -871,6 +872,12 @@ struct task_struct { struct io_context *io_context; + u64 memdelay_start; + unsigned long memdelay_total; +#ifdef CONFIG_DEBUG_VM + int memdelay_state; +#endif + /* Ptrace state: */ unsigned long ptrace_message; siginfo_t *last_siginfo; @@ -1274,6 +1281,7 @@ extern struct pid *cad_pid; #define PF_KTHREAD 0x00200000 /* I am a kernel thread */ #define PF_RANDOMIZE 0x00400000 /* Randomize virtual address space */ #define PF_SWAPWRITE 0x00800000 /* Allowed to write to swap */ +#define PF_MEMDELAY 0x01000000 /* Delayed due to lack of memory */ #define PF_NO_SETAFFINITY 0x04000000 /* Userland is not allowed to meddle with cpus_allowed */ #define PF_MCE_EARLY 0x08000000 /* Early kill for mce process policy */ #define PF_MUTEX_TESTER 0x20000000 /* Thread belongs to the rt mutex tester */ diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c index df2e0f14a95d..930aaef50396 100644 --- a/kernel/cgroup/cgroup.c +++ b/kernel/cgroup/cgroup.c @@ -699,7 +699,8 @@ static void css_set_move_task(struct task_struct *task, */ WARN_ON_ONCE(task->flags & PF_EXITING); - rcu_assign_pointer(task->cgroups, to_cset); + cgroup_move_task(task, to_cset); + list_add_tail(&task->cg_list, use_mg_tasks ? &to_cset->mg_tasks : &to_cset->tasks); } diff --git a/kernel/fork.c b/kernel/fork.c index b7e9e57b71ea..96dd35393be9 100644 --- a/kernel/fork.c +++ b/kernel/fork.c @@ -1208,6 +1208,10 @@ static int copy_mm(unsigned long clone_flags, struct task_struct *tsk) int retval; tsk->min_flt = tsk->maj_flt = 0; + tsk->memdelay_total = 0; +#ifdef CONFIG_DEBUG_VM + tsk->memdelay_state = 0; +#endif tsk->nvcsw = tsk->nivcsw = 0; #ifdef CONFIG_DETECT_HUNG_TASK tsk->last_switch_count = tsk->nvcsw + tsk->nivcsw; diff --git a/kernel/sched/Makefile b/kernel/sched/Makefile index 53f0164ed362..84390fc42f60 100644 --- a/kernel/sched/Makefile +++ b/kernel/sched/Makefile @@ -17,7 +17,7 @@ endif obj-y += core.o loadavg.o clock.o cputime.o obj-y += idle_task.o fair.o rt.o deadline.o -obj-y += wait.o wait_bit.o swait.o completion.o idle.o +obj-y += wait.o wait_bit.o swait.o completion.o idle.o memdelay.o obj-$(CONFIG_SMP) += cpupri.o cpudeadline.o topology.o stop_task.o obj-$(CONFIG_SCHED_AUTOGROUP) += autogroup.o obj-$(CONFIG_SCHEDSTATS) += stats.o diff --git a/kernel/sched/core.c b/kernel/sched/core.c index 0869b20fba81..bf105c870da6 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -26,6 +26,7 @@ #include #include #include +#include #include #include @@ -759,6 +760,14 @@ static inline void enqueue_task(struct rq *rq, struct task_struct *p, int flags) if (!(flags & ENQUEUE_RESTORE)) sched_info_queued(rq, p); + WARN_ON_ONCE(!(flags & ENQUEUE_WAKEUP) && p->memdelay_migrate_enqueue); + if (!(flags & ENQUEUE_WAKEUP) || p->memdelay_migrate_enqueue) { + memdelay_add_runnable(p); + p->memdelay_migrate_enqueue = 0; + } else { + memdelay_wakeup(p); + } + p->sched_class->enqueue_task(rq, p, flags); } @@ -770,6 +779,11 @@ static inline void dequeue_task(struct rq *rq, struct task_struct *p, int flags) if (!(flags & DEQUEUE_SAVE)) sched_info_dequeued(rq, p); + if (!(flags & DEQUEUE_SLEEP)) + memdelay_del_runnable(p); + else + memdelay_sleep(p); + p->sched_class->dequeue_task(rq, p, flags); } @@ -2044,7 +2058,16 @@ try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags) cpu = select_task_rq(p, p->wake_cpu, SD_BALANCE_WAKE, wake_flags); if (task_cpu(p) != cpu) { + struct rq_flags rf; + struct rq *rq; + wake_flags |= WF_MIGRATED; + + rq = __task_rq_lock(p, &rf); + memdelay_del_sleeping(p); + 
__task_rq_unlock(rq, &rf); + p->memdelay_migrate_enqueue = 1; + set_task_cpu(p, cpu); } @@ -3326,6 +3349,8 @@ static void __sched notrace __schedule(bool preempt) rq->curr = next; ++*switch_count; + memdelay_schedule(prev, next); + trace_sched_switch(preempt, prev, next); /* Also unlocks the rq: */ @@ -5919,6 +5944,8 @@ void __init sched_init(void) init_schedstats(); + memdelay_init(); + scheduler_running = 1; } diff --git a/kernel/sched/memdelay.c b/kernel/sched/memdelay.c new file mode 100644 index 000000000000..1d4813cd018a --- /dev/null +++ b/kernel/sched/memdelay.c @@ -0,0 +1,118 @@ +/* + * Memory delay metric + * + * Copyright (c) 2017 Facebook, Johannes Weiner + * + * This code quantifies and reports to userspace the wall-time impact + * of memory pressure on the system and memory-controlled cgroups. + */ + +#include +#include +#include + +#include "sched.h" + +/** + * memdelay_enter - mark the beginning of a memory delay section + * @flags: flags to handle nested memdelay sections + * + * Marks the calling task as being delayed due to a lack of memory, + * such as waiting for a workingset refault or performing reclaim. + */ +void memdelay_enter(unsigned long *flags) +{ + struct rq_flags rf; + struct rq *rq; + + *flags = current->flags & PF_MEMDELAY; + if (*flags) + return; + /* + * PF_MEMDELAY & accounting needs to be atomic wrt changes to + * the task's scheduling state and its domain association. + * Otherwise we could race with CPU or cgroup migration and + * misaccount. + */ + local_irq_disable(); + rq = this_rq(); + rq_lock(rq, &rf); + + current->flags |= PF_MEMDELAY; + memdelay_task_change(current, MTS_RUNNABLE, MTS_DELAYED_ACTIVE); + + rq_unlock(rq, &rf); + local_irq_enable(); +} + +/** + * memdelay_leave - mark the end of a memory delay section + * @flags: flags to handle nested memdelay sections + * + * Marks the calling task as no longer delayed due to memory. + */ +void memdelay_leave(unsigned long *flags) +{ + struct rq_flags rf; + struct rq *rq; + + if (*flags) + return; + /* + * PF_MEMDELAY & accounting needs to be atomic wrt changes to + * the task's scheduling state and its domain association. + * Otherwise we could race with CPU or cgroup migration and + * misaccount. + */ + local_irq_disable(); + rq = this_rq(); + rq_lock(rq, &rf); + + current->flags &= ~PF_MEMDELAY; + memdelay_task_change(current, MTS_DELAYED_ACTIVE, MTS_RUNNABLE); + + rq_unlock(rq, &rf); + local_irq_enable(); +} + +#ifdef CONFIG_CGROUPS +/** + * cgroup_move_task - move task to a different cgroup + * @task: the task + * @to: the target css_set + * + * Move task to a new cgroup and safely migrate its associated + * delayed/working state between the different domains. + * + * This function acquires the task's rq lock to lock out concurrent + * changes to the task's scheduling state and - in case the task is + * running - concurrent changes to its delay state. + */ +void cgroup_move_task(struct task_struct *task, struct css_set *to) +{ + struct rq_flags rf; + struct rq *rq; + int state; + + rq = task_rq_lock(task, &rf); + + if (task->flags & PF_MEMDELAY) + state = MTS_DELAYED + task_current(rq, task); + else if (task_on_rq_queued(task)) + state = MTS_RUNNABLE; + else if (task->in_iowait) + state = MTS_IOWAIT; + else + state = MTS_NONE; + + /* + * Lame to do this here, but the scheduler cannot be locked + * from the outside, so we move cgroups from inside sched/. 
+ */ + memdelay_task_change(task, state, MTS_NONE); + rcu_assign_pointer(task->cgroups, to); + memdelay_task_change(task, MTS_NONE, state); + + task_rq_unlock(rq, task, &rf); +} +#endif /* CONFIG_CGROUPS */ diff --git a/mm/Makefile b/mm/Makefile index 411bd24d4a7c..c9bdbc5627e5 100644 --- a/mm/Makefile +++ b/mm/Makefile @@ -39,7 +39,7 @@ obj-y := filemap.o mempool.o oom_kill.o \ mm_init.o mmu_context.o percpu.o slab_common.o \ compaction.o vmacache.o swap_slots.o \ interval_tree.o list_lru.o workingset.o \ - debug.o $(mmu-y) + memdelay.o debug.o $(mmu-y) obj-y += init-mm.o diff --git a/mm/compaction.c b/mm/compaction.c index fb548e4c7bd4..adf67de23fee 100644 --- a/mm/compaction.c +++ b/mm/compaction.c @@ -2040,11 +2040,15 @@ static int kcompactd(void *p) pgdat->kcompactd_classzone_idx = pgdat->nr_zones - 1; while (!kthread_should_stop()) { + unsigned long mdflags; + trace_mm_compaction_kcompactd_sleep(pgdat->node_id); wait_event_freezable(pgdat->kcompactd_wait, kcompactd_work_requested(pgdat)); + memdelay_enter(&mdflags); kcompactd_do_work(pgdat); + memdelay_leave(&mdflags); } return 0; diff --git a/mm/filemap.c b/mm/filemap.c index da55a5693da9..648418694405 100644 --- a/mm/filemap.c +++ b/mm/filemap.c @@ -36,6 +36,7 @@ #include #include #include +#include #include "internal.h" #define CREATE_TRACE_POINTS @@ -961,8 +962,15 @@ static inline int wait_on_page_bit_common(wait_queue_head_t *q, { struct wait_page_queue wait_page; wait_queue_entry_t *wait = &wait_page.wait; + unsigned long mdflags; + bool refault = false; int ret = 0; + if (bit_nr == PG_locked && !PageUptodate(page) && PageWorkingset(page)) { + memdelay_enter(&mdflags); + refault = true; + } + init_wait(wait); wait->flags = lock ? WQ_FLAG_EXCLUSIVE : 0; wait->func = wake_page_function; @@ -1001,6 +1009,9 @@ static inline int wait_on_page_bit_common(wait_queue_head_t *q, finish_wait(q, wait); + if (refault) + memdelay_leave(&mdflags); + /* * A signal could leave PageWaiters set. 
Clearing it here if * !waitqueue_active would be possible (by open-coding finish_wait), diff --git a/mm/memcontrol.c b/mm/memcontrol.c index 93b2eb063afd..102f0f4d3f5c 100644 --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -65,6 +65,7 @@ #include #include #include +#include #include "internal.h" #include #include @@ -3926,6 +3927,8 @@ static ssize_t memcg_write_event_control(struct kernfs_open_file *of, return ret; } +static int memory_memdelay_show(struct seq_file *m, void *v); + static struct cftype mem_cgroup_legacy_files[] = { { .name = "usage_in_bytes", @@ -3993,6 +3996,10 @@ static struct cftype mem_cgroup_legacy_files[] = { { .name = "pressure_level", }, + { + .name = "memdelay", + .seq_show = memory_memdelay_show, + }, #ifdef CONFIG_NUMA { .name = "numa_stat", @@ -4170,6 +4177,7 @@ static void __mem_cgroup_free(struct mem_cgroup *memcg) for_each_node(node) free_mem_cgroup_per_node_info(memcg, node); + memdelay_domain_free(memcg->memdelay_domain); free_percpu(memcg->stat); kfree(memcg); } @@ -4275,10 +4283,15 @@ mem_cgroup_css_alloc(struct cgroup_subsys_state *parent_css) /* The following stuff does not apply to the root */ if (!parent) { + memcg->memdelay_domain = &memdelay_global_domain; root_mem_cgroup = memcg; return &memcg->css; } + memcg->memdelay_domain = memdelay_domain_alloc(); + if (!memcg->memdelay_domain) + goto fail; + error = memcg_online_kmem(memcg); if (error) goto fail; @@ -5282,6 +5295,13 @@ static int memory_stat_show(struct seq_file *m, void *v) return 0; } +static int memory_memdelay_show(struct seq_file *m, void *v) +{ + struct mem_cgroup *memcg = mem_cgroup_from_css(seq_css(m)); + + return memdelay_domain_show(m, memcg->memdelay_domain); +} + static struct cftype memory_files[] = { { .name = "current", @@ -5317,6 +5337,11 @@ static struct cftype memory_files[] = { .flags = CFTYPE_NOT_ON_ROOT, .seq_show = memory_stat_show, }, + { + .name = "memdelay", + .flags = CFTYPE_NOT_ON_ROOT, + .seq_show = memory_memdelay_show, + }, { } /* terminate */ }; diff --git a/mm/memdelay.c b/mm/memdelay.c new file mode 100644 index 000000000000..c43d6f7ba22a --- /dev/null +++ b/mm/memdelay.c @@ -0,0 +1,285 @@ +/* + * Memory delay metric + * + * Copyright (c) 2017 Facebook, Johannes Weiner + * + * This code quantifies and reports to userspace the wall-time impact + * of memory pressure on the system and memory-controlled cgroups. + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +static DEFINE_PER_CPU(struct memdelay_domain_cpu, global_domain_cpus); + +/* System-level keeping of memory delay statistics */ +struct memdelay_domain memdelay_global_domain = { + .mdcs = &global_domain_cpus, +}; + +static void domain_init(struct memdelay_domain *md) +{ + md->period_expires = jiffies + LOAD_FREQ; +} + +/** + * memdelay_init - initialize the memdelay subsystem + * + * This needs to run before the scheduler starts queuing and + * scheduling tasks. 
+ */ +void __init memdelay_init(void) +{ + domain_init(&memdelay_global_domain); +} + +static void domain_move_clock(struct memdelay_domain *md) +{ + unsigned long expires = READ_ONCE(md->period_expires); + unsigned long none, some, full; + int missed_periods; + unsigned long next; + int i; + + if (time_before(jiffies, expires)) + return; + + missed_periods = 1 + (jiffies - expires) / LOAD_FREQ; + next = expires + (missed_periods * LOAD_FREQ); + + if (cmpxchg(&md->period_expires, expires, next) != expires) + return; + + none = xchg(&md->times[MDS_NONE], 0); + some = xchg(&md->times[MDS_SOME], 0); + full = xchg(&md->times[MDS_FULL], 0); + + for (i = 0; i < missed_periods; i++) { + unsigned long pct; + + pct = some * 100 / max(none + some + full, 1UL); + pct *= FIXED_1; + CALC_LOAD(md->avg_some[0], EXP_1, pct); + CALC_LOAD(md->avg_some[1], EXP_5, pct); + CALC_LOAD(md->avg_some[2], EXP_15, pct); + + pct = full * 100 / max(none + some + full, 1UL); + pct *= FIXED_1; + CALC_LOAD(md->avg_full[0], EXP_1, pct); + CALC_LOAD(md->avg_full[1], EXP_5, pct); + CALC_LOAD(md->avg_full[2], EXP_15, pct); + + none = some = full = 0; + } +} + +static void domain_cpu_update(struct memdelay_domain *md, int cpu, + enum memdelay_task_state old, + enum memdelay_task_state new) +{ + enum memdelay_domain_state state; + struct memdelay_domain_cpu *mdc; + unsigned long delta; + u64 now; + + mdc = per_cpu_ptr(md->mdcs, cpu); + + if (old) { + WARN_ONCE(!mdc->tasks[old], "cpu=%d old=%d new=%d counter=%d\n", + cpu, old, new, mdc->tasks[old]); + mdc->tasks[old] -= 1; + } + if (new) + mdc->tasks[new] += 1; + + /* + * The domain is somewhat delayed when a number of tasks are + * delayed but there are still others running the workload. + * + * The domain is fully delayed when all non-idle tasks on the + * CPU are delayed, or when a delayed task is actively running + * and preventing productive tasks from making headway. + * + * The state times then add up over all CPUs in the domain: if + * the domain is fully blocked on one CPU and there is another + * one running the workload, the domain is considered fully + * blocked 50% of the time. + */ + if (mdc->tasks[MTS_DELAYED_ACTIVE] && !mdc->tasks[MTS_IOWAIT]) + state = MDS_FULL; + else if (mdc->tasks[MTS_DELAYED]) + state = (mdc->tasks[MTS_RUNNABLE] || mdc->tasks[MTS_IOWAIT]) ? + MDS_SOME : MDS_FULL; + else + state = MDS_NONE; + + if (mdc->state == state) + return; + + now = cpu_clock(cpu); + delta = (now - mdc->state_start) / NSEC_PER_USEC; + + domain_move_clock(md); + md->times[mdc->state] += delta; + + mdc->state = state; + mdc->state_start = now; +} + +static struct memdelay_domain *memcg_domain(struct mem_cgroup *memcg) +{ +#ifdef CONFIG_MEMCG + if (!mem_cgroup_disabled()) + return memcg->memdelay_domain; +#endif + return &memdelay_global_domain; +} + +/** + * memdelay_task_change - note a task changing its delay/work state + * @task: the task changing state + * @old: old task state + * @new: new task state + * + * Updates the task's domain counters to reflect a change in the + * task's delayed/working state. 
+ */ +void memdelay_task_change(struct task_struct *task, + enum memdelay_task_state old, + enum memdelay_task_state new) +{ + int cpu = task_cpu(task); + struct mem_cgroup *memcg; + unsigned long delay = 0; + +#ifdef CONFIG_DEBUG_VM + WARN_ONCE(task->memdelay_state != old, + "cpu=%d task=%p state=%d (in_iowait=%d PF_MEMDELAYED=%d) old=%d new=%d\n", + cpu, task, task->memdelay_state, task->in_iowait, + !!(task->flags & PF_MEMDELAY), old, new); + task->memdelay_state = new; +#endif + + /* Account when tasks are entering and leaving delays */ + if (old < MTS_DELAYED && new >= MTS_DELAYED) { + task->memdelay_start = cpu_clock(cpu); + } else if (old >= MTS_DELAYED && new < MTS_DELAYED) { + delay = (cpu_clock(cpu) - task->memdelay_start) / NSEC_PER_USEC; + task->memdelay_total += delay; + } + + /* Account domain state changes */ + rcu_read_lock(); + memcg = mem_cgroup_from_task(task); + do { + struct memdelay_domain *md; + + md = memcg_domain(memcg); + md->aggregate += delay; + domain_cpu_update(md, cpu, old, new); + } while (memcg && (memcg = parent_mem_cgroup(memcg))); + rcu_read_unlock(); +}; + +/** + * memdelay_domain_alloc - allocate a cgroup memory delay domain + */ +struct memdelay_domain *memdelay_domain_alloc(void) +{ + struct memdelay_domain *md; + + md = kzalloc(sizeof(*md), GFP_KERNEL); + if (!md) + return NULL; + md->mdcs = alloc_percpu(struct memdelay_domain_cpu); + if (!md->mdcs) { + kfree(md); + return NULL; + } + domain_init(md); + return md; +} + +/** + * memdelay_domain_free - free a cgroup memory delay domain + */ +void memdelay_domain_free(struct memdelay_domain *md) +{ + if (md) { + free_percpu(md->mdcs); + kfree(md); + } +} + +/** + * memdelay_domain_show - format memory delay domain stats to a seq_file + * @s: the seq_file + * @md: the memory domain + */ +int memdelay_domain_show(struct seq_file *s, struct memdelay_domain *md) +{ + domain_move_clock(md); + + seq_printf(s, "%lu\n", md->aggregate); + + seq_printf(s, "%lu.%02lu %lu.%02lu %lu.%02lu\n", + LOAD_INT(md->avg_some[0]), LOAD_FRAC(md->avg_some[0]), + LOAD_INT(md->avg_some[1]), LOAD_FRAC(md->avg_some[1]), + LOAD_INT(md->avg_some[2]), LOAD_FRAC(md->avg_some[2])); + + seq_printf(s, "%lu.%02lu %lu.%02lu %lu.%02lu\n", + LOAD_INT(md->avg_full[0]), LOAD_FRAC(md->avg_full[0]), + LOAD_INT(md->avg_full[1]), LOAD_FRAC(md->avg_full[1]), + LOAD_INT(md->avg_full[2]), LOAD_FRAC(md->avg_full[2])); + +#ifdef CONFIG_DEBUG_VM + { + int cpu; + + for_each_online_cpu(cpu) { + struct memdelay_domain_cpu *mdc; + + mdc = per_cpu_ptr(md->mdcs, cpu); + seq_printf(s, "%d %d %d %d\n", + mdc->tasks[MTS_IOWAIT], + mdc->tasks[MTS_RUNNABLE], + mdc->tasks[MTS_DELAYED], + mdc->tasks[MTS_DELAYED_ACTIVE]); + } + } +#endif + + return 0; +} + +static int memdelay_show(struct seq_file *m, void *v) +{ + return memdelay_domain_show(m, &memdelay_global_domain); +} + +static int memdelay_open(struct inode *inode, struct file *file) +{ + return single_open(file, memdelay_show, NULL); +} + +static const struct file_operations memdelay_fops = { + .open = memdelay_open, + .read = seq_read, + .llseek = seq_lseek, + .release = single_release, +}; + +static int __init memdelay_proc_init(void) +{ + proc_create("memdelay", 0, NULL, &memdelay_fops); + return 0; +} +module_init(memdelay_proc_init); diff --git a/mm/page_alloc.c b/mm/page_alloc.c index 1423da8dd16f..d8d01e9df982 100644 --- a/mm/page_alloc.c +++ b/mm/page_alloc.c @@ -67,6 +67,7 @@ #include #include #include +#include #include #include @@ -3364,16 +3365,19 @@ __alloc_pages_direct_compact(gfp_t gfp_mask, 
unsigned int order, unsigned int alloc_flags, const struct alloc_context *ac, enum compact_priority prio, enum compact_result *compact_result) { - struct page *page; unsigned int noreclaim_flag; + unsigned long mdflags; + struct page *page; if (!order) return NULL; + memdelay_enter(&mdflags); noreclaim_flag = memalloc_noreclaim_save(); *compact_result = try_to_compact_pages(gfp_mask, order, alloc_flags, ac, prio); memalloc_noreclaim_restore(noreclaim_flag); + memdelay_leave(&mdflags); if (*compact_result <= COMPACT_INACTIVE) return NULL; @@ -3519,13 +3523,15 @@ __perform_reclaim(gfp_t gfp_mask, unsigned int order, const struct alloc_context *ac) { struct reclaim_state reclaim_state; - int progress; unsigned int noreclaim_flag; + unsigned long mdflags; + int progress; cond_resched(); /* We now go into synchronous reclaim */ cpuset_memory_pressure_bump(); + memdelay_enter(&mdflags); noreclaim_flag = memalloc_noreclaim_save(); lockdep_set_current_reclaim_state(gfp_mask); reclaim_state.reclaimed_slab = 0; @@ -3537,6 +3543,7 @@ __perform_reclaim(gfp_t gfp_mask, unsigned int order, current->reclaim_state = NULL; lockdep_clear_current_reclaim_state(); memalloc_noreclaim_restore(noreclaim_flag); + memdelay_leave(&mdflags); cond_resched(); diff --git a/mm/vmscan.c b/mm/vmscan.c index 60357cd84c67..1029305b9b3a 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -48,6 +48,7 @@ #include #include #include +#include #include #include @@ -3098,6 +3099,7 @@ unsigned long try_to_free_mem_cgroup_pages(struct mem_cgroup *memcg, { struct zonelist *zonelist; unsigned long nr_reclaimed; + unsigned long mdflags; int nid; unsigned int noreclaim_flag; struct scan_control sc = { @@ -3126,9 +3128,11 @@ unsigned long try_to_free_mem_cgroup_pages(struct mem_cgroup *memcg, sc.gfp_mask, sc.reclaim_idx); + memdelay_enter(&mdflags); noreclaim_flag = memalloc_noreclaim_save(); nr_reclaimed = do_try_to_free_pages(zonelist, &sc); memalloc_noreclaim_restore(noreclaim_flag); + memdelay_leave(&mdflags); trace_mm_vmscan_memcg_reclaim_end(nr_reclaimed); @@ -3550,6 +3554,7 @@ static int kswapd(void *p) pgdat->kswapd_order = 0; pgdat->kswapd_classzone_idx = MAX_NR_ZONES; for ( ; ; ) { + unsigned long mdflags; bool ret; alloc_order = reclaim_order = pgdat->kswapd_order; @@ -3586,7 +3591,11 @@ static int kswapd(void *p) */ trace_mm_vmscan_kswapd_wake(pgdat->node_id, classzone_idx, alloc_order); + + memdelay_enter(&mdflags); reclaim_order = balance_pgdat(pgdat, alloc_order, classzone_idx); + memdelay_leave(&mdflags); + if (reclaim_order < alloc_order) goto kswapd_try_sleep; } -- 2.14.1 --xHFwDpU9dbj6ez1V--
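
(Usage illustration added for clarity, not part of the series: a minimal
sketch that samples the /proc/memdelay interface described in patch 3. It
assumes a kernel with these patches applied, so that the file exists and
its first line is the cumulative delay of all tasks in microseconds.)

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* Read the first line of /proc/memdelay: cumulative memory delay in usecs. */
static unsigned long long read_memdelay_total(void)
{
        unsigned long long total;
        FILE *f = fopen("/proc/memdelay", "r");

        if (!f) {
                perror("/proc/memdelay");
                exit(1);
        }
        if (fscanf(f, "%llu", &total) != 1) {
                fprintf(stderr, "unexpected /proc/memdelay format\n");
                exit(1);
        }
        fclose(f);
        return total;
}

int main(void)
{
        unsigned long long before = read_memdelay_total();

        sleep(10);

        /* Wall-clock time tasks spent blocked on memory during the interval. */
        printf("memory delay over the last 10s: %llu us\n",
               read_memdelay_total() - before);
        return 0;
}

The same sampling approach works against /proc/<pid>/memdelay to measure
the memory-related latency of an individual task, for example before and
after a specific operation.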