In-Reply-To: <20180507210135.1823-1-hannes@cmpxchg.org>
References: <20180507210135.1823-1-hannes@cmpxchg.org>
From: Suren Baghdasaryan
Date: Fri, 25 May 2018 17:29:30 -0700
Subject: Re: [PATCH 0/7] psi: pressure stall information for CPU, memory, and IO
To: Johannes Weiner
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-block@vger.kernel.org,
    cgroups@vger.kernel.org, Ingo Molnar, Peter Zijlstra, Andrew Morton, Tejun Heo,
    Balbir Singh, Mike Galbraith, Oliver Yang, Shakeel Butt, xxx xxx,
    Taras Kondratiuk, Daniel Walker, Vinayak Menon, Ruslan Ruslichenko,
    kernel-team@fb.com

Hi Johannes,

I tried your previous memdelay patches before this new set was posted, and
the results were promising for predicting when an Android system is close to OOM.
I'm definitely going to try this one after I backport it to 4.9.

On Mon, May 7, 2018 at 2:01 PM, Johannes Weiner wrote:
> Hi,
>
> I previously submitted a version of this patch set called "memdelay",
> which translated delays from reclaim, swap-in, and thrashing page cache
> into a pressure percentage of lost walltime. I've since extended this
> code to aggregate all delay states tracked by delayacct in order to
> have generalized pressure/overcommit levels for CPU, memory, and IO.
>
> There was feedback from Peter on the previous version that I have
> incorporated as much as possible and that still applies to this code:
>
> - got rid of the extra lock in the sched callbacks; all task
>   state changes we care about serialize through rq->lock
>
> - got rid of ktime_get() inside the sched callbacks and
>   switched time measuring to rq_clock()
>
> - got rid of all divisions inside the sched callbacks,
>   tracking everything natively in ns now
>
> I also moved this stuff into existing sched/stat.h callbacks, so it
> doesn't get in the way in sched/core.c, and of course moved the whole
> thing behind CONFIG_PSI since not everyone is going to want it.

Would it make sense to split CONFIG_PSI into CONFIG_PSI_CPU, CONFIG_PSI_MEM
and CONFIG_PSI_IO, since one might need only a specific subset of this feature?

> Real-world applications
>
> Since the last posting, we've begun using the data collected by this
> code quite extensively at Facebook, with several success stories.
>
> First we used it on systems that frequently locked up in low-memory
> situations. The reason this happens is that the OOM killer is
> triggered by reclaim not being able to make forward progress, but with
> fast flash devices there is *always* some clean and up-to-date cache to
> reclaim; the OOM killer never kicks in, even as tasks wait 80-90% of
> the time faulting executables. There is no situation where this ever
> makes sense in practice. We wrote a <100-line POC Python script to
> monitor memory pressure and kill stuff manually, way before such
> pathological thrashing.
>
> We've since extended the Python script into a more generic oomd that
> we use all over the place, not just to avoid livelocks but also to
> guarantee latency and throughput SLAs, since they're usually violated
> way before the kernel OOM killer would ever kick in.
>
> We also use the memory pressure info for load shedding. Our batch job
> infrastructure used to refuse new requests based on heuristics around RSS
> and other existing VM metrics in an attempt to avoid OOM kills and
> maximize utilization. Since it was still plagued by frequent OOM
> kills, we switched it to shed load on psi memory pressure, which has
> turned out to be a much better bellwether, and we managed to reduce
> OOM kills drastically. Reducing the rate of OOM outages from the
> worker pool raised its aggregate productivity, and we were able to
> switch that service to smaller machines.
>
> Lastly, we use cgroups to isolate a machine's main workload from
> maintenance crap like package upgrades, logging, and configuration, as
> well as to prevent multiple workloads on a machine from stepping on
> each other's toes. We were not able to do this properly without the
> pressure metrics; we would see latency or bandwidth drops, but it
> was often hard or impossible to root-cause them post-mortem. We now
> log and graph the pressure metrics for all containers in our fleet and
> can trivially link service drops to resource pressure after the fact.
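To make the above concrete, here is a minimal sketch of that kind of userspace
pressure watchdog, assuming the /proc/pressure/memory format introduced in the
next section; the 40% threshold, the 1-second poll interval, and the overall
structure are illustrative placeholders, not the actual POC script or oomd logic:

#!/usr/bin/env python3
# Minimal sketch of a memory-pressure watchdog. The file path follows the
# /proc/pressure interface described below; the threshold and poll interval
# are arbitrary illustration values.
import time

PRESSURE_FILE = "/proc/pressure/memory"
FULL_AVG10_LIMIT = 40.0   # percent, hypothetical threshold
POLL_INTERVAL = 1.0       # seconds

def read_full_avg10(path=PRESSURE_FILE):
    """Return the 10-second 'full' memory pressure average, in percent."""
    with open(path) as f:
        for line in f:
            fields = line.split()
            # Lines look like: full avg10=57.59 avg60=58.06 avg300=60.38 total=...
            if fields and fields[0] == "full":
                return float(fields[1].split("=", 1)[1])
    return 0.0

def main():
    while True:
        full = read_full_avg10()
        if full > FULL_AVG10_LIMIT:
            # A real tool would pick and kill a victim (e.g. the largest
            # cgroup); here we only report that the threshold was crossed.
            print("memory pressure critical: full avg10=%.2f%%" % full)
        time.sleep(POLL_INTERVAL)

if __name__ == "__main__":
    main()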
> How do you use this?
>
> A kernel with CONFIG_PSI=y will create a /proc/pressure directory with
> 3 files: cpu, memory, and io. If using cgroup2, cgroups will also have
> cpu.pressure, memory.pressure and io.pressure files, which simply
> calculate pressure at the cgroup level instead of system-wide.
>
> The cpu file contains one line:
>
>   some avg10=2.04 avg60=0.75 avg300=0.40 total=157656722
>
> The averages give the percentage of walltime in which some tasks are
> delayed on the runqueue while another task has the CPU. They're recent
> averages over 10s, 1m, 5m windows, so you can tell short-term trends
> from long-term ones, similarly to the load average.
>
> What to make of this number? If CPU utilization is at 100% and CPU
> pressure is 0, it means the system is perfectly utilized, with one
> runnable thread per CPU and nobody waiting. At two or more runnable
> tasks per CPU, the system is 100% overcommitted and the pressure
> average will indicate as much. From a utilization perspective this is
> a great state, of course: no CPU cycles are being wasted, even if 50%
> of the threads were to go idle (and most workloads do vary). From the
> perspective of the individual job it's not great, however, and it
> might do better with more resources. Depending on what your priority
> is, an elevated "some" number may or may not require action.
>
> The memory file contains two lines:
>
>   some avg10=70.24 avg60=68.52 avg300=69.91 total=3559632828
>   full avg10=57.59 avg60=58.06 avg300=60.38 total=3300487258
>
> The some line is the same as for cpu: the time in which at least one
> task is stalled on the resource.
>
> The full line, however, indicates time in which *nobody* is using the
> CPU productively due to pressure: all non-idle tasks could be waiting
> on thrashing cache simultaneously. It can also happen when a single
> reclaimer occupies the CPU, since nothing else can make forward
> progress during that time. CPU cycles are being wasted. Significant
> time spent in there is a good trigger for killing, moving jobs to
> other machines, or dropping incoming requests, since neither the jobs
> nor the machine overall is making much headway.
>
> The total= value gives the absolute stall time in microseconds. This
> allows detecting latency spikes that might be too short to sway the
> running averages. It also allows custom time averaging in case the
> 10s/1m/5m windows aren't adequate for the use case (or are too coarse
> with future hardware).

Are there any reasons these specific windows were chosen (empirical
data/historical reasons)? I'm worried that with the smallest window being
10s, the signal might be too inert to detect fast memory-pressure buildup
before an OOM kill happens. I'll have to experiment with that first;
however, if you already have some insights into this, please share them.

> The io file is similar to memory. However, unlike CPU and memory, the
> block layer doesn't have a concept of hardware contention. We cannot
> know whether the IO a task is waiting on is being performed by the
> device or whether the device is busy with or slowed down by other
> requests. As a result, we can tell how many CPU cycles go to waste due
> to IO delays, but we cannot identify the competition factor in those
> delays.
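Since total= is an absolute stall counter in microseconds, one way around the
10s minimum window is to sample it at a higher frequency and compute deltas
yourself, as the cover letter suggests for custom time averaging. A rough
sketch under that assumption; the 1-second period, the file path, and the
function names are illustrative:

import time

PRESSURE_FILE = "/proc/pressure/memory"   # same format for cpu and io

def read_total(path=PRESSURE_FILE, line_type="some"):
    """Return the total= stall time (in microseconds) for the given line."""
    with open(path) as f:
        for line in f:
            fields = line.split()
            if fields and fields[0] == line_type:
                return int(fields[-1].split("=", 1)[1])
    return 0

def stall_percentages(period=1.0):
    """Yield the stall percentage over each sampling period."""
    prev = read_total()
    while True:
        time.sleep(period)
        cur = read_total()
        # Stall delta (us) divided by elapsed walltime (us), as a percentage.
        yield 100.0 * (cur - prev) / (period * 1e6)
        prev = cur

if __name__ == "__main__":
    for pct in stall_percentages():
        print("some memory stall over the last second: %.2f%%" % pct)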
> These patches are against v4.17-rc4.
>
>  Documentation/accounting/psi.txt                 |  73 ++++
>  Documentation/cgroup-v2.txt                      |  18 +
>  arch/powerpc/platforms/cell/cpufreq_spudemand.c  |   2 +-
>  arch/powerpc/platforms/cell/spufs/sched.c        |   9 +-
>  arch/s390/appldata/appldata_os.c                 |   4 -
>  drivers/cpuidle/governors/menu.c                 |   4 -
>  fs/proc/loadavg.c                                |   3 -
>  include/linux/cgroup-defs.h                      |   4 +
>  include/linux/cgroup.h                           |  15 +
>  include/linux/delayacct.h                        |  23 +
>  include/linux/mmzone.h                           |   1 +
>  include/linux/page-flags.h                       |   5 +-
>  include/linux/psi.h                              |  52 +++
>  include/linux/psi_types.h                        |  84 ++++
>  include/linux/sched.h                            |  10 +
>  include/linux/sched/loadavg.h                    |  90 +++-
>  include/linux/sched/stat.h                       |  10 +-
>  include/linux/swap.h                             |   2 +-
>  include/trace/events/mmflags.h                   |   1 +
>  include/uapi/linux/taskstats.h                   |   6 +-
>  init/Kconfig                                     |  20 +
>  kernel/cgroup/cgroup.c                           |  45 +-
>  kernel/debug/kdb/kdb_main.c                      |   7 +-
>  kernel/delayacct.c                               |  15 +
>  kernel/fork.c                                    |   4 +
>  kernel/sched/Makefile                            |   1 +
>  kernel/sched/core.c                              |   3 +
>  kernel/sched/loadavg.c                           |  84 ----
>  kernel/sched/psi.c                               | 499 ++++++++++++++++++++++
>  kernel/sched/sched.h                             | 166 +++----
>  kernel/sched/stats.h                             |  91 +++-
>  mm/compaction.c                                  |   5 +
>  mm/filemap.c                                     |  27 +-
>  mm/huge_memory.c                                 |   1 +
>  mm/memcontrol.c                                  |   2 +
>  mm/migrate.c                                     |   2 +
>  mm/page_alloc.c                                  |  10 +
>  mm/swap_state.c                                  |   1 +
>  mm/vmscan.c                                      |  14 +
>  mm/vmstat.c                                      |   1 +
>  mm/workingset.c                                  | 113 +++--
>  tools/accounting/getdelays.c                     |   8 +-
>  42 files changed, 1279 insertions(+), 256 deletions(-)

Thanks,
Suren.