Date: Thu, 8 Aug 2019 18:32:28 +0200
From: Michal Hocko
To: ndrw.xf@redhazel.co.uk
Cc: Johannes Weiner, Suren Baghdasaryan, Vlastimil Babka, "Artem S. Tashkinov", Andrew Morton, LKML, linux-mm
Subject: Re: Let's talk about the elephant in the room - the Linux kernel's inability to gracefully handle low memory pressure
Message-ID: <20190808163228.GE18351@dhcp22.suse.cz>
References: <398f31f3-0353-da0c-fc54-643687bb4774@suse.cz> <20190806142728.GA12107@cmpxchg.org> <20190806143608.GE11812@dhcp22.suse.cz> <20190806220150.GA22516@cmpxchg.org> <20190807075927.GO11812@dhcp22.suse.cz> <20190807205138.GA24222@cmpxchg.org> <20190808114826.GC18351@dhcp22.suse.cz> <806F5696-A8D6-481D-A82F-49DEC1F2B035@redhazel.co.uk>
In-Reply-To: <806F5696-A8D6-481D-A82F-49DEC1F2B035@redhazel.co.uk>

On Thu 08-08-19 16:10:07, ndrw.xf@redhazel.co.uk wrote:
> On 8 August 2019 12:48:26 BST, Michal Hocko wrote:
> >> Per default, the OOM killer will engage after 15 seconds of at least
> >> 80% memory pressure. These values are tunable via sysctls
> >> vm.thrashing_oom_period and vm.thrashing_oom_level.
> >
> > As I've said earlier, I would be somewhat more comfortable with tuning
> > based on a kernel command line or module parameter, because it is less
> > of a stable API, and a potential future stall detector might be
> > completely independent of PSI and the currently exported metric. But I
> > can live with this because a period and a level sound quite generic.
>
> Would it be possible to reserve a fixed (configurable) amount of RAM for caches,

I am afraid there is nothing like that available, and I would even argue
it doesn't make much sense. What would you consider a cache? Kernel or
userspace reclaimable memory? What about other in-kernel memory users?
How would you set up such a limit, and how would you keep it reasonably
maintainable over different kernel releases as the memory footprint
changes over time?
Besides that, how does that differ from the existing reclaim mechanism?
Once your cache hits the limit, some sort of reclaim has to happen, and
then we are back to square one: the reclaim is making progress, but you
are effectively thrashing over the hot working set (e.g. code pages).

> and trigger OOM killer earlier, before most UI code is evicted from memory?

How does the kernel know that important memory has been evicted? Say,
for example, that your graphics stack is under pressure and has to drop
its internal caches. No processes have been swapped out, yet your UI
will be effectively frozen.

> In my use case, I am happy sacrificing e.g. 0.5GB and kill runaway
> tasks _before_ the system freezes. Potentially OOM killer would also
> work better in such conditions. I almost never work at close to full
> memory capacity, it's always a single task that goes wrong and brings
> the system down.

If you know which task that is, then you can put it into a memory cgroup
with a stricter memory limit and have it killed before the overall
system starts suffering.

> The problem with PSI sensing is that it works after the fact (after
> the freeze has already occurred). It is not very different from
> issuing SysRq-f manually on a frozen system, although it would still
> be a handy feature for batched tasks and remote access.

Not really. PSI gives you a metric that tells you how much time you
spend in memory reclaim, so you can start watching the system from a
lower utilization already.

--
Michal Hocko
SUSE Labs
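[To make the PSI point concrete: the metric Michal refers to is exported
in /proc/pressure/memory (Linux >= 4.20 with CONFIG_PSI), whose "full"
line reports the fraction of time all runnable tasks were stalled on
memory. A minimal userspace watchdog sketch follows; the helper name is
ours, and the 80%/15 s numbers merely echo the defaults quoted above,
they are not a kernel interface.]

```shell
# psi_full_avg10: extract the 10-second "full" stall average (a percent)
# from a PSI line such as:
#   full avg10=83.21 avg60=40.12 avg300=10.00 total=123456
# Only the /proc/pressure/memory format is a kernel interface; this
# parsing helper is illustrative.
psi_full_avg10() {
    printf '%s\n' "$1" | sed -n 's/^full avg10=\([0-9.]*\).*/\1/p'
}

# Illustrative watchdog loop (the kill policy is deliberately a stub):
#   while sleep 1; do
#       avg=$(psi_full_avg10 "$(grep '^full' /proc/pressure/memory)")
#       # ...count consecutive seconds with avg >= 80 and, after 15 of
#       # them, kill the largest task, mirroring the quoted defaults...
#   done
```

[Kernels from 5.2 onward additionally support PSI triggers: writing a
"full <stall-us> <window-us>" threshold to /proc/pressure/memory and
polling the fd for notifications, which avoids the sampling loop.]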
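[For reference, the memcg confinement Michal suggests looks roughly like
the following under cgroup v2. This is a configuration sketch, assuming
the v2 hierarchy is mounted at /sys/fs/cgroup and root access; the group
name "runaway" and the 512M figure are illustrative.]

```shell
# Create a group with a hard limit; allocations beyond memory.max
# trigger reclaim and, failing that, an OOM kill confined to the group.
mkdir -p /sys/fs/cgroup/runaway
echo 512M > /sys/fs/cgroup/runaway/memory.max

# Optional: memory.high throttles the group before the hard limit bites.
echo 384M > /sys/fs/cgroup/runaway/memory.high

# Move the current shell (and its future children) into the group, then
# start the suspect task from it.
echo $$ > /sys/fs/cgroup/runaway/cgroup.procs
```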