From: Suren Baghdasaryan
Date: Tue, 4 May 2021 18:26:30 -0700
Subject: Re: [RFC] memory reserve for userspace oom-killer
To: Shakeel Butt
Cc: Michal Hocko, Johannes Weiner, Roman Gushchin, Linux MM,
    Andrew Morton, Cgroups, David Rientjes, LKML, Greg Thelen,
    Dragos Sbirlea, Priya Duraisamy
X-Mailing-List: linux-kernel@vger.kernel.org

On Tue, May 4, 2021 at 5:37 PM Shakeel Butt wrote:
>
> On Wed, Apr 21, 2021 at 7:29 AM Michal Hocko wrote:
> >
> [...]
> > > > What if the pool is depleted?
> > >
> > > This would mean that either the estimate of the mempool size is bad
> > > or the oom-killer is buggy and leaking memory.
> > >
> > > I am open to any design direction for a mempool, or some other way
> > > to provide a notion of a memory guarantee to the oom-killer.
> >
> > OK, thanks for the clarification. There will certainly be hard
> > problems to sort out [1], but the overall idea makes sense to me and
> > sounds like a much better approach than an OOM-specific solution.
> >
> > [1] - how the pool is going to be replenished without hitting all the
> > potential reclaim problems (and thus dependencies on all other tasks,
> > directly or indirectly), yet without relying on background workers to
> > do it on the task's behalf without proper accounting, etc.
>
> I am currently deciding between two paths here:
>
> First, the mempool, exposed through either prctl or a new syscall.
> Users would need to trace their userspace oom-killer (or whatever
> their use case is) to find an appropriate mempool size, and would
> periodically refill the mempool if the state of the machine allows.
> The challenge here is finding a good value for the mempool size and
> coordinating the refills.
>
> Second is a mix of Roman's and Peter's suggestions, but much
> simplified: a very simple watchdog with a kill-list of processes; if
> userspace does not pet the watchdog within a specified time, the
> kernel kills every process on the kill-list. The challenge here is
> maintaining/updating the kill-list.
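To make the two quoted paths concrete, here is a rough userspace-side
sketch. Both interfaces are invented for illustration: PR_SET_MEMPOOL
and /sys/kernel/oom_watchdog do not exist in any kernel, they merely
stand in for whatever the proposals above would actually add.

  #include <fcntl.h>
  #include <stdio.h>
  #include <sys/prctl.h>
  #include <unistd.h>

  #define PR_SET_MEMPOOL 1000  /* hypothetical prctl command */

  int main(void)
  {
          /* Path 1: reserve a per-task memory pool. The size has to
           * come from tracing the killer's own worst-case allocations. */
          unsigned long pool_bytes = 2UL << 20;
          if (prctl(PR_SET_MEMPOOL, pool_bytes, 0, 0, 0))
                  perror("mempool reservation");

          /* Path 2: pet a kernel watchdog armed with a kill-list. */
          int wd = open("/sys/kernel/oom_watchdog", O_WRONLY);

          for (;;) {
                  /* ... poll PSI / memory.events, kill if needed ... */
                  if (wd >= 0 && write(wd, "1", 1) < 0)  /* pet it */
                          perror("watchdog pet");
                  sleep(1);  /* must stay well below the watchdog timeout */
          }
  }

Both hard parts show up even in the sketch: pool_bytes has to be
estimated somehow, and the petting interval has to beat the watchdog
timeout, which is exactly the wakeup-frequency concern below.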
IIUC, the watchdog solution is designed to identify cases where
oomd/lmkd get stuck allocating memory due to a memory shortage and
therefore can't feed the watchdog. In such a case the kernel goes
ahead and kills some processes to free up memory and unblock the
blocked process. Effectively this caps the time such a process stays
stuck at the duration of the watchdog timeout. If my understanding of
this proposal is correct, then I see the following downsides:

1. oomd/lmkd are still not prevented from getting stuck; the watchdog
only limits the duration of the blocked state. Delaying kills when
memory pressure is high, even for a short time, is very undesirable.
I think mempool reserves could address this issue better, if they can
always guarantee memory availability (not sure that's possible in
practice).

2. What would the performance overhead of this watchdog be? To keep
the time a process can stay blocked small enough, we would need quite
a short timeout, which means oomd/lmkd would have to wake up quite
often to feed the watchdog. Frequent wakeups on a battery-powered
system are not a good idea.

3. What if oomd/lmkd gets stuck for some memory-unrelated reason and
can't feed the watchdog? In that scenario the kernel would assume it
is stuck due to a memory shortage and would go on a killing spree. If
there were a sure way to identify that a process is stuck specifically
because of a memory shortage, this could work better.

4. Additional complexity of keeping the list of potential victims in
the kernel. Maybe we can simply reuse oom_score to choose the best
victims? (A sketch of reading it from userspace is in the P.S. below.)

Thanks,
Suren.

>
> I would prefer the direction which oomd and lmkd are open to adopt.
>
> Any suggestions?
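P.S. For the oom_score idea in point 4: the kernel already exports its
per-task badness heuristic through procfs, so ranking victims the same
way the in-kernel OOM killer does can be sketched from userspace like
this (/proc/<pid>/oom_score is an existing file; the helper itself is
just an illustration):

  #include <stdio.h>
  #include <sys/types.h>

  /* Read the kernel's badness score for a task; a higher score means
   * a more preferred OOM victim. Returns -1 on error. */
  static int read_oom_score(pid_t pid)
  {
          char path[64];
          int score = -1;
          FILE *f;

          snprintf(path, sizeof(path), "/proc/%d/oom_score", (int)pid);
          f = fopen(path, "r");
          if (f) {
                  if (fscanf(f, "%d", &score) != 1)
                          score = -1;
                  fclose(f);
          }
          return score;
  }

A kernel-side kill-list ordered by the same score would spare userspace
from maintaining its own ranking.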