Date: Fri, 29 Mar 2019 16:16:38 +0000
From: Catalin Marinas
To: Michal Hocko
Cc: Matthew Wilcox, Qian Cai, akpm@linux-foundation.org, cl@linux.com,
	penberg@kernel.org, rientjes@google.com, iamjoonsoo.kim@lge.com,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v4] kmemleak: survive in a low-memory situation
Message-ID: <20190329161637.GC48010@arrakis.emea.arm.com>
References: <20190327005948.24263-1-cai@lca.pw>
	<20190327084432.GA11927@dhcp22.suse.cz>
	<20190327172955.GB17247@arrakis.emea.arm.com>
	<20190327182158.GS10344@bombadil.infradead.org>
	<20190328145917.GC10283@arrakis.emea.arm.com>
	<20190329120237.GB17624@dhcp22.suse.cz>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To:
<20190329120237.GB17624@dhcp22.suse.cz>
User-Agent: Mutt/1.10.1 (2018-07-13)
X-Mailing-List: linux-kernel@vger.kernel.org

On Fri, Mar 29, 2019 at 01:02:37PM +0100, Michal Hocko wrote:
> On Thu 28-03-19 14:59:17, Catalin Marinas wrote:
> [...]
> > >From 09eba8f0235eb16409931e6aad77a45a12bedc82 Mon Sep 17 00:00:00 2001
> > From: Catalin Marinas
> > Date: Thu, 28 Mar 2019 13:26:07 +0000
> > Subject: [PATCH] mm: kmemleak: Use mempool allocations for kmemleak objects
> >
> > This patch adds mempool allocations for struct kmemleak_object and
> > kmemleak_scan_area as slightly more resilient than kmem_cache_alloc()
> > under memory pressure. The patch also masks out all the gfp flags passed
> > to kmemleak other than GFP_KERNEL|GFP_ATOMIC.
> 
> Using the mempool allocator is better than inventing our own
> implementation, but there is one thing to be slightly careful/worried
> about.
> 
> This allocator expects that somebody will refill the pool in a finite
> time. Most users are OK with that because objects in flight return to
> the pool in a relatively short time (think of an IO), but kmemleak is
> not guaranteed to comply with that AFAIU. Sure, ephemeral allocations
> happen all the time, so there should be some churn in the pool, but if
> we go to an extreme where there is a serious memory leak then I suspect
> we might get stuck here without any way forward. The page/slab
> allocator would eventually back off even though small allocations never
> fail, because a user context would get killed sooner or later, but
> there is no fatal_signal_pending backoff in the mempool alloc path.

We could improve the mempool code slightly to refill itself (from some
workqueue or during a mempool_alloc() which allows blocking), but it's
really just a best effort for a debug tool under OOM conditions. It may
be sufficient just to make the mempool size tunable (via
/sys/kernel/debug/kmemleak).
> Anyway, I believe this is a step in the right direction and, should the
> above ever materialize as a relevant problem, we can tune the mempool
> to back off for _some_ callers or do something similar.
> 
> Btw. there is a kmemleak_update_trace call in mempool_alloc; is this ok
> for the kmemleak allocation path?

It's not a problem, maybe only a small overhead from searching an rbtree
in kmemleak, but it cannot find anything since the kmemleak metadata is
not tracked. And this only happens if a normal allocation fails and an
existing object is taken from the pool. I thought about passing the
mempool back into kmemleak and checking whether it's one of the two
pools it uses, but concluded that it's not worth it.

-- 
Catalin