Date: Tue, 4 Jun 2013 11:55:14 +0200
From: Michal Hocko
To: David Rientjes
Cc: Andrew Morton, Johannes Weiner, KAMEZAWA Hiroyuki,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org, cgroups@vger.kernel.org
Subject: Re: [patch] mm, memcg: add oom killer delay
Message-ID: <20130604095514.GC31242@dhcp22.suse.cz>

On Mon 03-06-13 14:17:54, David Rientjes wrote:
> On Mon, 3 Jun 2013, Michal Hocko wrote:
> 
> > > What do you suggest when you read the "tasks" file and it returns
> > > -ENOMEM because kmalloc() fails because the userspace oom handler's
> > > memcg is also oom?
> > 
> > That would require that you track kernel allocations which is currently
> > done only for explicit caches.
> 
> That will not always be the case, and I think this could be a prerequisite
> patch for such support that we have internally. I'm not sure a userspace
> oom notifier would want to keep a preallocated buffer around that is
> mlocked in memory for all possible lengths of this file.

Well, an oom handler which allocates memory from the same restricted
memcg doesn't make much sense to me. And if all kmem allocations were
tracked, it would be almost impossible to implement a non-trivial
handler, because the handler's own kernel allocations would be charged
against the very limit it is trying to resolve.

> > > Obviously it's not a situation we want to get into, but unless you
> > > know that handler's exact memory usage across multiple versions,
> > > nothing else is sharing that memcg, and it's a perfect implementation,
> > > you can't guarantee it. We need to address real world problems that
> > > occur in practice.
> > 
> > If you really need to have such a guarantee then you can have a _global_
> > watchdog observing oom_control of all groups that provide such vague
> > requirements for oom user handlers.
> 
> The whole point is to allow the user to implement their own oom policy.

OK, maybe I just wasn't clear enough, or I am missing your point. Your
users _can_ implement and register their own oom handlers. But since your
requirements on the handler implementations are rather lax, you would also
run a global watchdog which sits on the oom_control of every group that is
allowed to have its own handler (all of them in your case, I guess) and
starts a (user-defined or global) timeout when it gets a notification. If
the group has been under oom for the whole timeout, the watchdog simply
flips memory.oom_control back and lets the kernel oom killer act until the
oom is settled (under_oom is 0).

Why wouldn't something like this work for your use case?
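To be concrete, here is a rough and untested sketch of what I mean,
written against the existing memory.oom_control + cgroup.event_control +
eventfd interface. The group path, the 10s grace period and the
re-disabling of the kernel oom killer once the oom has settled are just
example choices of mine, not anything the interface dictates:

/*
 * Rough sketch of a userspace oom watchdog for one memcg.
 * Assumptions (mine, not kernel requirements): the group to watch is
 * argv[1] (e.g. /sys/fs/cgroup/memory/foo), its own oom handler has
 * already written "1" to memory.oom_control, and the 10s grace period
 * is an arbitrary example value.
 */
#include <stdio.h>
#include <string.h>
#include <stdint.h>
#include <unistd.h>
#include <fcntl.h>
#include <limits.h>
#include <sys/eventfd.h>

#define GRACE_PERIOD_SECS 10

/* memory.oom_control reads as "oom_kill_disable <0|1>\nunder_oom <0|1>" */
static int under_oom(const char *oom_control)
{
	char buf[256];
	int fd = open(oom_control, O_RDONLY);
	ssize_t len;

	if (fd < 0)
		return -1;
	len = read(fd, buf, sizeof(buf) - 1);
	close(fd);
	if (len <= 0)
		return -1;
	buf[len] = '\0';
	return strstr(buf, "under_oom 1") ? 1 : 0;
}

/* writing "1" disables the kernel oom killer for the group, "0" re-enables it */
static void set_oom_control(const char *oom_control, const char *val)
{
	int fd = open(oom_control, O_WRONLY);

	if (fd < 0)
		return;
	if (write(fd, val, strlen(val)) < 0)
		perror(oom_control);
	close(fd);
}

int main(int argc, char **argv)
{
	char oom_control[PATH_MAX], event_control[PATH_MAX], reg[64];
	int ocfd, ecfd, efd;
	uint64_t count;

	if (argc != 2) {
		fprintf(stderr, "usage: %s <memcg directory>\n", argv[0]);
		return 1;
	}
	snprintf(oom_control, sizeof(oom_control), "%s/memory.oom_control", argv[1]);
	snprintf(event_control, sizeof(event_control), "%s/cgroup.event_control", argv[1]);

	ocfd = open(oom_control, O_RDONLY);
	ecfd = open(event_control, O_WRONLY);
	efd = eventfd(0, 0);
	if (ocfd < 0 || ecfd < 0 || efd < 0) {
		perror("setup");
		return 1;
	}

	/* register the eventfd for oom notifications on this group */
	snprintf(reg, sizeof(reg), "%d %d", efd, ocfd);
	if (write(ecfd, reg, strlen(reg)) < 0) {
		perror("cgroup.event_control");
		return 1;
	}

	for (;;) {
		/* blocks until the group hits an oom situation */
		if (read(efd, &count, sizeof(count)) != sizeof(count))
			break;

		/* give the group's own oom handler time to resolve it */
		sleep(GRACE_PERIOD_SECS);

		if (under_oom(oom_control) == 1) {
			/* handler is stuck: hand the group back to the kernel */
			set_oom_control(oom_control, "0");
			while (under_oom(oom_control) == 1)
				sleep(1);
			/* settled; let userspace handle the next oom again */
			set_oom_control(oom_control, "1");
		}
	}
	return 0;
}

Run one instance per group, or teach it to walk all groups which
registered a handler; either way you get the timeout semantics you are
asking for without any kernel change.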
> If the policy was completely encapsulated in kernel code, we don't need
> to ever disable the oom killer even with memory.oom_control. Users may
> choose to kill the largest process, the newest process, the oldest
> process, sacrifice children instead of parents, prevent forkbombs,
> implement their own priority scoring (which is what we do), kill the
> allocating task, etc.
> 
> To not merge this patch, I'd ask that you show an alternative that allows
> users to implement their own userspace oom handlers and not require admin
> intervention when things go wrong.

Hohmm, so you are insisting on putting something into the kernel that can
be implemented in userspace, just because that is more convenient for you
and your use case. That is not a good reason for accepting a feature.

To make this absolutely clear: I do understand your requirements, but you
haven't shown any _argument_ why the timeout you are proposing cannot be
implemented in userspace. I will not ack this without that reasoning.

And yes, we should make memcg oom handling less deadlock prone, and
Johannes' work in this thread is a good step forward.

-- 
Michal Hocko
SUSE Labs