Date: Tue, 10 Jul 2018 11:43:41 +0200
From: Michal Hocko
To: David Rientjes
Cc: Andrew Morton, Tetsuo Handa, linux-mm@kvack.org, LKML
Subject: Re: [PATCH] mm, oom: remove sleep from under oom_lock
Message-ID: <20180710094341.GD14284@dhcp22.suse.cz>
References: <20180709074706.30635-1-mhocko@kernel.org>

On Mon 09-07-18 15:49:53, David Rientjes wrote:
> On Mon, 9 Jul 2018, Michal Hocko wrote:
> 
> > From: Michal Hocko
> > 
> > Tetsuo has pointed out that since 27ae357fa82b ("mm, oom: fix concurrent
> > munlock and oom reaper unmap, v3") we have a strong synchronization
> > between the oom killer and the victim's exiting because both have to take
> > the oom_lock. Therefore the original heuristic to sleep for a short time
> > in out_of_memory no longer serves its original purpose.
> > 
> > Moreover Tetsuo has noticed that the short sleep can be more harmful
> > than actually useful. Hammering the system with many processes can lead
> > to starvation when the task holding the oom_lock blocks for a long
> > time (minutes) and prevents any further progress, because the
> > oom_reaper depends on the oom_lock as well.
> > 
> > Drop the short sleep from out_of_memory when we hold the lock. Keep the
> > sleep when the trylock fails to throttle the concurrent OOM paths a bit.
> > This should be solved in a more reasonable way (e.g. a sleep proportional
> > to the time spent in active reclaim etc.), but that is a much more
> > complex thing to achieve. This is a quick fixup to remove stale code.
> > 
> > Reported-by: Tetsuo Handa
> > Signed-off-by: Michal Hocko
> 
> This reminds me:
> 
> mm/oom_kill.c
> 
> 54) int sysctl_oom_dump_tasks = 1;
> 55) 
> 56) DEFINE_MUTEX(oom_lock);
> 57) 
> 58) #ifdef CONFIG_NUMA
> 
> Would you mind documenting oom_lock to specify what it's protecting?
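
Essentially it serializes out_of_memory() callers so that only one context
performs the oom killing at a time. The allocation path uses it roughly like
this (a simplified sketch rather than the exact upstream code; try_oom_kill()
is just an illustrative name for the caller):

#include <linux/mutex.h>
#include <linux/oom.h>
#include <linux/sched.h>

/*
 * Illustrative sketch only. Whoever wins the trylock runs the oom killer;
 * everybody else backs off with a short sleep instead of piling up
 * additional oom invocations for the same memory shortage.
 */
static bool try_oom_kill(struct oom_control *oc)
{
	bool killed;

	if (!mutex_trylock(&oom_lock)) {
		/* somebody else is handling the oom, throttle this path a bit */
		schedule_timeout_uninterruptible(1);
		return false;
	}

	killed = out_of_memory(oc);
	mutex_unlock(&oom_lock);

	return killed;
}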

What do you think about the following?

diff --git a/mm/oom_kill.c b/mm/oom_kill.c
index ed9d473c571e..32e6f7becb40 100644
--- a/mm/oom_kill.c
+++ b/mm/oom_kill.c
@@ -53,6 +53,14 @@ int sysctl_panic_on_oom;
 int sysctl_oom_kill_allocating_task;
 int sysctl_oom_dump_tasks = 1;
 
+/*
+ * Serializes oom killer invocations (out_of_memory()) from all contexts to
+ * prevent over-eager oom killing (e.g. when the oom killer is invoked
+ * from different domains).
+ *
+ * oom_killer_disable() relies on this lock to stabilize oom_killer_disabled
+ * and mark_oom_victim.
+ */
 DEFINE_MUTEX(oom_lock);
 
 #ifdef CONFIG_NUMA
-- 
Michal Hocko
SUSE Labs