Date: Thu, 5 Apr 2018 16:51:59 +0200
From: Michal Hocko
To: Joel Fernandes
Cc: Steven Rostedt, LKML, Zhaoyang Huang, Ingo Molnar,
	kernel-patch-test@lists.linaro.org, Andrew Morton,
	"open list:MEMORY MANAGEMENT", Vlastimil Babka
Subject: Re: [PATCH] ring-buffer: Add set/clear_current_oom_origin() during allocations
Message-ID: <20180405145159.GM6312@dhcp22.suse.cz>
References: <20180404115310.6c69e7b9@gandalf.local.home>
	<20180404120002.6561a5bc@gandalf.local.home>
	<20180404121326.6eca4fa3@gandalf.local.home>
In-Reply-To:

On Wed 04-04-18 16:59:18, Joel Fernandes wrote:
> Hi Steve,
>
> On Wed, Apr 4, 2018 at 9:18 AM, Joel Fernandes wrote:
> > On Wed, Apr 4, 2018 at 9:13 AM, Steven Rostedt wrote:
> [..]
> >>> Also, I agree with the new patch and its nice idea to do that.
> >>
> >> Thanks, want to give it a test too?
>
> With the latest tree and the below diff, I can still OOM-kill a victim
> process doing a large buffer_size_kb write.
>
> I pulled your ftrace/core and added this:
>
> +	/*
> 	i = si_mem_available();
> 	if (i < nr_pages)
> 		return -ENOMEM;
> +	*/
>
> Here's a run in Qemu with 4 cores and 1GB total memory:
>
> bash-4.3# ./m -m 1M &
> [1] 1056
> bash-4.3# echo 10000000 > /d/tracing/buffer_size_kb
> [   33.213988] Out of memory: Kill process 1042 (bash) score 1712050900 or sacrifice child
> [   33.215349] Killed process 1056 (m) total-vm:9220kB, anon-rss:7564kB, file-rss:4kB, shmem-rss:640kB

OK, so the reason your memory hog gets killed is that your echo is a
shell built-in, so we properly select bash as an oom_origin, but then
another clever heuristic jumps in and tries to reduce the damage by
sacrificing a child process instead. And your memory hog runs as a
child of that same bash session.

I cannot say I would love this heuristic. In fact I would really love
to bury it deep under the ground. But that is a harder sell than it
might seem. Anyway, is your testing scenario really representative
enough to care? Does the buffer_size_kb updater run in the same
process as any large memory process?

> bash: echo: write error: Cannot allocate memory
> [1]+  Killed                  ./m -m 1M

-- 
Michal Hocko
SUSE Labs
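[Editor's note] The diff Joel comments out above is the ring buffer's up-front availability check. As a rough userspace sketch of that guard (not the kernel code itself — `mem_available_pages` is a mocked stand-in for `si_mem_available()`, and `ring_buffer_precheck` is an invented name):

```c
#include <errno.h>

/* Mocked stand-in for the kernel's si_mem_available(): an estimate of
 * how many pages can be allocated without heavy reclaim. */
static long mem_available_pages = 1024;

static long si_mem_available(void)
{
    return mem_available_pages;
}

/* Sketch of the guard Joel commented out for his test: fail fast with
 * -ENOMEM when the request clearly cannot be satisfied, instead of
 * allocating page by page until the OOM killer fires. */
static int ring_buffer_precheck(unsigned long nr_pages)
{
    if (si_mem_available() < (long)nr_pages)
        return -ENOMEM;
    return 0;
}
```

Commenting this check out (as in the test above) forces every oversized `buffer_size_kb` write down the slow allocation path, which is what makes the OOM killer reachable at all.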
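[Editor's note] Michal's explanation can be modeled in a few lines: marking a task as oom_origin makes it the preferred victim, but the "sacrifice child" heuristic then redirects the kill to a child with its own address space — which is exactly why `m` (a child of the bash running the echo) dies. A toy model with invented struct and function names, not the actual oom_kill.c logic:

```c
#include <limits.h>
#include <stdbool.h>
#include <stddef.h>

/* Toy task model; field and type names are illustrative only. */
struct task {
    const char *comm;
    long badness;        /* simplified oom score */
    bool oom_origin;     /* as if set_current_oom_origin() had been called */
    struct task *child;  /* a single child with its own mm, if any */
};

/* An oom_origin task is treated as the worst offender regardless of size. */
static long oom_score(const struct task *t)
{
    return t->oom_origin ? LONG_MAX : t->badness;
}

/* Pick the higher-scoring of two tasks, then apply the "sacrifice child"
 * heuristic Michal describes: redirect the kill to the victim's child
 * to limit the damage. */
static const struct task *oom_select(const struct task *a, const struct task *b)
{
    const struct task *victim = oom_score(a) >= oom_score(b) ? a : b;
    if (victim->child)
        victim = victim->child;
    return victim;
}
```

With bash marked as oom_origin and the memory hog `m` as its child, the selector picks bash and then sacrifices `m` — matching the "Kill process 1042 (bash) ... or sacrifice child" / "Killed process 1056 (m)" lines in Joel's log.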