Date: Fri, 19 Jan 2018 09:20:46 +0100
From: Michal Hocko
To: Eric Anholt
Cc: Andrey Grodzovsky, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    dri-devel@lists.freedesktop.org, amd-gfx@lists.freedesktop.org,
    Christian.Koenig@amd.com
Subject: Re: [RFC] Per file OOM badness
Message-ID: <20180119082046.GL6584@dhcp22.suse.cz>
References: <1516294072-17841-1-git-send-email-andrey.grodzovsky@amd.com>
 <20180118170006.GG6584@dhcp22.suse.cz>
 <20180118171355.GH6584@dhcp22.suse.cz>
 <87k1wfgcmb.fsf@anholt.net>
In-Reply-To: <87k1wfgcmb.fsf@anholt.net>
On Thu 18-01-18 12:01:32, Eric Anholt wrote:
> Michal Hocko writes:
>
> > On Thu 18-01-18 18:00:06, Michal Hocko wrote:
> >> On Thu 18-01-18 11:47:48, Andrey Grodzovsky wrote:
> >> > Hi, this series is a revised version of an RFC sent by Christian König
> >> > a few years ago. The original RFC can be found at
> >> > https://lists.freedesktop.org/archives/dri-devel/2015-September/089778.html
> >> >
> >> > This is the same idea; I've just addressed his concern from the original RFC
> >> > and switched to a callback into file_ops instead of a new member in struct file.
> >>
> >> Please add the full description to the cover letter and do not make
> >> people hunt for links.
> >>
> >> Here is the original cover letter text
> >> : I'm currently working on the issue that when device drivers allocate memory on
> >> : behalf of an application, the OOM killer usually doesn't know about it unless
> >> : the application also gets this memory mapped into its address space.
> >> :
> >> : This is especially annoying for graphics drivers, where a lot of the VRAM
> >> : usually isn't CPU accessible and so doesn't make sense to map into the
> >> : address space of the process using it.
> >> :
> >> : The problem now is that when an application starts to use a lot of VRAM, those
> >> : buffer objects sooner or later get swapped out to system memory, but when we
> >> : now run into an out-of-memory situation the OOM killer obviously doesn't know
> >> : anything about that memory and so usually kills the wrong process.
> >
> > OK, but how do you attribute that memory to a particular OOM-killable
> > entity? And how do you actually enforce that those resources get freed
> > on the OOM killer action?
> >
> >> : The following set of patches tries to address this problem by introducing a per
> >> : file OOM badness score, which device drivers can use to give the OOM killer a
> >> : hint how many resources are bound to a file descriptor so that it can make
> >> : better decisions about which process to kill.
> >
> > But files are not killable, and they can be shared... In other words this
> > doesn't help the OOM killer make an educated guess at all.
>
> Maybe some more context would help the discussion?
>
> The struct file in patch 3 is the DRM fd. That's effectively "my
> process's interface to talking to the GPU", not "a single GPU resource".
> Once that file is closed, all of the process's private, idle GPU buffers
> will be freed immediately (this will be most of their allocations), and
> some will be freed once the GPU completes some work (this will be most
> of the rest of their allocations).
>
> Some GEM BOs won't be freed just by closing the fd, if they've been
> shared between processes. Those are usually about 8-24MB total in a
> process, rather than the GBs that modern apps use (or that our testcases
> like to allocate, thus triggering oomkilling of the test harness instead
> of the offending testcase...)
>
> Even if we just had the private+idle buffers accounted in OOM
> badness, that would be a huge step forward in system reliability.

OK, in that case I would propose a different approach. We already have
rss_stat, so why don't we simply add a new counter there, MM_KERNELPAGES,
and consider it in oom_badness()? The rule would be that such memory is
bound to the process's lifetime. I guess we will find more users for this
later.
-- 
Michal Hocko
SUSE Labs
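
P.S. For reference, a rough and untested sketch of what the rss_stat idea
above could look like. MM_KERNELPAGES is only a working name, the
drm_account_pages() helper is a made-up example of a driver charging its
allocations, and the badness calculation is heavily simplified compared to
the real oom_badness(); only the existing counter names and the
add_mm_counter()/get_mm_counter()/get_mm_rss() helpers are taken from the
current kernel.

/* include/linux/mm_types_task.h: add the new counter next to the
 * existing rss_stat members. */
enum {
	MM_FILEPAGES,	/* resident pages of file mappings */
	MM_ANONPAGES,	/* resident anonymous pages */
	MM_SWAPENTS,	/* anonymous swap entries */
	MM_SHMEMPAGES,	/* resident shared memory pages */
	MM_KERNELPAGES,	/* pages pinned by a driver on behalf of this mm */
	NR_MM_COUNTERS
};

/* A driver would charge/uncharge its allocations against the mm that
 * owns the fd; hypothetical helper, pass a negative value to uncharge. */
static void drm_account_pages(struct mm_struct *mm, long npages)
{
	add_mm_counter(mm, MM_KERNELPAGES, npages);
}

/* mm/oom_kill.c: fold the counter into the badness heuristic
 * (simplified; the real oom_badness() takes more arguments and applies
 * oom_score_adj, page table overhead, etc.). */
static unsigned long oom_badness_points(struct mm_struct *mm)
{
	return get_mm_rss(mm) +
	       get_mm_counter(mm, MM_SWAPENTS) +
	       get_mm_counter(mm, MM_KERNELPAGES);
}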