Date: Wed, 13 Jun 2018 15:29:44 +0200
From: Michal Hocko
To: Tetsuo Handa
Cc: David Rientjes, Andrew Morton, linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [rfc patch] mm, oom: fix unnecessary killing of additional processes
Message-ID: <20180613132944.GL13364@dhcp22.suse.cz>
References: <20180525072636.GE11881@dhcp22.suse.cz> <20180528081345.GD1517@dhcp22.suse.cz> <20180531063212.GF15278@dhcp22.suse.cz> <20180601074642.GW15278@dhcp22.suse.cz> <20180605085707.GV19202@dhcp22.suse.cz> <56138495-fd91-62f8-464a-db9960bfeb28@i-love.sakura.ne.jp>
In-Reply-To: <56138495-fd91-62f8-464a-db9960bfeb28@i-love.sakura.ne.jp>
User-Agent: Mutt/1.9.5 (2018-04-13)
X-Mailing-List:
linux-kernel@vger.kernel.org

On Wed 13-06-18 22:20:49, Tetsuo Handa wrote:
> On 2018/06/05 17:57, Michal Hocko wrote:
> >> For this reason, we see testing harnesses often oom killed immediately
> >> after running a unittest that stresses reclaim or compaction by inducing a
> >> system-wide oom condition. The harness spawns the unittest which spawns
> >> an antagonist memory hog that is intended to be oom killed. When memory
> >> is mlocked or there are a large number of threads faulting memory for the
> >> antagonist, the unittest and the harness itself get oom killed because the
> >> oom reaper sets MMF_OOM_SKIP; this ends up happening a lot on powerpc.
> >> The memory hog has mm->mmap_sem readers queued ahead of a writer that is
> >> doing mmap() so the oom reaper can't grab the sem quickly enough.
> >
> > How come the writer doesn't back off? mmap paths should be taking the
> > exclusive mmap sem in a killable sleep, so it should back off. Or is the
> > holder of the lock deep inside the mmap path doing something else and not
> > backing out with the exclusive lock held?
>
> Here is an example where the writer doesn't back off.
>
> http://lkml.kernel.org/r/20180607150546.1c7db21f70221008e14b8bb8@linux-foundation.org
>
> down_write_killable(&mm->mmap_sem) does nothing but increase the probability
> of backing off successfully. There is no guarantee that the owner of that
> exclusive mmap sem will not be blocked by other unkillable waits.

But we are talking about the mmap() path here. Sure, there are other paths
that might need to back off while the lock is held, and those should be
addressed if possible, but that is not really related to what David wrote
above and what I was trying to understand.
-- 
Michal Hocko
SUSE Labs