To: mhocko@kernel.org, akpm@linux-foundation.org
Cc: torvalds@linux-foundation.org, hannes@cmpxchg.org, mgorman@suse.de,
    rientjes@google.com, hillf.zj@alibaba-inc.com,
    kamezawa.hiroyu@jp.fujitsu.com, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org
Subject: Re: [PATCH 0/3] OOM detection rework v4
From: Tetsuo Handa
Date: Mon, 28 Dec 2015 23:13:31 +0900
Message-Id: <201512282313.DHE87075.OSLJOFOtMVQHFF@I-love.SAKURA.ne.jp>
In-Reply-To: <201512282108.EDI82328.OHFLtVJOSQFMFO@I-love.SAKURA.ne.jp>
References: <1450203586-10959-1-git-send-email-mhocko@kernel.org>
            <201512242141.EAH69761.MOVFQtHSFOJFLO@I-love.SAKURA.ne.jp>
            <201512282108.EDI82328.OHFLtVJOSQFMFO@I-love.SAKURA.ne.jp>

Tetsuo Handa wrote:
> Tetsuo Handa wrote:
> > I got OOM killers while running heavy disk I/O (extracting kernel source,
> > running lxr's genxref command). (Environ: 4 CPUs / 2048MB RAM / no swap / XFS)
> > Do you think these OOM killers are reasonable? Too weak against fragmentation?
>
> Since I cannot establish the workload that caused December 24's natural OOM
> killers, I used the following stressor for generating a similar situation.
>

I have come to feel that I am observing a different problem, one which is
currently hidden behind the "too small to fail" memory-allocation rule.
That is, tasks requesting order > 0 pages continuously lose the competition
when tasks requesting order = 0 pages dominate, because reclaimed pages are
stolen by tasks requesting order = 0 pages before they can be combined into
order > 0 pages (or perhaps order > 0 pages are immediately split back into
order = 0 pages by tasks requesting order = 0 pages).

Currently, order <= PAGE_ALLOC_COSTLY_ORDER allocations implicitly retry
unless chosen by the OOM killer. Therefore, even if a task requesting
order = 2 pages loses the competition to tasks requesting order = 0 pages,
the order = 2 allocation request is implicitly retried and the OOM killer
is not invoked (though there is the problem that tasks requesting order > 0
allocations will stall as long as tasks requesting order = 0 pages dominate).

But this patchset introduces a limit of 16 retries. Thus, if a task
requesting order = 2 pages loses the competition 16 times to tasks
requesting order = 0 pages, it invokes the OOM killer. To avoid the OOM
killer, we need to make sure that pages reclaimed for order > 0 allocations
are not stolen by tasks requesting order = 0 allocations.

Is my feeling plausible?
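
To make the retry behaviour I am describing concrete, here is a minimal
userspace sketch of the policy as I understand it. The helper names and the
constant MAX_NOPROGRESS_RETRIES are made up for illustration only; they are
not taken from the actual patchset.

/*
 * Illustrative userspace sketch, not kernel code. It only models the
 * decision "keep retrying vs. invoke the OOM killer" for a non-costly
 * allocation that makes no progress because every reclaimed page is
 * taken by concurrent order-0 requests.
 */
#include <stdbool.h>
#include <stdio.h>

#define PAGE_ALLOC_COSTLY_ORDER 3   /* kernel constant: orders above this are "costly" */
#define MAX_NOPROGRESS_RETRIES  16  /* the 16-retry limit discussed above (assumed name) */

/*
 * Old behaviour: a non-costly (order <= PAGE_ALLOC_COSTLY_ORDER) request
 * keeps retrying forever unless the task itself is chosen by the OOM killer.
 */
static bool should_retry_old(unsigned int order, unsigned int retries)
{
	(void)retries;
	return order <= PAGE_ALLOC_COSTLY_ORDER;
}

/*
 * New behaviour (as I understand the patchset): even a non-costly request
 * gives up and invokes the OOM killer after 16 retries without progress.
 */
static bool should_retry_new(unsigned int order, unsigned int retries)
{
	if (order > PAGE_ALLOC_COSTLY_ORDER)
		return false;
	return retries < MAX_NOPROGRESS_RETRIES;
}

int main(void)
{
	unsigned int retries;

	/* An order-2 request that loses every round to order-0 requests. */
	for (retries = 0; retries <= MAX_NOPROGRESS_RETRIES; retries++)
		printf("retry %2u: old policy: %s   new policy: %s\n",
		       retries,
		       should_retry_old(2, retries) ? "keep retrying" : "invoke OOM killer",
		       should_retry_new(2, retries) ? "keep retrying" : "invoke OOM killer");
	return 0;
}

With the old policy the order-2 request simply stalls (retries forever)
while order-0 requests dominate; with the 16-retry limit it invokes the
OOM killer instead. That difference is exactly what I think I am observing.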