Subject: Re: [PATCH mm] vmalloc: back off when the current task is OOM-killed
To: Tetsuo Handa, Andrew Morton
Cc: Michal Hocko, Johannes Weiner, Vladimir Davydov, cgroups@vger.kernel.org,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org, kernel@openvz.org,
    "Uladzislau Rezki (Sony)"
References: <20210919163126.431674722b8db218453dc18c@linux-foundation.org>
From: Vasily Averin
Date: Mon, 20 Sep 2021 13:59:35 +0300
List-ID: linux-kernel@vger.kernel.org

On 9/20/21 4:22 AM, Tetsuo Handa wrote:
> On 2021/09/20 8:31, Andrew Morton wrote:
>> On Fri, 17 Sep 2021 11:06:49 +0300 Vasily Averin wrote:
>>
>>> A huge vmalloc allocation on a heavily loaded node can lead to a
>>> global memory shortage. A task calling vmalloc can have the worst
>>> badness score and be chosen by the OOM killer; however, the received
>>> fatal signal and the OOM-victim mark do not interrupt the allocation
>>> cycle. vmalloc will continue allocating pages over and over again,
>>> exacerbating the crisis and consuming the memory freed up by other
>>> killed tasks.
>>>
>>> This patch allows the OOM killer to break the vmalloc cycle, makes
>>> OOM handling more effective and helps avoid a host panic.
>>>
>>> Unfortunately, it is not 100% safe. A previous attempt to break the
>>> vmalloc cycle was reverted by commit b8c8a338f75e ("Revert "vmalloc:
>>> back off when the current task is killed"") because some vmalloc
>>> callers did not handle failures properly. The issues found then were
>>> resolved; however, there may be other similar places.
>>
>> Well that was lame of us.
>>
>> I believe that at least one of the kernel testbots can utilize fault
>> injection. If we were to wire up vmalloc (as we have done with slab
>> and pagealloc) then this will help to locate such buggy vmalloc
>> callers.

Andrew, could you please clarify how we can do it?
Do you mean we can use the existing allocation fault injection
infrastructure to trigger this kind of issue? Unfortunately, I found no
way to reach this goal: it allows emulating single faults with a small
probability, but that is not enough, because we need to completely
disable all vmalloc allocations. I tried to extend the fault injection
infrastructure, but found that it is not trivial. That is why I added a
direct fatal_signal_pending() check to my patch (a rough sketch of the
idea is at the end of this mail).

> __alloc_pages_bulk() has three callers.
>
> alloc_pages_bulk_list() => No in-tree users.
>
> alloc_pages_bulk_array() => Used by xfs_buf_alloc_pages(),
> __page_pool_alloc_pages_slow(), svc_alloc_arg().
>
> xfs_buf_alloc_pages() => Might retry forever until all pages are
> allocated (i.e. effectively __GFP_NOFAIL). This patch can cause an
> infinite loop problem.

You are right, I missed that. However, __alloc_pages_bulk() can return
no new pages even without my patch:
- due to fault injection inside prepare_alloc_pages(),
- if __rmqueue_pcplist() returns NULL and the array already had some
  assigned pages,
- if both __rmqueue_pcplist() and the following __alloc_pages(0) fail
  to get any page.

On the other hand, I cannot say it is a 100% xfs-related issue: it
looks strange, but xfs still has some chance to get a page after a few
attempts. So I think I can change the 'break' to 'goto failed_irq',
call __alloc_pages(0) and return at least one page; this seems to be
handled correctly by all callers too (see the second sketch at the end
of this mail).

> __page_pool_alloc_pages_slow() => Will not retry if allocation
> failed. This patch might help.
>
> svc_alloc_arg() => Will not retry if signal pending. This patch might
> help only if allocating a lot of pages.
>
> alloc_pages_bulk_array_node() => Used by vm_area_alloc_pages().
>
> vm_area_alloc_pages() => Used by __vmalloc_area_node() from
> __vmalloc_node_range() from vmalloc functions. Needs !__GFP_NOFAIL
> check?

The comments describing __vmalloc_node() and kvmalloc() state that
__GFP_NOFAIL is not supported, and I did not find any other callers
using this flag.
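
For illustration only, here is a minimal sketch of where such a
back-off could sit in the vmalloc page-allocation loop. This is not
the actual mm/vmalloc.c code and not the patch itself; the function
name, the 100-page request size and the __GFP_NOFAIL exclusion are
assumptions made for the example.

#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/sched/signal.h>

/*
 * Sketch: simplified stand-in for the order-0 part of the vmalloc
 * page-allocation loop, showing where the back-off check could go.
 */
static unsigned int vmalloc_pages_sketch(gfp_t gfp, int nid,
					 unsigned int nr_pages,
					 struct page **pages)
{
	unsigned int nr_allocated = 0;

	while (nr_allocated < nr_pages) {
		unsigned int nr_request, nr;

		/*
		 * The current task was chosen by the OOM killer: stop
		 * asking for more pages (unless the caller demanded
		 * __GFP_NOFAIL semantics) and let the vmalloc caller
		 * see the failure and free what was already allocated.
		 */
		if (fatal_signal_pending(current) && !(gfp & __GFP_NOFAIL))
			break;

		nr_request = min(100U, nr_pages - nr_allocated);
		nr = alloc_pages_bulk_array_node(gfp, nid, nr_request,
						 pages + nr_allocated);
		nr_allocated += nr;
		cond_resched();

		/* The bulk allocator could not fill the request: give up. */
		if (nr != nr_request)
			break;
	}
	return nr_allocated;
}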
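
And a second, purely conceptual sketch of the __alloc_pages_bulk()
fallback mentioned above. This is not the real mm/page_alloc.c code:
the per-cpu list fast path is collapsed into a hypothetical
pcp_alloc_one() helper and the pcp locking is omitted. The point is
only that, instead of breaking out with zero new pages, the failure
path falls back to a regular order-0 __alloc_pages() call, so callers
that loop until all pages are allocated (e.g. xfs_buf_alloc_pages())
keep making forward progress.

#include <linux/gfp.h>
#include <linux/mm.h>

/* Hypothetical stand-in for the per-cpu list fast path. */
static struct page *pcp_alloc_one(gfp_t gfp);

/*
 * Sketch: on fast-path failure, fall through to a single regular
 * allocation instead of returning with no new pages.
 */
static unsigned long bulk_alloc_sketch(gfp_t gfp, int preferred_nid,
				       nodemask_t *nodemask,
				       unsigned long nr_pages,
				       struct page **page_array)
{
	unsigned long nr_populated = 0;
	struct page *page;

	while (nr_populated < nr_pages) {
		page = pcp_alloc_one(gfp);
		if (!page)
			goto failed;	/* was: break */
		page_array[nr_populated++] = page;
	}
	return nr_populated;

failed:
	/* Guarantee at least one page so the caller makes progress. */
	page = __alloc_pages(gfp, 0, preferred_nid, nodemask);
	if (page)
		page_array[nr_populated++] = page;
	return nr_populated;
}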